Setting Up Server Farms for Load Balancing with IIS ARR
Jul 24, 2025

The core steps for building a server farm with IIS ARR for load balancing are: 1. Install and enable IIS ARR; 2. Create a server farm and add the backend nodes; 3. Choose a load-balancing algorithm and configure health-check policies; 4. Configure URL Rewrite rules to forward traffic. First, install the ARR module and enable its proxy function so it can act as a reverse proxy; also make sure the URL Rewrite module is installed. Then create a server farm in IIS Manager, add the backend server addresses, and confirm that each node's status is healthy. Next, choose a suitable load-balancing algorithm, such as round robin, weighted round robin, or least current requests, and customize the health-check path, timeout, failure threshold, and check interval to improve stability. Finally, configure URL Rewrite rules to match requests and forward them to the farm, preserving the original Host header where necessary to avoid backend errors. Throughout the process, pay attention to firewall settings, the availability of the health-check path, and the correctness of the rewrite rules.
Building a server farm for load balancing with IIS ARR really comes down to a few steps: installing and configuring ARR, creating the server farm, choosing a suitable load-balancing algorithm, and adding health checks. None of these operations is complicated, but the details are easy to get wrong, so let's walk through them from a practical deployment perspective.

Install and enable IIS ARR
First, confirm that Application Request Routing (ARR) is installed on your server. ARR is not part of a default IIS installation and is not a role feature in Server Manager; it is a separate extension that you download from Microsoft (for example via the Web Platform Installer or the standalone installer) after the IIS web server role is in place.

After the installation completes, open the server node in IIS Manager, double-click the "Application Request Routing Cache" feature, open "Server Proxy Settings" in the Actions pane, and tick "Enable proxy". This step is critical: without it, ARR cannot work as a reverse proxy.
In addition, ARR depends on the URL Rewrite module, so make sure the URL Rewrite extension is installed as well; otherwise you will run into problems later when configuring the forwarding rules.
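Once the proxy is enabled through the UI, the setting is persisted in applicationHost.config. As a rough sketch (attribute names follow ARR's system.webServer/proxy schema; verify them against your installed version), the relevant fragment looks like:

```xml
<!-- applicationHost.config (server level) -->
<!-- Written by IIS Manager when "Enable proxy" is ticked under
     Application Request Routing Cache > Server Proxy Settings -->
<system.webServer>
  <proxy enabled="true"
         httpVersion="PassThrough"
         reverseRewriteHostInResponseHeaders="true" />
</system.webServer>
```

Editing the file by hand is rarely necessary; the fragment is shown mainly so you can confirm in configuration what the UI checkbox actually changed.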

Create a server farm and add backend nodes
There is a "Server Farms" node near the bottom of the tree on the left side of IIS Manager. Right-click it and choose "Create Server Farm" to create a new farm, giving it a name such as app-servers.
After creation, right-click the server farm and select "Add Servers", then enter the address of each backend server you want to balance, either as an IP address or a host name. After each server is added, ARR will attempt a health check by default.
Note that each backend server's status may show as "online" or "offline" here. If a status looks wrong, first check network connectivity and whether the target server responds to requests normally.
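The farm definition created in the UI ends up in the &lt;webFarms&gt; section of applicationHost.config. A minimal sketch, where the farm name app-servers and the addresses 192.168.1.10/11 are illustrative placeholders rather than values from this article:

```xml
<!-- applicationHost.config: a farm named app-servers with two nodes -->
<webFarms>
  <webFarm name="app-servers" enabled="true">
    <server address="192.168.1.10" enabled="true" />
    <server address="192.168.1.11" enabled="true" />
  </webFarm>
</webFarms>
```

Each &lt;server&gt; entry corresponds to one row in the farm's Servers view, and disabling a node in the UI simply flips its enabled attribute to false.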
Set up load balancing algorithms and health check strategies
ARR provides several commonly used load-balancing algorithms:
- Round Robin: the most common choice; distributes requests evenly across servers.
- Weighted Round Robin: suitable when servers have different capacities.
- Least Current Requests: sends new requests to the server with the fewest active requests.
You can select the appropriate method in the server farm's "Load Balance" settings.
For health checks, it is best not to rely on the default settings. By default ARR probes the backend's root path / every few seconds, which may add unnecessary load or hit a page that returns a non-200 response. It is better to define a dedicated health-check path such as /healthcheck, and set a reasonable failure threshold and interval.
For example:
- Set the health-check path to /healthcheck
- Set the response timeout to 5 seconds
- Mark a server offline after 3 consecutive failures
- Set the check interval to 30 seconds
This keeps the farm stable without putting much extra pressure on the backends.
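In configuration terms, both settings live under the farm's &lt;applicationRequestRouting&gt; element. A hedged sketch of the values suggested above (attribute names follow ARR's applicationHost.config schema; double-check them on your version, and note that the failure threshold is set through the Health Test dialog in the UI):

```xml
<!-- Inside the <webFarm> element in applicationHost.config -->
<applicationRequestRouting>
  <!-- WeightedRoundRobin is ARR's default; with equal per-server
       weights it behaves like plain round robin -->
  <loadBalancing algorithm="WeightedRoundRobin" />
  <!-- Probe a dedicated endpoint every 30 s with a 5 s timeout,
       accepting only HTTP 200 as healthy -->
  <healthCheck url="http://app-servers/healthcheck"
               interval="00:00:30"
               timeout="00:00:05"
               statusCodes="200" />
</applicationRequestRouting>
```

The host name in the healthCheck url is the farm name, which ARR resolves to each member server in turn when probing.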
Configure URL rewrite rules for traffic forwarding
The final step is to make ARR forward requests to the farm correctly. Open "URL Rewrite" and add an inbound rule (when you create a farm, IIS Manager also offers to generate this routing rule for you automatically).
The basic approach is:
- Match a path pattern, such as *
- Choose the "Route to Server Farm" action type
- Select the server farm you just created
Note that the match pattern should be adjusted to your actual needs. For example, to forward only /api/* to a specific farm, set the pattern to ^api/(.*) using regular-expression syntax, and add the corresponding path-joining logic in the rewrite action.
In addition, the original Host header sometimes needs to be preserved. In that case, set the forwarded Host field to {HTTP_HOST} when rewriting the HTTP request headers; otherwise the backend service may return an error because of a host-name mismatch.
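When IIS Manager generates the routing rule for a farm, it writes a global rewrite rule similar to the following into applicationHost.config; the rule names, the ^api/(.*) variant, and the farm name app-servers are illustrative. Preserving the client's Host header can alternatively be done once, server-wide, via the proxy section:

```xml
<!-- applicationHost.config: route traffic to the app-servers farm -->
<rewrite>
  <globalRules>
    <!-- Catch-all rule (wildcard syntax), as generated by IIS Manager -->
    <rule name="ARR_app-servers_loadbalance" patternSyntax="Wildcard"
          stopProcessing="true">
      <match url="*" />
      <action type="Rewrite" url="http://app-servers/{R:0}" />
    </rule>
    <!-- Variant: only /api/* goes to the farm (regex syntax) -->
    <rule name="api-to-farm" stopProcessing="true">
      <match url="^api/(.*)" />
      <action type="Rewrite" url="http://app-servers/api/{R:1}" />
    </rule>
  </globalRules>
</rewrite>
<!-- Server-wide alternative to rewriting HTTP_HOST per rule -->
<system.webServer>
  <proxy enabled="true" preserveHostHeader="true" />
</system.webServer>
```

Rewriting to http://&lt;farm-name&gt;/... is what hands the request to ARR's load balancer rather than to a literal host; the farm name acts as the routing target.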
That covers the steps. The process looks simple, but common problems in real deployments include firewall interception, blocked health-check paths, and URL Rewrite rules not taking effect. Troubleshoot step by step and these can generally be resolved.