Enabling client-side and upstream keepalive connections can significantly improve Nginx performance. 1. On the client side, keepalive_timeout and keepalive_requests control idle time and the per-connection request limit. 2. On the upstream side, configure a keepalive pool in the upstream block and use proxy_http_version 1.1 together with proxy_set_header Connection "" to enable connection reuse. 3. Misconfigured, connections cannot be reused or resources are exhausted; correctly tuned, keepalive can cut latency by 20–50% and reduce CPU overhead.
When optimizing web server performance with Nginx, keepalive connections are one of the most impactful settings you can tune. They reduce latency, lower the CPU overhead of repeated TCP handshakes, and improve overall throughput for both clients and upstream servers.

Here's what you need to know:
What Are Keepalive Connections?
Keepalive keeps a TCP connection open after a request/response cycle so that multiple HTTP requests can be sent over the same connection instead of opening a new one each time. This avoids the overhead of repeated TCP and TLS handshakes.
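This reuse is easy to observe outside Nginx. The following self-contained sketch (Python standard library only, with a throwaway local HTTP/1.1 server) sends five requests over a single http.client connection and counts how many TCP connections the server actually accepted:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

connections_seen = 0

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"       # HTTP/1.1 keeps connections alive by default

    def setup(self):
        global connections_seen
        connections_seen += 1           # one increment per accepted TCP connection
        super().setup()

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # needed for reuse
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(5):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                         # must drain the body before reusing the socket
    statuses.append(resp.status)
conn.close()
server.shutdown()

print(statuses, connections_seen)       # five 200s over a single TCP connection
```

Five request/response cycles, one TCP handshake: that is the whole idea, and it is exactly what Nginx does at much larger scale.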

In Nginx, there are two main contexts where keepalive matters:
- Client → Nginx (browser to server)
- Nginx → Upstream (like app servers or backends)
Client-side Keepalive (Browser to Nginx)
This is handled automatically if the client supports HTTP/1.1 (which it almost always does). But you can fine-tune it in your server block:

server {
    keepalive_timeout 65s;      # How long to keep idle client connections open
    keepalive_requests 100;     # Max requests per connection (default 100)
}
- keepalive_timeout: if no new request arrives within this time, the connection closes.
- keepalive_requests: limits how many requests one connection can serve, which helps prevent memory leaks or resource exhaustion.
Tip: For high-traffic sites, lowering keepalive_timeout to 15–30s can reduce idle connections without hurting performance much.
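As a back-of-the-envelope check (a sketch using Little's law, with made-up traffic numbers), the ceiling on idle connections scales linearly with the timeout, which is why shortening it helps busy servers:

```python
def max_idle_connections(new_conns_per_sec: float, keepalive_timeout_s: float) -> float:
    """Rough upper bound on idle keepalive connections held open at once
    (Little's law: arrival rate x time each connection is held)."""
    return new_conns_per_sec * keepalive_timeout_s

# At 1,000 new client connections per second:
print(max_idle_connections(1000, 65))  # 65000.0 idle connections with a 65s timeout
print(max_idle_connections(1000, 15))  # 15000.0 with a 15s timeout
```

Each idle connection costs a file descriptor and some memory, so the difference matters well before you hit system limits.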
Upstream Keepalive (Nginx to Backend)
This is where many miss big performance gains. If Nginx proxies to a backend (e.g., FastAPI, Node.js, PHP-FPM), enabling keepalive there reduces latency dramatically.
Step 1: Define an upstream with keepalive pool
upstream backend {
    server 127.0.0.1:8000;
    keepalive 32;    # Number of idle keepalive connections kept per worker process
}
Step 2: Configure the proxy to reuse connections
location / {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
Why these lines?
- proxy_http_version 1.1; is required for keepalive (HTTP/1.0 doesn't support it).
- proxy_set_header Connection ""; clears the Connection: close header coming from the client, allowing Nginx to reuse the upstream connection.
Without this, every proxy request opens a new TCP connection to your backend—wasteful and slow.
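Putting both steps together, a minimal reverse-proxy config with upstream keepalive might look like this (the backend address 127.0.0.1:8000 and the Host header line are illustrative, not required by the technique):

```nginx
upstream backend {
    server 127.0.0.1:8000;
    keepalive 32;                        # idle connections kept per worker

server {
    listen 80;
    keepalive_timeout 30s;               # client-side idle timeout
    keepalive_requests 100;              # client-side per-connection request cap

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive needs HTTP/1.1
        proxy_set_header Connection "";  # clear client's Connection header
        proxy_set_header Host $host;
    }
}
```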
Real-World Impact
- Latency: eliminates a TCP/TLS handshake per request, so responses come back faster.
- Throughput: one connection can serve dozens or hundreds of requests.
- CPU/Memory: fewer connections mean less overhead on both Nginx and the upstream servers.
For example, if your backend is a Python app behind Gunicorn, enabling upstream keepalive in Nginx can reduce average response time by 20–50% under load.
Common Pitfalls
- Forgetting proxy_set_header Connection "" means upstream connections won't be reused.
- Not setting keepalive in the upstream block: Nginx defaults to no upstream keepalive.
- Setting keepalive_timeout too high can exhaust file descriptors on busy servers.
- Using HTTP/1.0 in the proxy: it doesn't support keepalive at all.
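The cost of the first pitfall is easy to show in miniature. In this standard-library Python sketch, the client sends Connection: close on every request (playing the role of a proxy that never clears the header), and a throwaway local server counts the TCP connections it accepts: one per request instead of one total.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

connections_seen = 0

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def setup(self):
        global connections_seen
        connections_seen += 1           # one increment per accepted TCP connection
        super().setup()

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# "Connection: close" on every request forces a fresh TCP handshake each time,
# just like a proxy that passes the client's Connection header through.
for _ in range(5):
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/", headers={"Connection": "close"})
    conn.getresponse().read()
    conn.close()

server.shutdown()
print(connections_seen)                 # 5 connections for 5 requests
```

Compare this with a properly reused connection, where the same five requests would cost a single TCP handshake.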
Basically, if you're using Nginx as a reverse proxy (which most people are), enabling both client and upstream keepalive is low-hanging fruit for better performance. It's not magic—it's just smart reuse of connections.
