Nginx Load Balancing Example

Posted on May 13, 2017 at 9:10 am

Nginx can be used as a load balancer, splitting traffic across the N servers listed in the upstream section of the Nginx configuration file. This is useful for scaling web traffic when it becomes too much for a single server or VPS. Using Nginx as a load balancer is incredibly easy: in short, you configure the upstream block and the location block, and that’s all.

This is the content of /etc/nginx/conf.d/default.conf:

# Pool of backend servers; requests are distributed round-robin by default
upstream backendservers {
    server 1.1.1.1:80;
    server 1.1.1.2:80;
    server 1.1.1.3:80;
    server 1.1.1.4:80;
    server 1.1.1.5:80;
}
 
server {
    listen 80;
    server_name localhost 123.123.123.123;
    access_log off;
    error_log /var/www/localhost/logs/error.log warn;
    root /var/www/localhost/htdocs;
    index index.html index.htm;
 
    [...]
 
    location / {
        # Pass the original host and client information to the backends
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Retry the next backend on connection errors, timeouts and 5xx responses
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_connect_timeout 60s;
        # Forward requests to the upstream pool defined above
        proxy_pass http://backendservers;
    }
}

In the section “upstream backendservers” I add all the servers that host my web application, so when a user visits the Nginx load balancer IP address (e.g. http://123.123.123.123/), the load is balanced across all the servers listed in the upstream section. In my test, my web application had a 1000 ms response time with a single server; after I added 5 backend servers, the response time dropped to 230 ms, which means the load was spread across all 5 servers.
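By default Nginx balances requests round-robin, so each backend gets roughly the same share. If the backends are not identical, the upstream block can be tuned; the snippet below is only a sketch reusing the placeholder IPs from the example above, with the weight parameter and the least_conn directive from ngx_http_upstream_module (the values are illustrative):

upstream backendservers {
    # Send new requests to the backend with the fewest active connections
    least_conn;
    # weight=2 is an example value: 1.1.1.1 gets roughly twice the traffic
    server 1.1.1.1:80 weight=2;
    server 1.1.1.2:80;
    server 1.1.1.3:80;
}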

Read more on the Nginx.org help page:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html
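The same module documentation also describes failure handling, which complements the proxy_next_upstream line above: with the per-server max_fails and fail_timeout parameters a backend is temporarily skipped after repeated failures. A minimal sketch with example values:

upstream backendservers {
    # After 3 failed attempts, skip this backend for 30 seconds
    server 1.1.1.1:80 max_fails=3 fail_timeout=30s;
    server 1.1.1.2:80 max_fails=3 fail_timeout=30s;
}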

Updated on June 5, 2017 at 11:12 pm
