Nginx Load Balancing Example

Posted on May 13, 2017 at 9:10 am

Nginx can be used as a load balancer: it splits incoming traffic across the N servers listed in the upstream section of the Nginx configuration file. This is useful for scaling when the web traffic becomes too much for a single server or VPS. Using Nginx as a load balancer is incredibly easy; in short, you only need to configure the upstream and the location, that's all.

This is the content of /etc/nginx/conf.d/default.conf:

upstream backendservers {
    # Backend servers hosting the web application (example addresses)
    server 192.168.1.101;
    server 192.168.1.102;
}

server {
    listen 80;
    server_name localhost;
    access_log off;
    error_log /var/www/localhost/logs/error.log warn;
    root /var/www/localhost/htdocs;
    index index.html index.htm;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_connect_timeout 60s;
        proxy_pass http://backendservers;
    }
}
In the section “upstream backendservers” I add all the servers that host my web application, so when a user visits the Nginx load balancer's IP address, the load is balanced across all servers inside the upstream section. In my test, my web application had a 1000 ms response time with one server; when I added 5 backend servers, the response time dropped to 230 ms, which shows the load was spread across all 5 servers.
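By default Nginx distributes requests round-robin: each new request goes to the next server in the upstream list, wrapping around at the end. The sketch below simulates that behavior in Python to illustrate why the load ends up divided evenly; the backend addresses are hypothetical stand-ins for whatever servers you list in your own upstream block.

```python
from itertools import cycle

# Hypothetical backend addresses -- stand-ins for the servers
# you would list in the "upstream backendservers" block.
backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4", "10.0.0.5"]

# Round-robin: hand each incoming request to the next server
# in the list, wrapping around when the list is exhausted.
rr = cycle(backends)

# Route 10 requests and count how many each backend received.
assignments = [next(rr) for _ in range(10)]
counts = {b: assignments.count(b) for b in backends}
print(counts)  # each of the 5 backends handled 2 of the 10 requests
```

Under this even split, 5 backends each see roughly a fifth of the traffic, which is consistent with the response time falling from about 1000 ms to 230 ms in the test above (a little over the ideal 1000/5 = 200 ms, since the proxy itself adds some overhead).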


Updated on June 5, 2017 at 11:12 pm
