Pavel Tashev / Entrepreneur and Software Developer
How To Set Up Nginx Load Balancing


Introduction

Before we start, let’s review a few basic definitions.

Nginx
“Nginx (pronounced “engine x”) is a web server with a strong focus on high concurrency, performance and low memory usage. It can also act as a reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache.” – Wikipedia

Load balancing
Load balancing distributes workloads across multiple computing resources, such as computers, servers, central processing units, and disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overloading any single resource.

Load balancer
The load balancer is the server that distributes workloads across multiple computing resources, also called back-end servers.

Back-end server
The back-end server is the server that handles the workload distributed to it by the load balancing server (the load balancer).

In this tutorial we will:

  • Set up Nginx to work as a back-end server on two machines.
  • Set up Nginx to work as a load balancing server on one of those machines.

In all of the examples in this tutorial, the following server to IP map will apply:

Server A: 1.1.1.1
Server B: 2.2.2.2

Also we will set up the servers to handle requests for the following domain name:

example.com

All commands below are for Debian/Ubuntu Linux distributions.

Step 1 – Configure Nginx on Server A

The following steps will result in Server A and Server B sharing the load from website traffic.

The first thing we will do is to install Nginx.

sudo apt-get install nginx php5-fpm

The next thing is to set up Nginx to handle requests for example.com. Go ahead and go to the following directory:

cd /etc/nginx/sites-available

Open a new file which we will call “example”:

nano example

And place the following configuration inside:

server {
        # Nginx listens on port 8080 in order to handle requests specifically for example.com.
        listen 8080;

        # This is the directory where the website "example.com" is located.
        root /path/to/document/root;
        # Defines files that will be used as an index. 
        index index.php index.html index.htm;

        # The domain names which correspond to requests executed to port 8080.
        server_name example.com www.example.com;

        # Specifies that the charset of the content is UTF-8
        charset utf-8;

        # Media: images, icons, video, audio, HTC
        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
                expires 1M;
                access_log off;
                add_header Cache-Control "public";
        }

        # Handle PHP requests
        location ~ \.php$ {
                try_files $uri /index.php =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}

Save and close the file.

The role of the file “example” is that every time we send a request to 1.1.1.1:8080 or example.com:8080, Nginx returns the content we requested. This can be static content like an image or audio file (which is cached in our case), or dynamic content.

To understand the code above in detail, read the comments inside it.

Now we will set up the load balancer. Open the file “default”:

nano default

This is a default file which comes with the installation of Nginx. Clear the content which is already there and add the following configuration code:

##
# Load balancing
##
upstream backend {
        server 1.1.1.1:8080 max_fails=3 fail_timeout=30s;
        server 2.2.2.2:8080 max_fails=3 fail_timeout=30s;
}

server {
        # Nginx listens on port 80 in order to handle requests specifically for example.com.
        listen 80;

        # The domain names which correspond to requests executed to port 80.
        server_name example.com www.example.com;

        # When there is a request to example.com to port 80, Nginx forwards this request to any of servers listed in upstream "backend".
        location / {
                proxy_pass http://backend;
                proxy_set_header Host example.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }
}

The purpose of this code is simple, and we will describe it in a few steps:

  • The user types “example.com” in his/her web browser and hits Enter, sending a request to the server where the website is physically located.
  • The closest DNS server handles this request and, with the help of the DNS map, knows that the request must be directed to the server with IP address 1.1.1.1 on port 80.
  • The request is sent to 1.1.1.1 on port 80.
  • The request is handled by Nginx (installed on the machine with IP 1.1.1.1), because Nginx listens for requests to the domain name “example.com” on port 80.
  • Nginx adds additional headers to the request (“Host” and “X-Forwarded-For”). They are required to inform the back-end server of the host (domain name); otherwise the back-end server might be unable to handle the request.
  • After that, Nginx forwards the request to one of the back-end servers listed in the upstream block. Please note that the name of the list is “backend”, but you can choose any name of your choice. The request is forwarded to port 8080!
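By default, the upstream block distributes requests in round-robin order. The behavior can be sketched with a toy shell loop (this is only an illustration, not Nginx’s actual implementation):

```shell
# Toy sketch of round-robin distribution across the two back-end servers.
# The IP:port pairs match the upstream "backend" block above.
backends=("1.1.1.1:8080" "2.2.2.2:8080")

pick_backend() {
  # Select a backend for request number $1 by cycling through the list.
  echo "${backends[$(($1 % ${#backends[@]}))]}"
}

for i in 0 1 2 3; do
  echo "request $i -> $(pick_backend "$i")"
done
```

Requests 0 and 2 go to Server A, requests 1 and 3 to Server B. In real Nginx, a server that fails max_fails times within fail_timeout is temporarily skipped.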

Please note that Server A with IP 1.1.1.1 listens on two ports: 80 and 8080. The reason is simple: port 80 is dedicated to the load balancer, and port 8080 is dedicated to the back-end server (remember that Server A plays two roles!). Server B, which we will configure in a few minutes, will only be a back-end server, so the Nginx instance installed there will listen only on port 8080.
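Because Server A plays both roles, you may want it to receive a smaller share of the traffic. The upstream module supports an optional weight parameter for this; a possible variant of the upstream block (the weights here are just an example):

```nginx
upstream backend {
        # Server B receives roughly twice as many requests as Server A,
        # leaving Server A headroom for its load-balancing duties.
        server 1.1.1.1:8080 weight=1 max_fails=3 fail_timeout=30s;
        server 2.2.2.2:8080 weight=2 max_fails=3 fail_timeout=30s;
}
```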

The final step is to enable this configuration. Go to:

cd /etc/nginx/sites-enabled

And add a symbolic link to the newly created file “example”:

ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example

Once you’re done, reload nginx:

sudo service nginx reload

Step 2 – Configure nginx on Server B

We need to set up a similar virtualhost block on Server B so it will also respond to requests for our domain. The first thing we will do is to install Nginx.

sudo apt-get install nginx php5-fpm

The next thing is to set up Nginx to handle requests for example.com. Go ahead and go to the following directory:

cd /etc/nginx/sites-available

Open a new file which we will call “example”:

nano example

And place the following configuration inside:

server {
        # Nginx listens on port 8080 in order to handle requests specifically for example.com.
        listen 8080;

        # This is the directory where the website "example.com" is located.
        root /path/to/document/root;
        # Defines files that will be used as an index. 
        index index.php index.html index.htm;

        # The domain names which correspond to requests executed to port 8080.
        server_name example.com www.example.com;

        # Specifies that the charset of the content is UTF-8
        charset utf-8;

        # Media: images, icons, video, audio, HTC
        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
                expires 1M;
                access_log off;
                add_header Cache-Control "public";
        }

        # Handle PHP requests
        location ~ \.php$ {
                try_files $uri /index.php =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}

As you may have noticed, the configuration is exactly the same for both back-end servers. The reason is that they both listen on port 8080 in order to serve content requested for the domain name example.com.

Go to:

cd /etc/nginx/sites-enabled

If you don’t need the default file (“default”), you can delete it:

rm default

And add a symbolic link to the newly created file “example”:

ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example

Once you’re done, reload nginx:

sudo service nginx reload

That is the only configuration that we need to do on this server!

One of the drawbacks when dealing with load-balanced web servers is the possibility of data being out of sync between the servers.

A solution to this problem might be employing a git or svn repository, or a file synchronization tool, to push the same content to each server.

Take into account that Nginx allows you to choose the algorithm used to distribute requests among the back-end servers. There are also options to activate health monitoring and other features. For more details, visit NGINX Load Balancing.
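For instance, the default round-robin algorithm can be replaced by the least_conn or ip_hash directive in the upstream block; both are standard options of the upstream module:

```nginx
upstream backend {
        # least_conn sends each request to the server with the fewest
        # active connections; ip_hash would instead pin each client IP
        # to the same back-end server (useful for sticky sessions).
        least_conn;
        server 1.1.1.1:8080 max_fails=3 fail_timeout=30s;
        server 2.2.2.2:8080 max_fails=3 fail_timeout=30s;
}
```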

As always, any feedback in the comments is welcome!
