How To Set Up Nginx Load Balancing


Jun 30, 2015 – 6 min read

Introduction

Before we start with Nginx Load Balancing let’s review a few basic definitions (skip them if you are familiar).

NGINX
“Nginx (pronounced “engine x”) is a web server with a strong focus on high concurrency, performance and low memory usage. It can also act as a reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache.” – Wikipedia

Load balancing
Load balancing distributes workloads across multiple computing resources, such as computers, servers, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overloading any single resource.

Load balancer
The load balancer is the server that distributes workloads across multiple computing resources, also called back-end servers.

Back-end server
The back-end server is the server that receives the workload distributed by the load balancing server (the load balancer).

In this tutorial we will:

  • Set up Nginx on two separate servers to work as back-end servers.
  • Set up Nginx on one of the two machines to also work as a load balancing server.

In all of the examples in this tutorial, the following server to IP map will apply:

Server A: 1.1.1.1
Server B: 2.2.2.2

We will also set up the servers to handle requests for the domain name example.com. All commands below are for Debian/Ubuntu Linux distributions and PHP 5.x.
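
If you want to test the setup before the DNS records for example.com exist, you can temporarily map the domain to Server A in the hosts file of your local (client) machine. This is optional and only for testing; remove the entry once real DNS is in place:

# /etc/hosts on the client machine – temporary entry for testing only
1.1.1.1    example.com www.example.com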

Configure Nginx on Server A

The following steps will result in Server A and Server B sharing the load from the website traffic. The first thing we will do is install Nginx and PHP-FPM.


sudo apt-get install nginx php5-fpm
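
If you want to make sure both packages are in place before continuing, you can print the Nginx version and check that the PHP-FPM service is running. This is just an optional sanity check:

sudo nginx -v
sudo service php5-fpm status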

The next thing is to set up Nginx to handle requests for example.com. Go to the following directory:


cd /etc/nginx/sites-available

Open a new file which we will call “example”:


nano example

Place in the following configuration:

server {
        # Nginx listens on port 8080 in order to handle requests specifically for example.com.
        listen 8080;
 
        # This is the directory where the website "example.com" is located.
        root /path/to/document/root;
        # Defines files that will be used as an index.
        index index.php index.html index.htm;
 
        # The domain names which correspond to requests executed to port 8080.
        server_name example.com www.example.com;
 
        # Specifies that the charset of the content is UTF-8
        charset utf-8;
 
        # Media: images, icons, video, audio, HTC
        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
                expires 1M;
                access_log off;
                add_header Cache-Control "public";
        }
 
        # Handle PHP requests
        location ~ \.php$ {
                try_files $uri /index.php =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}

Save and close the file.

With the file “example” in place, every time a request is made to 1.1.1.1:8080 or example.com:8080, Nginx will return the requested content. This can be static content such as an image or an audio file, or dynamic content generated by PHP.

To understand the configuration above in detail, read the comments inside it.
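
To later verify which back-end served a request, it can help to drop a tiny test page into the document root. This is only an illustrative helper, not part of the required setup; it assumes the document root from the configuration above and PHP 5.3 or newer (for gethostname()):

cat > /path/to/document/root/index.php <<'EOF'
<?php
// Prints the hostname of the machine that served the request,
// which makes it easy to see which back-end answered.
echo "Served by: " . gethostname();
EOF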

Now we will set up the load balancer. Open the file “default”:


nano default

This is a default file which comes with the installation of Nginx. Clear the content which is already there and add the following configuration code:


##
# Load balancing
##
upstream backend {
        server 1.1.1.1:8080 max_fails=3 fail_timeout=30s;
        server 2.2.2.2:8080 max_fails=3 fail_timeout=30s;
}
 
server {
        # Nginx listens on port 80 in order to handle requests specifically for example.com.
        listen 80;
 
        # The domain names which correspond to requests executed to port 80.
        server_name example.com www.example.com;
 
        # When there is a request to example.com on port 80, Nginx forwards it to one of the servers listed in the upstream "backend".
        location / {
                proxy_pass http://backend;
                proxy_set_header Host example.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }
}

The purpose of this code is the following:

  • The user types “example.com” in his/her web browser and hits Enter, sending a request to the server where the website is physically located.
  • The DNS server handling the request resolves “example.com” to the IP address 1.1.1.1.
  • The request is sent to 1.1.1.1 on port 80.
  • The request is handled by Nginx (installed on the machine with IP 1.1.1.1), because Nginx listens for requests to the domain name “example.com” on port 80.
  • Nginx adds additional headers to the request (“Host” and “X-Forwarded-For”). They inform the back-end server of the host (the domain name) and the original client IP; without them, the back-end server may be unable to handle the request correctly.
  • Nginx then forwards the request to one of the back-end servers listed in the upstream block. Note that the name of the list is “backend”, but you can choose any name. The request is forwarded to port 8080!

Please note that Server A with IP 1.1.1.1 listens on two ports – 80 and 8080. The reason is simple: port 80 is dedicated to the load balancer and port 8080 is dedicated to the back-end server (remember that Server A plays two roles in this particular case!). Server B, which we will configure shortly, will act only as a back-end server, and the Nginx installed there will listen only on port 8080.

The next and final step is to enable those configurations. Go to:



cd /etc/nginx/sites-enabled

and add a soft link for the newly created file “example”:



ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example
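
Before reloading, you can let Nginx validate the configuration syntax. This step is optional but recommended:

sudo nginx -t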

Once you’re done, reload Nginx:



sudo service nginx reload
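
At this point Server A answers both as a back-end (port 8080) and as a load balancer (port 80). You can check both roles with curl from any machine that can reach it; the Host header stands in for a real DNS entry. Until Server B is configured, the load balancer will serve everything from Server A:

# Ask the back-end directly
curl -H "Host: example.com" http://1.1.1.1:8080/

# Ask the load balancer, which proxies to the upstream "backend"
curl -H "Host: example.com" http://1.1.1.1/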

Configure Nginx on Server B

We need to set up a similar virtual host block on Server B so that it also responds to requests for our domain. The first thing we will do is install Nginx and PHP-FPM.



sudo apt-get install nginx php5-fpm

The next thing is to set up Nginx to handle requests for example.com. Open the following directory:


cd /etc/nginx/sites-available

Open a new file which we will call “example”:


nano example

and place inside the following configuration:


server {
        # Nginx listens on port 8080 in order to handle requests specifically for example.com.
        listen 8080;
 
        # This is the directory where the website "example.com" is located.
        root /path/to/document/root;
        # Defines files that will be used as an index.
        index index.php index.html index.htm;
 
        # The domain names which correspond to requests executed to port 8080.
        server_name example.com www.example.com;
 
        # Specifies that the charset of the content is UTF-8
        charset utf-8;
 
        # Media: images, icons, video, audio, HTC
        location ~* \.(?:jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc)$ {
                expires 1M;
                access_log off;
                add_header Cache-Control "public";
        }
 
        # Handle PHP requests
        location ~ \.php$ {
                try_files $uri /index.php =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}

As you may have noticed, the configuration is exactly the same on the two back-end servers. That is because both of them listen on port 8080 and serve the content requested for the domain name example.com.
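
Since the file is identical, you could also copy it over from Server A instead of retyping it. A possible one-liner, assuming SSH access between the two servers (adjust the user as needed):

scp /etc/nginx/sites-available/example root@2.2.2.2:/etc/nginx/sites-available/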

Go to:


cd /etc/nginx/sites-enabled

and add a soft link for the newly created file “example”:


ln -s /etc/nginx/sites-available/example /etc/nginx/sites-enabled/example

Once you’re done, reload Nginx:


sudo service nginx reload

That is the only configuration that we need on this server.
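
As a quick check, you can request the site directly from Server B's back-end port. This assumes the website files are also present in Server B's document root:

curl -H "Host: example.com" http://2.2.2.2:8080/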

If everything else is set up, you can now test the Nginx load balancing.
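
A simple way to watch the distribution is to send several requests through the load balancer and compare the responses. If you used a test page that prints the hostname (like the sketch earlier), the output should alternate between the two servers. The loop below is only a suggestion and assumes a Bourne-compatible shell with curl installed:

for i in 1 2 3 4 5 6; do
        curl -s -H "Host: example.com" http://1.1.1.1/
        echo
done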

One of the drawbacks of load-balanced web servers is the possibility of data getting out of sync between the servers. A solution to this problem might be to deploy the site from a Git or SVN repository to each server.
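
For example, if the website lives in a Git repository, each back-end server could pull the same revision into its document root. The repository itself is up to you; the path below matches the placeholder used earlier:

# Run on each back-end server after the initial clone
cd /path/to/document/root
git pull origin master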

Keep in mind that Nginx allows you to choose the algorithm used to distribute requests to the back-end servers. There are also options to activate health monitoring and other features. For more details, visit the NGINX website.
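
For instance, the upstream block accepts standard directives such as least_conn, ip_hash, and per-server weights. A variation of the earlier block might look like this (shown only as an illustration):

upstream backend {
        # Send each request to the server with the fewest active connections.
        least_conn;
        # Server A receives roughly twice as many requests as Server B.
        server 1.1.1.1:8080 weight=2 max_fails=3 fail_timeout=30s;
        server 2.2.2.2:8080 weight=1 max_fails=3 fail_timeout=30s;
}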

If you need assistance with NGINX or setting up your Load Balancer, contact me.
