
Highly Available HTTP Load Balancer on Ubuntu 10.04 LTS (Lucid)

Published: Friday, August 13th, 2010 by Phil Paradis

High availability refers to the practice of keeping online resources available through node failures and system maintenance. This guide demonstrates a method for using two Linodes to provide HTTP load balancing services with nginx; services will remain available even when one load balancer node is powered off or put into standby mode. IP failover, Heartbeat 3.x, and Pacemaker 1.x will be used for this example configuration.

This setup is intended to provide a load balancer that distributes inbound HTTP requests to a pool of frontend web/application servers in the same datacenter. The web servers could host mirrored content locally on each of their filesystems, or they could mount content directories over NFS and access databases hosted on another highly available server cluster. A simple network diagram illustrating such a setup might look like this:

Highly available, load balanced network with a load balancer, web servers, and a database and file server.

In the diagram above, a highly available load balancer composed of two nodes distributes inbound HTTP connections to multiple frontend web servers. These web servers connect to a highly available database and file server, also composed of two nodes. While this guide only covers configuration of the HTTP load balancer portion of the diagram, you could add your own web server nodes and highly available file/database resources to form a complete highly available, load balanced network configuration.

As high availability is a complex topic with many methods available for achieving various goals, it should be noted that the method discussed here may not be appropriate for some use cases. However, it should provide a good foundation for developing a customized HA solution.


Terminology

Throughout this document, the following example values are used:

ha1-lb (public IP 12.34.56.78, private IP 192.168.88.88): the primary load balancer Linode
ha2-lb (public IP 98.76.54.32, private IP 192.168.99.99): the secondary load balancer Linode
55.55.55.55: the "floating" public IP address that moves between the two load balancer nodes
192.168.15.15, 192.168.16.16, 192.168.17.17: private IP addresses of the frontend web servers
example.com: the load balanced website's domain name

You should substitute your own values for these terms wherever they are found.

Prerequisites

This guide assumes you have a minimum of four active Linodes on your account, and that two of them are freshly deployed Ubuntu 10.04 LTS (Lucid) instances. The remaining two or more Linodes will serve as frontend web servers. If needed, you can add another Linode by clicking the Linodes tab in the Linode Manager, and then clicking the Add a Linode link.

Data Center

All your Linodes must reside in the same datacenter for IP failover and private network communications between the nodes to work.

Disk Images

When you deploy your Linodes, be sure not to allocate all the available disk space to the main disk images. As part of this tutorial, you'll be creating three additional images on each Linode, so be sure to leave at least 2 GB free when deploying Ubuntu 10.04 to each. You may wish to leave more free disk space, depending on your needs. The additional disk images will be used to store web application and database data.

Private IP Addresses

Each Linode must have a private IP address assigned. For instructions, see Adding Private IP Addresses.

Note

You'll need to open a support ticket to request an additional IP address for the primary Linode; this will serve as the "floating" address. Once the primary Linode has been allocated its second IP, reboot both Linodes to allow the new IP addresses to be properly routed.

Public IP Address

Add a second IP address to your primary Linode by contacting support with your justification. After purchasing the additional IP, click the Remote Access tab for the primary Linode and make a note of the newly added IP address. This will serve as your "floating" IP.

DNS Records

You should configure DNS to point your highly available website's domain name (or subdomain) at the "floating" IP address. Requests made to this IP will be forwarded to your front-end web server pool. For instructions, see Adding DNS Records.

Basic System Configuration

Choose one Linode to serve as the "primary" node. Log into it via SSH as root and edit its /etc/hosts file to resemble the following:

File: /etc/hosts (on primary Linode)

127.0.0.1       localhost.localdomain       localhost
12.34.56.78     ha1-lb.example.com      ha1-lb
98.76.54.32     ha2-lb.example.com      ha2-lb

Remember to substitute your primary and secondary Linode's IP addresses for 12.34.56.78 and 98.76.54.32, respectively, along with appropriate hostnames for each. You will find the IP addresses for your Linodes on their "Remote Access" tabs in the Linode Manager.

For the sake of simplicity, it is recommended that you keep the short hostnames ha1-lb and ha2-lb. Next, issue the following commands to generate SSH keys for the root user on each Linode, synchronize their SSH host keys, set their hostnames, and allow passwordless logins from each node to the other. Synchronizing the SSH host keys prevents host key verification problems later on, which could otherwise occur if you log in via a hostname pointing to the floating IP while the secondary node is acting as primary in a failover condition. You will be prompted to assign passphrases to the SSH keys; this is optional, and you may skip it by pressing "Enter".

ssh-keygen -t rsa
scp ~/.ssh/id_rsa.pub root@ha2-lb:/root/ha1_key.pub
ssh root@ha2-lb "ssh-keygen -t rsa"
ssh root@ha2-lb "echo \`cat ~/ha1_key.pub\` >> ~/.ssh/authorized_keys2"
ssh root@ha2-lb "rm ~/ha1_key.pub"
scp root@ha2-lb:/root/.ssh/id_rsa.pub /root
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys2
rm ~/id_rsa.pub

scp /etc/ssh/ssh_host* root@ha2-lb:/etc/ssh/
rm ~/.ssh/known_hosts
ssh root@ha2-lb "/etc/init.d/ssh restart"

scp /etc/hosts root@ha2-lb:/etc/hosts
echo "ha1-lb" > /etc/hostname
hostname -F /etc/hostname
ssh root@ha2-lb "echo \"ha2-lb\" > /etc/hostname"
ssh root@ha2-lb "hostname -F /etc/hostname"

Assign Static IP Addresses

By default, Linodes use DHCP to obtain their IP addresses at boot. This works fine when a Linode has only one IP address, as DHCP will always assign that address to it. When a Linode has (or may have) multiple IP addresses assigned, as in this configuration, an explicit static configuration is required.

On the primary Linode, edit the /etc/network/interfaces file to resemble the following, making sure the values entered match those shown on the "Remote Access" tab for the primary Linode. The public IP address 12.34.56.78 should be changed to the first public IP assigned to the primary Linode, and the private IP address 192.168.88.88 should be changed to the private IP assigned to the primary Linode:

File: /etc/network/interfaces (on primary Linode)

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 12.34.56.78
netmask 255.255.255.0
gateway 12.34.56.1

auto eth0:0
iface eth0:0 inet static
address 192.168.88.88
netmask 255.255.128.0

Issue the following command to restart networking on the primary Linode:

/etc/init.d/networking restart

On the secondary Linode, edit the /etc/network/interfaces file to resemble the following, making sure the values entered match those shown on the "Remote Access" tab for the secondary Linode:

File: /etc/network/interfaces (on secondary Linode)

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 98.76.54.32
netmask 255.255.255.0
gateway 98.76.54.1

auto eth0:0
iface eth0:0 inet static
address 192.168.99.99
netmask 255.255.128.0
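Both private interfaces above use the netmask 255.255.128.0, i.e. a 17-bit prefix. As a quick sanity check, the following shell sketch converts a dotted-quad netmask to its prefix length:

```shell
# Convert a dotted-quad netmask to a CIDR prefix length by counting set bits.
mask=255.255.128.0
bits=0
IFS=.
for octet in $mask; do
    while [ "$octet" -gt 0 ]; do
        bits=$((bits + (octet & 1)))   # add the low bit
        octet=$((octet >> 1))          # shift right
    done
done
echo "$mask -> /$bits"                 # prints: 255.255.128.0 -> /17
```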

Issue the following command to restart networking on the secondary Linode:

/etc/init.d/networking restart

You should be able to ping each Linode's public address from the other, and you should be able to ping each Linode's private address from the other. If you can't, review your network configuration for errors.
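If a ping fails, first confirm that each address was actually bound. One quick way (using the iproute2 tools included with Ubuntu 10.04) is to list every address on the node:

```shell
# Print each interface and its bound address, one per line; after the
# restart you should see both the public and the private address listed,
# alongside the loopback entry.
ip -o addr show | awk '{print $2, $4}'
```

On the primary Linode from the examples above, the output should include 12.34.56.78/24 and 192.168.88.88/17.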

Set Up IP Failover in Linode Manager

Navigate to the Remote Access tab for the primary Linode and make a note of its second IP address (the "floating" IP). Next, navigate to the Remote Access tab for the secondary Linode and set up IP failover. For instructions, see Configuring IP Failover.

Install Heartbeat, Pacemaker, and nginx

On the primary Linode, issue the following commands one at a time to install required packages. The second set of commands will ensure that the same packages are installed on the secondary Linode as well.

apt-get update
apt-get upgrade -y
apt-get install -y heartbeat pacemaker nginx
/etc/init.d/nginx stop
update-rc.d -f nginx remove

ssh root@ha2-lb "apt-get update"
ssh root@ha2-lb "apt-get upgrade -y"
ssh root@ha2-lb "apt-get install -y heartbeat pacemaker nginx"
ssh root@ha2-lb "/etc/init.d/nginx stop"
ssh root@ha2-lb "update-rc.d -f nginx remove"

After issuing the commands listed above, the required packages will be installed, and the nginx service will be temporarily stopped on both Linodes. Additionally, the system startup links for nginx will be removed on both Linodes, as Pacemaker will be responsible for starting and stopping it as necessary.

Configure Heartbeat

On the primary Linode, create a file named /etc/heartbeat/ha.cf with the following contents. Replace 98.76.54.32 with the statically assigned public IP address of the secondary Linode.

File: /etc/heartbeat/ha.cf (on primary Linode)

logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth0 98.76.54.32
auto_failback on
node ha1-lb
node ha2-lb
use_logd yes
crm respawn

On the secondary Linode, create a file named /etc/heartbeat/ha.cf with the following contents. Replace 12.34.56.78 with the statically assigned public IP address of the primary Linode.

File: /etc/heartbeat/ha.cf (on secondary Linode)

logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth0 12.34.56.78
auto_failback on
node ha1-lb
node ha2-lb
use_logd yes
crm respawn

On the primary Linode, create the file /etc/ha.d/authkeys with the following contents. (On Ubuntu, /etc/heartbeat is a symlink to /etc/ha.d, so either path refers to the same directory.) Make sure to change "CHANGEME" to a strong password consisting of letters and numbers.

File: /etc/ha.d/authkeys (on primary Linode)

auth 1
1 sha1 CHANGEME
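Rather than inventing a password by hand, you can generate a random shared secret. The sketch below writes an example file in the current directory so nothing is overwritten; on the node itself you would write the same two lines to /etc/ha.d/authkeys:

```shell
# Build an authkeys file with a random 128-bit secret (hex-encoded via md5sum).
SECRET=$(dd if=/dev/urandom bs=16 count=1 2>/dev/null | md5sum | cut -d' ' -f1)
printf 'auth 1\n1 sha1 %s\n' "$SECRET" > authkeys.example
cat authkeys.example
```

Heartbeat uses this value as the key for SHA1-based message authentication between the two nodes, so it never needs to be typed interactively; a long random string is preferable to a memorable password.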

On the primary Linode, issue the following commands to set proper permissions on this file, copy it to the secondary Linode, and start the Heartbeat service on both nodes:

chmod 600 /etc/ha.d/authkeys
/etc/init.d/heartbeat start
scp /etc/ha.d/authkeys root@ha2-lb:/etc/ha.d/
ssh root@ha2-lb "chmod 600 /etc/ha.d/authkeys"
ssh root@ha2-lb "/etc/init.d/heartbeat start"

Configure nginx

The nginx web server will perform round-robin distribution of inbound HTTP requests to your frontend web servers. On the primary Linode, edit the file /etc/nginx/nginx.conf to resemble the following, making sure to use your frontend web server private IP addresses in the "upstream" section and your domain name in the "server" section.

File: /etc/nginx/nginx.conf (on primary Linode)

user www-data;
worker_processes  1;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    sendfile on;
    keepalive_timeout 65;
    tcp_nodelay on;
    gzip on;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    upstream webservers {
        server 192.168.15.15 max_fails=3 fail_timeout=5s;
        server 192.168.16.16 max_fails=3 fail_timeout=5s;
        server 192.168.17.17 max_fails=3 fail_timeout=5s;
    }

    server {
        server_name example.com www.example.com;
        location / {
            proxy_pass http://webservers;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_next_upstream timeout;
        }
    }
}
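With no weight parameters set, nginx balances the "upstream" group above using simple round-robin: successive requests cycle through the server list in order (a server that exceeds max_fails within fail_timeout is temporarily skipped). As a toy illustration of that cycling in plain shell (not nginx itself):

```shell
# Simulate round-robin selection over the three example backend addresses.
set -- 192.168.15.15 192.168.16.16 192.168.17.17
n=$#
for req in 1 2 3 4 5 6; do
    i=$(( (req - 1) % n + 1 ))     # index cycles 1, 2, 3, 1, 2, 3
    eval "target=\${$i}"           # pick the i-th positional parameter
    echo "request $req -> $target"
done
```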

Issue the following command to copy the nginx configuration to your secondary Linode:

scp /etc/nginx/nginx.conf root@ha2-lb:/etc/nginx/

Configure Cluster Resources

Note that unless you have a different editor set via the "EDITOR" environment variable, the cluster resource manager will use vim as its editing environment. If you would prefer to use nano instead, you may set this permanently by issuing the following commands on both Linodes:

export EDITOR=/bin/nano
echo "export EDITOR=/bin/nano" >> ~/.bashrc

For the purposes of these instructions, it will be assumed that you are using vim as your editor. On the primary Linode, issue the following command to start the cluster resource manager in "edit" mode:

crm configure edit

You will be presented with information resembling the following. If you don't see anything, enter ":q" to quit the editor and wait a minute before restarting it.

node $id="1d548242-8908-49c6-bcb3-a594267b81e2" ha1-lb
node $id="5a7ab511-e274-435f-8662-7dbc737a1786" ha2-lb
property $id="cib-bootstrap-options" \
        dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
        cluster-infrastructure="Heartbeat"

To begin editing your configuration, press the "i" key. To leave insert mode, press "Esc". To quit without saving any changes, type ":q!" and press "Enter". To save changes and quit, type ":wq" and press "Enter".

Insert the following lines in between the second "node" line at the top of the configuration and the "property" line at the bottom. Important: Be sure to replace both instances of 55.55.55.55 with the "floating" public IP address.

primitive ip1 ocf:heartbeat:IPaddr2 \
        params ip="55.55.55.55" nic="eth0:1" \
        op monitor interval="5s"
primitive ip1arp ocf:heartbeat:SendArp \
        params ip="55.55.55.55" nic="eth0:1"
primitive nginx ocf:heartbeat:anything \
        params \
           binfile="/usr/sbin/nginx" \
           cmdline_options="-c /etc/nginx/nginx.conf"
group HAServices ip1 ip1arp nginx \
        meta target-role="Started"
order ip-before-arp mandatory: ip1:start ip1arp:start
order ip-before-nginx mandatory: ip1:start nginx:start

Change the "property" section to resemble the following excerpt. Because your cluster has only two nodes, you'll add an "expected-quorum-votes" entry, along with lines for "stonith-enabled" and "no-quorum-policy". Don't forget the trailing "\" after the "cluster-infrastructure" line.

property $id="cib-bootstrap-options" \
        dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \
        cluster-infrastructure="Heartbeat" \
        expected-quorum-votes="1" \
        stonith-enabled="false" \
        no-quorum-policy="ignore"

Add the following excerpt after the "property" section:

rsc_defaults $id="rsc-options" \
        resource-stickiness="100"

After making these changes, press "Esc" and enter ":wq" to save the configuration and exit the editor.

Monitor Cluster Resources

On the primary Linode, issue the command crm_mon to start the cluster monitor. You'll see output resembling the following:

============
Last updated: Fri Aug 13 20:49:19 2010
Stack: Heartbeat
Current DC: ha2-lb (5a7ab511-e274-435f-8662-7dbc737a1786) - partition with quorum

Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd
2 Nodes configured, 1 expected votes
1 Resources configured.
============

Online: [ ha2-lb ha1-lb ]

 Resource Group: HAServices
     ip1        (ocf::heartbeat:IPaddr2):       Started ha1-lb
     ip1arp     (ocf::heartbeat:SendArp):       Started ha1-lb
     nginx      (ocf::heartbeat:anything):      Started ha1-lb

In this example, the clustered resources are started on ha1-lb. To simulate a failover situation, issue the following command to put ha1-lb into standby:

crm node standby ha1-lb

Within a few seconds, the resources will be stopped on the initial node and started on the other one. To bring ha1-lb back online, simply issue the following command:

crm node online ha1-lb

At this point, you should be able to shut down the Linode hosting your resources and watch them automatically migrate to the other Linode (provided you have crm_mon running in a terminal on the still-active Linode). Note that because "resource-stickiness" is set to "100", resources should stay wherever they are migrated until you manually move them to another node. This can be helpful when you need to perform maintenance on a node but don't want services resuming on it until you're ready. Congratulations, you've created a highly available HTTP load balancer!
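To watch a failover from a client's perspective, you can poll the floating IP while you place the active node into standby; you should see at most a few seconds of interruption. A sketch, where 55.55.55.55 is the example floating address used earlier (substitute your own):

```shell
# Request the load balanced site once per second and print the HTTP status;
# curl reports status 000 while the floating IP is unreachable mid-failover.
for i in 1 2 3 4 5 6 7 8 9 10; do
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 2 "http://55.55.55.55/")
    echo "$(date '+%T') HTTP status: $code"
    sleep 1
done
```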

More Information

You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.

Creative Commons License

This guide is licensed under a Creative Commons Attribution-NoDerivs 3.0 United States License.

Last edited by Matthew Cone on Monday, July 16th, 2012 (r2976).