January 9th, 2023
Machine translation is used partially for this article. See the Japanese version for the original article.
Introduction
This time, we tried building an HA cluster on Amazon Web Services (hereinafter called "AWS") that uses a Network Load Balancer (hereinafter called "NLB") to switch the HA cluster's connection destinations. NLB is one of the load balancer types provided by Elastic Load Balancing (hereinafter called "ELB").
In an HA cluster using NLB, the NLB load balancing function switches the connection destination from the client machine to the HA cluster, and the NLB DNS name (virtual host name) is used to access the HA cluster.
An HA cluster based on DNS name control using AWS DNS resources is another HA cluster configuration that uses a DNS name for access. In that configuration, destinations are switched by rewriting the A record in Amazon Route 53 (hereinafter called "Route 53") with the AWS CLI.
As of December 2022, Route 53 does not support AWS PrivateLink, so that configuration requires the HA cluster to have an internet connection in order to rewrite A records in Route 53 with the AWS CLI.
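For reference, that configuration rewrites the A record with a command along the following lines (a hypothetical sketch; the hosted zone ID, record name, and IP address are placeholders):

```
# Hypothetical illustration only: repoint the virtual host name to the
# new active server's private IP. Zone ID, record name, and IP address
# are placeholders.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "cluster.example.internal",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "10.0.120.10"}]
      }
    }]
  }'
```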
On the other hand, in the configuration of this article, a DNS name is still used for access, but destinations are switched by the NLB health check function and the AWS CLI is not executed. Therefore, the HA cluster no longer requires an internet connection.
1. What is Network Load Balancer?
As mentioned above, NLB is one of the load balancer types provided by ELB on AWS.
In this chapter, we explain the functions of NLB that should be understood when building an HA cluster using NLB.
Please refer to the following for details.
What is a Network Load Balancer?
NLB Functional Overview
NLB is a service that operates at the fourth layer of the OSI reference model and distributes incoming traffic across the instances registered in its target group (hereinafter called "target instances"). NLB periodically performs health checks on the target instances and forwards traffic to the configured TCP/UDP port of the instances that pass the health check.
- * Non-TCP/UDP traffic such as ICMP, and TCP/UDP traffic to port numbers not configured on the NLB, cannot be distributed to the target instances.
NLB Schemes
There are two schemes of NLB: "Internal Load Balancer" and "Internet-Facing Load Balancer".
When creating NLB, set the IP address used to access NLB from client machines (hereinafter called "NLB IP address").
An "Internal Load Balancer" sets private IP addresses for the NLB IP addresses and distributes access from within the VPC to the target instances.
An "Internet-Facing Load Balancer" sets public IP addresses for the NLB IP addresses and distributes access from outside the VPC via the Internet to the target instances.
How to Access NLB
There are two ways to access NLB: using the NLB IP address or the NLB DNS name.
This section assumes that you place target instances in multiple AZs for availability.
One NLB IP address can be created per AZ. The default NLB configuration allows you to distribute traffic to target instances in the same AZ by accessing the NLB IP address. By enabling "Cross-Zone Load Balancing" on the NLB attributes, you can distribute traffic to target instances in different AZs.
However, if you create only one NLB IP address, as shown in the following figure, NLB will be a single point of failure (SPOF).
Therefore, it is recommended to create NLB IP addresses in multiple AZs as shown in the following figure.
This way, even if the NLB IP address used for access becomes abnormal, you can access the target instances by switching to another NLB IP address.
However, this requires a mechanism on the client machine to switch between the NLB IP addresses when accessing the HA cluster.
On the other hand, one NLB DNS name is created per NLB, and all NLB IP addresses are associated with the DNS name.
Because the NLB DNS name is managed by Route 53 inside AWS, Route 53 automatically excludes an NLB IP address from name resolution when it becomes unhealthy. Therefore, by always using the NLB DNS name from the client machine, you can continue to access the target instances without having to handle NLB IP address switching yourself.
Also, the NLB DNS name is registered in the Route 53 public and private hosted zones inside AWS. By creating "Internal Load Balancer" and resolving the NLB DNS name in a private hosted zone using Route 53 resolvers, you can access NLB from your on-premises environment without an internet connection.
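As a quick illustration, resolving the DNS name of an internal NLB returns one private IP address per healthy AZ (the DNS name and addresses below are placeholders):

```
# The DNS name below is a placeholder. An internal NLB resolves to the
# private NLB IP addresses of the AZs that are currently healthy.
dig +short internal-ha-web-nlb-0123456789.elb.us-east-1.amazonaws.com
# Example output (illustrative addresses, one per healthy AZ):
#   10.0.10.200
#   10.0.20.200
```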
2. HA Cluster Configuration
This time, we will build an "HA cluster using NLB" in the N. Virginia region.
We also build a client machine in the Oregon region as a pseudo-environment for an on-premises environment.
- * When actually connecting from an on-premises environment, please replace the Oregon region with the on-premises environment.
This time, considering security, we create the NLB as an "Internal Load Balancer" so that the configuration does not require an internet connection. We also use the NLB DNS name to access the NLB, since it makes destination switching easy to set up. The access route is described later.
In addition to the client machine, create a DNS server for NLB DNS name resolution and VyOS for VPN connections in the Oregon region VPC.
In addition to the HA cluster and NLB, create a Route 53 resolver in the N. Virginia region VPC to resolve the NLB DNS name.
This time, as an example, we will build a cluster of web servers and distribute traffic on TCP port 80 with NLB.
To switch destinations, we use the load balancer health check function and intentionally make it succeed or fail.
Specifically, EXPRESSCLUSTER controls whether the port configured for the health check accepts connections: the port is opened on the active server and kept closed on the standby server. This allows NLB to route traffic only to the active server, which is the only server that passes the health check.
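As a minimal sketch of this mechanism (EXPRESSCLUSTER's probe port resource does this automatically; ncat/nc and the private addresses below are illustrative stand-ins, not the actual implementation):

```
# On the active server: accept TCP connections on the health check port
# (12345, matching the NLB target group setting) so the health check
# succeeds. Nothing listens on this port on the standby server, so the
# health check fails there.
ncat --listen --keep-open 12345 &

# From another host in the VPC: check which server currently passes.
# The private addresses below are illustrative (Subnet-A2 / Subnet-B2).
nc -z -w 3 10.0.110.10 12345 && echo "Server1: passes health check"
nc -z -w 3 10.0.120.10 12345 || echo "Server2: fails health check"
```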
Access Route
As mentioned above, this time, considering security, we create the access route not requiring an internet connection.
- (1) Create an NLB of "Internal Load Balancer" to access with a private IP address.
- (2) Connect the "Oregon region VPC" and the "N. Virginia region VPC" with a VPN so that they can communicate with each other using private IP addresses.
- (3) Create the Route 53 resolver in the N. Virginia region VPC to resolve the NLB DNS name from the client machine.
- (4) Configure the DNS server in the Oregon region VPC to resolve the NLB DNS name with the Route 53 resolver created in (3).
These allow you to access NLB via NLB DNS name resolution without going through the Internet.
3. HA Cluster Building Procedure
We introduce the HA cluster building procedure.
Since it is a pseudo-on-premises environment, the description of the route tables, security groups, and EC2 instances in the Oregon region is omitted.
3.1 Setting AWS Environment
3.1.1 Creating VPCs and Subnets
First, create the VPCs and subnets. The VPC and subnet CIDRs are as follows (a CLI sketch follows the list).
N. Virginia region (us-east-1)
- VPC (VPC ID : vpc-1111aaaa)
  - CIDR : 10.0.0.0/16
  - Subnets
    - ■ Subnet-A1 (Private) : 10.0.10.0/24
    - ■ Subnet-A2 (Private) : 10.0.110.0/24
    - ■ Subnet-A3 (Private) : 10.0.111.0/24
    - ■ Subnet-B1 (Private) : 10.0.20.0/24
    - ■ Subnet-B2 (Private) : 10.0.120.0/24
    - ■ Subnet-B3 (Private) : 10.0.121.0/24

Oregon region (us-west-2)
- VPC (VPC ID : vpc-2222bbbb)
  - CIDR : 172.16.0.0/16
  - Subnets
    - ■ Subnet-A1 (Public) : 172.16.0.0/24
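For reference, the same resources can be created with the AWS CLI; a minimal sketch (AZ names are illustrative, and the actual resource IDs come from each command's output):

```
# Create the N. Virginia VPC and two of its subnets; the remaining
# subnets follow the same pattern. Returned resource IDs will differ.
aws ec2 create-vpc --region us-east-1 --cidr-block 10.0.0.0/16
aws ec2 create-subnet --region us-east-1 --vpc-id vpc-1111aaaa \
  --availability-zone us-east-1a --cidr-block 10.0.10.0/24
aws ec2 create-subnet --region us-east-1 --vpc-id vpc-1111aaaa \
  --availability-zone us-east-1b --cidr-block 10.0.20.0/24

# Oregon VPC for the pseudo on-premises environment.
aws ec2 create-vpc --region us-west-2 --cidr-block 172.16.0.0/16
```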
3.1.2 Creating a Route Table
Create a route table in the N. Virginia region VPC as needed.
■ N. Virginia region (us-east-1)
In this configuration, only one route to the Oregon region VPC is added, so a single route table shared by all subnets is sufficient (a CLI example follows the table).
| Destination | Target | Notes |
|---|---|---|
| 10.0.0.0/16 | local | Default route |
| 172.16.0.0/16 | vgw-1111aaaa (ID of the virtual private gateway) | For communication to the Oregon region VPC (after setting up the VPN connection) |
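A CLI equivalent of the second route might look like this (the route table ID is a placeholder; the virtual private gateway is created in section 3.1.6):

```
# Add the route to the Oregon region VPC via the virtual private
# gateway (the route table ID is a placeholder; the gateway is
# created in section 3.1.6).
aws ec2 create-route --region us-east-1 \
  --route-table-id rtb-0123456789example \
  --destination-cidr-block 172.16.0.0/16 \
  --gateway-id vgw-1111aaaa
```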
3.1.3 Creating Security Groups
Create security groups in the N. Virginia region VPC as needed.
Configure security groups appropriately according to your system's policies.
3.1.4 Creating EC2 Instances
Create EC2 instances for the HA cluster in Subnet-A2 (Private) and Subnet-B2 (Private) in the N. Virginia region.
As the web server software, install IIS in a Windows environment or Apache in a Linux environment.
- * Installation method etc. are omitted.
3.1.5 Creating an NLB
Create an NLB of "Internal Load Balancer" that targets the two instances created for the HA cluster.
Select Subnet-A1 (Private) and Subnet-B1 (Private) in the N. Virginia region as the subnets to set the NLB IP address.
This time, in order to switch destinations to the Web servers, set TCP port 80 on the listener.
- * When switching destinations for multiple applications, add listeners and set the protocol and port number used by each application appropriately.
Since we are redirected to the target group creation screen, set the port number for the health check. Specify a port that is not used by any business application.
This time, as an example, set the port number to "12345".
Next, set the "Healthy threshold" and "Interval" according to your target switchover time. A smaller "Healthy threshold" and a shorter "Interval" shorten the switchover time, but a shorter "Interval" also makes health checks run more frequently. This time, keep the default values.
Check the two instances we created for the HA cluster and register them as targets.
Return to the load balancer creation screen, select the target group we created earlier, and create the load balancer. Also, enable "Cross-Zone Load Balancing" in the NLB attributes after creation.
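The console steps above can also be expressed with the AWS CLI; a sketch in which names, instance IDs, subnet IDs, and the ARNs in angle brackets are placeholders:

```
# Target group with TCP health checks on port 12345.
aws elbv2 create-target-group \
  --name ha-web-tg --protocol TCP --port 80 \
  --vpc-id vpc-1111aaaa --target-type instance \
  --health-check-protocol TCP --health-check-port 12345

# Register the two HA cluster instances (placeholder IDs).
aws elbv2 register-targets \
  --target-group-arn <target-group-arn> \
  --targets Id=i-0aaaa1111 Id=i-0bbbb2222

# Internal NLB in Subnet-A1 and Subnet-B1 (placeholder subnet IDs).
aws elbv2 create-load-balancer \
  --name ha-web-nlb --type network --scheme internal \
  --subnets subnet-a1example subnet-b1example

# TCP:80 listener forwarding to the target group.
aws elbv2 create-listener \
  --load-balancer-arn <nlb-arn> --protocol TCP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>

# Enable cross-zone load balancing.
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <nlb-arn> \
  --attributes Key=load_balancing.cross_zone.enabled,Value=true
```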
3.1.6 Creating a VPN Connection
Create a virtual private gateway in the N. Virginia region VPC and a customer gateway that uses the public IP address of the VyOS instance in the Oregon region VPC.
Also, create a VPN connection between the virtual private gateway and the customer gateway.
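A CLI sketch of the same steps (the customer gateway IP uses a documentation address, the ASN and customer gateway ID are placeholders, and static routing is assumed):

```
# Virtual private gateway in the N. Virginia VPC.
aws ec2 create-vpn-gateway --region us-east-1 --type ipsec.1
aws ec2 attach-vpn-gateway --region us-east-1 \
  --vpn-gateway-id vgw-1111aaaa --vpc-id vpc-1111aaaa

# Customer gateway pointing at VyOS's public IP (placeholder address).
aws ec2 create-customer-gateway --region us-east-1 \
  --type ipsec.1 --public-ip 203.0.113.10 --bgp-asn 65000

# VPN connection between the two gateways (static routing assumed).
aws ec2 create-vpn-connection --region us-east-1 \
  --type ipsec.1 --vpn-gateway-id vgw-1111aaaa \
  --customer-gateway-id cgw-0123456789example \
  --options '{"StaticRoutesOnly": true}'
```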
3.1.7 Setting Up VyOS
Download the configuration information from the VPN connection we created.
Set the configuration information to VyOS in the Oregon region to enable the VPN connection.
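The console offers a VyOS-specific configuration download; the generic configuration can also be retrieved via the CLI (the VPN connection ID is a placeholder):

```
# Retrieve the generic gateway configuration for the VPN connection
# (the console additionally offers a VyOS-specific download).
aws ec2 describe-vpn-connections --region us-east-1 \
  --vpn-connection-ids vpn-0123456789example \
  --query 'VpnConnections[0].CustomerGatewayConfiguration' \
  --output text
```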
3.1.8 Creating a Route 53 Resolver
Create a Route 53 resolver in the N. Virginia region VPC.
Also, select Subnet-A3 (Private) and Subnet-B3 (Private) for the inbound endpoints.
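A CLI sketch of the inbound endpoint creation (the subnet and security group IDs are placeholders; the security group must allow DNS traffic):

```
# Inbound resolver endpoint in Subnet-A3 and Subnet-B3
# (placeholder IDs).
aws route53resolver create-resolver-endpoint --region us-east-1 \
  --name nlb-dns-inbound \
  --direction INBOUND \
  --creator-request-id 2023-01-09-inbound-01 \
  --security-group-ids sg-0123456789example \
  --ip-addresses SubnetId=subnet-a3example SubnetId=subnet-b3example
```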
For detailed setup methods, refer to the following article:
3.2 Building an HA Cluster
The configuration of EXPRESSCLUSTER is as follows.
In this verification, we use EXPRESSCLUSTER X 5.0 (internal version: Windows 13.02, Linux 5.0.2-1).
Register three resources in the EXPRESSCLUSTER failover group: an "Azure probe port resource" (*), a "mirror disk resource", and a web server control resource (a "service resource" in a Windows environment, an "EXEC resource" in a Linux environment).
Also, the "Probe wait timeout" in "Azure probe port resource" must be set to be longer than the NLB health check interval. In this case, the NLB health check interval is set to the default value of 30 seconds, so the "Probe wait timeout" is set to 31 seconds or more.
- EXPRESSCLUSTER
  - Failover group (failover)
    - ■ Azure probe port resource
      - Probe port : 12345 <- the health check port number set when creating the NLB
      - Probe wait timeout : 31 seconds
    - ■ Mirror disk resource (Windows environment)
      - Data partition : M:\
      - Cluster partition : R:\
    - ■ Service resource (Windows environment)
      - Service name : World Wide Web Publishing Service
    - ■ Mirror disk resource (Linux environment)
      - Data partition : /dev/nvme1n1p2
      - Cluster partition : /dev/nvme1n1p1
    - ■ EXEC resource (Linux environment)
      - Start script : script to start Apache
      - Stop script : script to stop Apache
- * The contents of the scripts are omitted (an illustrative sketch follows the notes below).
- * "Azure probe port resource" has "Azure" in the name, but it can be used to build an HA cluster using load balancer in environments other than Azure.
Also, in EXPRESSCLUSTER X 4.3 or later, "Azure probe port resource" is not displayed by default in Cluster WebUI when it is not an Azure environment, so you need to press the "Show All Types" button in the figure below to display it.
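As an illustrative sketch only (the article omits the actual script contents), an EXEC resource start/stop script pair for Apache might look like the following, assuming Apache is managed by systemd under the service name httpd:

```
#!/bin/sh
# start.sh - illustrative sketch of an EXEC resource start script
# (assumes the Apache service is named httpd and managed by systemd).
systemctl start httpd.service
exit $?
```

```
#!/bin/sh
# stop.sh - illustrative sketch of the matching stop script.
systemctl stop httpd.service
exit $?
```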
4. Checking the Operation
Verify that the client machine in the Oregon region can access the Server1 web page using the NLB DNS name. Next, fail over from Server1 to Server2 and verify that you can access the Server2 web page with the same DNS name (a command-line version of this check follows the steps).
- 1. Start the failover group on Server1.
- 2. From the browser of the client machine, access the NLB DNS name and verify that we can connect to the Server1 web page.
- 3. From the Cluster WebUI, manually move the failover group from Server1 to Server2.
- 4. From the browser of the client machine, access the NLB DNS name and verify that we can connect to the Server2 web page.
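The same check can be run from the client machine's command line (the NLB DNS name below is a placeholder):

```
# Confirm the DNS name resolves through the on-premises DNS server
# and the Route 53 resolver (placeholder NLB DNS name).
dig +short internal-ha-web-nlb-0123456789.elb.us-east-1.amazonaws.com

# Fetch the page; it should be served by Server1 before the failover
# and by Server2 afterwards.
curl http://internal-ha-web-nlb-0123456789.elb.us-east-1.amazonaws.com/
```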
Conclusion
This time, we tried connecting to an HA cluster on AWS using NLB and VPN.
In this configuration, you do not need to run the AWS CLI from EXPRESSCLUSTER to change AWS environment settings. Also, neither the client machine nor the HA cluster servers need an internet connection.
When building such a cluster because of security requirements or similar constraints, please refer to this procedure.
If you are considering introducing the configuration described in this article, you can validate it with the trial module of EXPRESSCLUSTER. Please do not hesitate to contact us if you have any questions.