
We Tried Building HA Cluster on Google Cloud Platform (Windows/Linux): Building

EXPRESSCLUSTER Official Blog

October 29th, 2021

Machine translation is used partially for this article. See the Japanese version for the original article.

Introduction

We tried building HA clusters using EXPRESSCLUSTER on Google Cloud Platform (hereinafter called "GCP").

We introduced an overview of HA cluster configurations on GCP in this article.
On GCP, an HA cluster can be built with either of two services, Google Cloud DNS (hereinafter called "Cloud DNS") or Google Cloud Load Balancing (hereinafter called "Cloud Load Balancing"), to switch between the virtual machines that constitute the HA cluster.

In this article, we will introduce the procedure for building HA clusters.

Contents

1. HA Cluster Configurations

The following two configurations are available for HA clusters with EXPRESSCLUSTER on GCP.

  • HA Cluster with Cloud DNS
  • HA Cluster with Cloud Load Balancing (Internal TCP Load Balancing)
In this case, we will build HA clusters that are accessible from clients in the same Virtual Private Cloud network (hereinafter called "VPC network") on GCP.

The OS, kernel, and EXPRESSCLUSTER versions we verified this time are as follows.

Windows
  • OS            : Windows Server 2019 Datacenter
  • EXPRESSCLUSTER: EXPRESSCLUSTER X 4.3 (Internal Version 12.30)
Linux
  • OS            : Red Hat Enterprise Linux 8.2
  • kernel        : 4.18.0-193.28.1.el8_2.x86_64
  • EXPRESSCLUSTER: EXPRESSCLUSTER X 4.3 (Internal Version 4.3.0-1)

1.1 HA Cluster with Cloud DNS

The following configuration uses Cloud DNS to switch the connection destination of the HA cluster.

Clients access the active instance via the DNS name of the record registered in Cloud DNS.
Click here for an overview of how the connection destination of the HA cluster is switched with Cloud DNS.

1.2 HA Cluster with Cloud Load Balancing (Internal TCP Load Balancing)

The following configuration uses Cloud Load Balancing to switch the connection destination of the HA cluster.

Clients access the active instance via the frontend IP address of Cloud Load Balancing.
Click here for an overview of how the connection destination of the HA cluster is switched with Cloud Load Balancing.

2. HA Cluster Building Procedure

2.1 Building HA Cluster with Cloud DNS

2.1.1 Creating and Setting up VPC network

Create a VPC network.
This time, we will create a VPC network in the Oregon region (us-west1).
The configuration of the VPC network is as follows:

  • VPC Network (Name: test-vpc)
  • Subnets
    • subnetwork-1: 10.0.1.0/24
    • subnetwork-11: 10.0.11.0/24
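As a sketch, the VPC network and subnets above could be created with gcloud commands like the following (names and ranges follow this article's example; verify the flags against the GCP documentation):

```shell
# Create a custom-mode VPC network and the two subnets used in this article.
gcloud compute networks create test-vpc --subnet-mode=custom

gcloud compute networks subnets create subnetwork-1 \
    --network=test-vpc --region=us-west1 --range=10.0.1.0/24

gcloud compute networks subnets create subnetwork-11 \
    --network=test-vpc --region=us-west1 --range=10.0.11.0/24
```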

Set up the VPC network so that the instances for the HA cluster can access the Internet.
This is required because the instances must reach the Internet when executing gcloud commands.
For Internet access, you can use either a NAT gateway or NAT instances.
This time, we will use NAT Gateway (Cloud NAT).
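A Cloud NAT gateway is attached to a Cloud Router. As a sketch (router and NAT names are our own examples, not prescribed by GCP):

```shell
# Create a Cloud Router in the VPC network, then a Cloud NAT gateway on it
# so that instances without external IPs can reach the Internet.
gcloud compute routers create test-router \
    --network=test-vpc --region=us-west1

gcloud compute routers nats create test-nat \
    --router=test-router --region=us-west1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```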

Also, set up firewall rules to control communications within the VPC network.

For more information about creating a VPC network, configuring NAT gateway/NAT instances, and setting up firewall rules, see the GCP documentation.

2.1.2 Creating Instances

Create instances for the HA cluster.
Create one instance in each zone (us-west1-a, us-west1-b).

  • Server1
    • Region: us-west1 (Oregon)
    • Zone: us-west1-a
    • Disks: Add a persistent disk for the mirror disk.
    • Networking:
      • Network interfaces:
        • Network: test-vpc
        • Subnetwork: subnetwork-11
        • Primary internal IP: 10.0.11.101
        • External IP: None
  • Server2
    • Region: us-west1 (Oregon)
    • Zone: us-west1-b
    • Disks: Add a persistent disk for the mirror disk.
    • Networking:
      • Network interfaces:
        • Network: test-vpc
        • Subnetwork: subnetwork-11
        • Primary internal IP: 10.0.11.102
        • External IP: None
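As a sketch, Server1 could be created with a gcloud command like the following (the machine type, image, and disk size are illustrative placeholders; repeat with the zone and IP changed for Server2):

```shell
# Create Server1 with no external IP and an extra persistent disk for mirroring.
gcloud compute instances create server1 \
    --zone=us-west1-a \
    --machine-type=e2-medium \
    --network-interface=subnet=subnetwork-11,private-network-ip=10.0.11.101,no-address \
    --create-disk=name=server1-mirror,size=20GB
```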
Grant the instances permission to control GCP resources, either when you create them or afterward.
Use Identity and Access Management (IAM) to grant the following permissions to the instances:

  • Add or remove DNS records
  • View DNS records

For more information about creating instances and setting up Cloud IAM for instances, see the GCP documentation.
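As a sketch, these permissions can be granted by binding a Cloud DNS role to the service account attached to the instances (PROJECT_ID and SERVICE_ACCOUNT are placeholders; a narrower custom role may also be appropriate):

```shell
# Grant the instances' service account the ability to view and modify DNS records.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SERVICE_ACCOUNT" \
    --role="roles/dns.admin"
```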

[Reference]
Compute Engine

To meet the EXPRESSCLUSTER system requirements, do the following after you create the instances:

  • Upgrade or downgrade the OS and kernel
  • Install, update, or downgrade packages

Use OS and kernel versions that meet the EXPRESSCLUSTER system requirements.
See below for EXPRESSCLUSTER system requirements.

2.1.3 Setting Cloud DNS

Use Cloud DNS to create a record set.
This time, we will register "cluster.sample.com" as a type A resource record, which is used as the DNS name (virtual host name) to connect to the HA cluster.
For the IP address of the type A record, specify the IP address of an instance for the HA cluster.

Create a DNS zone. When you create a zone, set the following:

  • DNS zone
    • Zone type: private
    • Zone name: test-zone
    • DNS name: sample.com
    • Options: Default (private)
    • Networks: test-vpc

Add a record set to the zone that you created. When you create a record set, set the following:

  • Record set
    • DNS name: cluster.sample.com
    • Resource Record Type: A
    • TTL: 5
    • TTL Unit: second
    • IPv4 Address: 10.0.11.101

For more information about creating DNS zones, see the GCP documentation.
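As a sketch, the zone and record set above could also be created from the command line (names follow this article's example; verify the flags against the Cloud DNS documentation):

```shell
# Create the private DNS zone visible only inside the VPC network.
gcloud dns managed-zones create test-zone \
    --description="HA cluster zone" \
    --dns-name=sample.com \
    --visibility=private \
    --networks=test-vpc

# Register the A record that clients use as the virtual host name.
gcloud dns record-sets create cluster.sample.com. \
    --zone=test-zone --type=A --ttl=5 --rrdatas=10.0.11.101
```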

2.1.4 Building HA Cluster

Build a mirror disk type HA cluster. The EXPRESSCLUSTER failover group contains two group resources: a Google Cloud DNS resource and a mirror disk resource.

  • * Although we did not do so this time, we recommend using logical volumes when building a mirror disk type cluster.
For detailed instructions, see this article.
  • EXPRESSCLUSTER
    • Failover Group (failover)
      • Google Cloud DNS resource
        • Common
          • Zone Name: test-zone
          • DNS Name: cluster.sample.com
          • IP Address: 10.0.11.101
          • TTL: 5
        • Server1
          • Set Up Individually: On
          • IP Address: 10.0.11.101
        • Server2
          • Set Up Individually: On
          • IP Address: 10.0.11.102
      • Mirror disk resource (for Windows)
        • Data Partition Drive Letter: M:
        • Cluster Partition Drive Letter: R:
      • Mirror disk resource (for Linux)
        • Mount Point: Any (e.g. /mnt/mirror)
        • Data Partition Device Name: /dev/sdb2
        • Cluster Partition Device Name: /dev/sdb1

An HA cluster with Cloud DNS uses the Google Cloud DNS resource to control (add/remove) DNS records.
The Google Cloud DNS resource is available in EXPRESSCLUSTER X 4.3 or later.

2.1.5 Checking the Operation

Verify that the client can access the HA cluster by accessing the DNS name of "cluster.sample.com" added in record sets.

  1. Start the failover group on Server1.
  2. Verify that you can access "cluster.sample.com" from a client and reach Server1.
  3. From Cluster WebUI, manually move the failover group from Server1 to Server2.
  4. Verify that you can access "cluster.sample.com" from a client and reach Server2.

We confirmed that clients could access the HA cluster via the DNS name of the registered record set.
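The switch can also be observed with a standard DNS lookup from a client in the VPC network, for example:

```shell
# Run on a client in the VPC network; after the failover group moves to
# Server2, the answer should change from 10.0.11.101 to 10.0.11.102.
nslookup cluster.sample.com
```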

2.2 Building HA Cluster with Cloud Load Balancing (Internal TCP Load Balancing)

2.2.1 Creating and Setting up VPC network

Create a VPC network.
This time, we will create a VPC network in the Oregon region (us-west1).
The configuration of the VPC network is as follows:

  • VPC Network (Name: test-vpc)
  • Subnets
    • subnetwork-1: 10.0.1.0/24
    • subnetwork-11: 10.0.11.0/24

Set up firewall rules to control communications within the VPC network.

For more information about creating a VPC network, configuring NAT gateway/NAT instances, and setting up firewall rules, see the GCP documentation.

2.2.2 Creating instances

Create instances for the HA cluster.
Create one instance in each zone (us-west1-a, us-west1-b).

  • Server1
    • Region: us-west1 (Oregon)
    • Zone: us-west1-a
    • Disks: Add a persistent disk for the mirror disk.
    • Networking:
      • Network tags: allow-health-check
      • Network interfaces:
        • Network: test-vpc
        • Subnetwork: subnetwork-11
        • Primary internal IP: 10.0.11.101
        • External IP: None
  • Server2
    • Region: us-west1 (Oregon)
    • Zone: us-west1-b
    • Disks: Add a persistent disk for the mirror disk.
    • Networking:
      • Network tags: allow-health-check
      • Network interfaces:
        • Network: test-vpc
        • Subnetwork: subnetwork-11
        • Primary internal IP: 10.0.11.102
        • External IP: None

For more information about creating instances, see the GCP documentation.

[Reference]
Compute Engine
  • * The instances you created may require Internet access.
    If necessary, configure the VPC network so that its instances can reach the Internet.

To meet the EXPRESSCLUSTER system requirements, do the following after you create the instances:

  • Upgrade or downgrade the OS and kernel
  • Install, update, or downgrade packages

Use OS and kernel versions that meet the EXPRESSCLUSTER system requirements.
See below for EXPRESSCLUSTER system requirements.

2.2.3 Setting Cloud Load Balancing

Configure the required settings and use Cloud Load Balancing to create a load balancer.

Set up firewall rules.
Create the following firewall rule to allow communication from the Google Cloud health check systems (130.211.0.0/22 and 35.191.0.0/16) to the instances:

  • Firewall rule
    • Name: test-allow-health-check
    • Network: test-vpc
    • Priority: 1000
    • Direction of traffic: Ingress
    • Action on match: Allow
    • Targets: Specified target tags
    • Target tags: allow-health-check
    • Source filter: IP ranges
    • Source IP ranges: 130.211.0.0/22, 35.191.0.0/16
    • Protocols and ports: Allow all
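As a sketch, this firewall rule can also be created from the command line (verify the flags against the GCP documentation):

```shell
# Allow ingress from the Google Cloud health check ranges to instances
# tagged allow-health-check.
gcloud compute firewall-rules create test-allow-health-check \
    --network=test-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=all \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check
```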

Create instance groups.
Create two unmanaged instance groups, one in each zone, each containing one of the backend instances created in "2.2.2 Creating instances".

  • Instance group 1
    • Name: test-ig-a
    • Region: us-west1 (Oregon)
    • Zone: us-west1-a
    • Network: test-vpc
    • Subnetwork: subnetwork-11
    • VM instances: Server1
  • Instance group 2
    • Name: test-ig-b
    • Region: us-west1 (Oregon)
    • Zone: us-west1-b
    • Network: test-vpc
    • Subnetwork: subnetwork-11
    • VM instances: Server2
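As a sketch, the unmanaged instance groups can be created and populated like this (instance names follow this article's example):

```shell
# Create one unmanaged instance group per zone and add its backend instance.
gcloud compute instance-groups unmanaged create test-ig-a --zone=us-west1-a
gcloud compute instance-groups unmanaged add-instances test-ig-a \
    --zone=us-west1-a --instances=server1

gcloud compute instance-groups unmanaged create test-ig-b --zone=us-west1-b
gcloud compute instance-groups unmanaged add-instances test-ig-b \
    --zone=us-west1-b --instances=server2
```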

Create the following load balancer to balance communications within the VPC network:

  • Load balancer
    • Type: TCP Load Balancing
    • Internet facing or internal only: Only between my VMs
    • Multiple regions or single region: Single region only
    • Name: test-lb
    • Region: us-west1 (Oregon)
    • Network: test-vpc
    • Backend configuration
      • Backends: test-ig-a, test-ig-b
      • Health check:
        • Name: test-health-check
        • Protocol: TCP
        • Port: 12345
        • Proxy protocol: None
        • Health criteria: Default values (this time)
      • Session affinity: None
    • Frontend configuration
      • Name: test-frontend
      • Subnetwork: subnetwork-1
      • Internal IP: 10.0.1.100
      • Ports: the port number of the application
      • Global access: Disable
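As a sketch, the same internal TCP load balancer could be built with gcloud commands like the following (APP_PORT is a placeholder for the application's port number; verify the flags against the GCP documentation):

```shell
# Regional TCP health check on the port the active node opens (12345).
gcloud compute health-checks create tcp test-health-check \
    --region=us-west1 --port=12345

# Internal backend service with both instance groups as backends.
gcloud compute backend-services create test-lb \
    --load-balancing-scheme=INTERNAL --protocol=TCP \
    --region=us-west1 \
    --health-checks=test-health-check --health-checks-region=us-west1

gcloud compute backend-services add-backend test-lb \
    --region=us-west1 \
    --instance-group=test-ig-a --instance-group-zone=us-west1-a
gcloud compute backend-services add-backend test-lb \
    --region=us-west1 \
    --instance-group=test-ig-b --instance-group-zone=us-west1-b

# Frontend forwarding rule with the internal IP clients connect to.
gcloud compute forwarding-rules create test-frontend \
    --load-balancing-scheme=INTERNAL --ip-protocol=TCP \
    --region=us-west1 --network=test-vpc --subnet=subnetwork-1 \
    --address=10.0.1.100 --ports=APP_PORT \
    --backend-service=test-lb --backend-service-region=us-west1
```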

For more information on setting up Cloud Load Balancing, see the GCP documentation.

2.2.4 Building HA Cluster

Build a mirror disk type HA cluster. The EXPRESSCLUSTER failover group contains two group resources: a Google Cloud virtual IP resource and a mirror disk resource.

  • * Although we did not do so this time, we recommend using logical volumes when building a mirror disk type cluster.
For detailed instructions, see this article.
  • EXPRESSCLUSTER
    • Failover group (failover)
      • Google Cloud virtual IP resource
        • Port Number: 12345
      • Mirror disk resource (for Windows)
        • Data Partition Drive Letter: M:
        • Cluster Partition Drive Letter: R:
      • Mirror disk resource (for Linux)
        • Mount Point: Any (e.g. /mnt/mirror)
        • Data Partition Device Name: /dev/sdb2
        • Cluster Partition Device Name: /dev/sdb1

An HA cluster with Cloud Load Balancing uses the Google Cloud virtual IP resource to control the port used for the load balancer health check.
The Google Cloud virtual IP resource is available in EXPRESSCLUSTER X 4.2 or later.
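The mechanism can be sketched in Python (an illustration only, not part of EXPRESSCLUSTER): only the active node accepts TCP connections on the health-check port (12345 above), so only the active node passes the load balancer's TCP health check and receives client traffic.

```python
import socket
import threading

def serve_health_check(port: int, stop: threading.Event) -> None:
    """Accept TCP connections on the health-check port while this node is active.

    This mimics the role of the Google Cloud virtual IP resource: the active
    node listens on the port, so the load balancer's TCP health check succeeds
    only there. On failover, the old active node stops listening and the new
    one starts, which redirects the load balancer's traffic.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(5)
    srv.settimeout(0.2)  # wake up periodically to check the stop flag
    try:
        while not stop.is_set():
            try:
                conn, _ = srv.accept()
                conn.close()  # completing the TCP handshake satisfies the check
            except socket.timeout:
                continue
    finally:
        srv.close()
```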

2.2.5 Checking the Operation

Verify that the client can access the HA cluster by accessing the frontend IP address of Cloud Load Balancing (10.0.1.100).

  1. Start the failover group on Server1.
  2. Verify that you can access the frontend IP address of Cloud Load Balancing (10.0.1.100) from a client and reach Server1.
  3. From Cluster WebUI, manually move the failover group from Server1 to Server2.
  4. Verify that you can access the frontend IP address of Cloud Load Balancing (10.0.1.100) from a client and reach Server2.

We confirmed that clients could access the HA cluster via the frontend IP address of Cloud Load Balancing.

Conclusion

This time, we introduced the procedure for building HA clusters using EXPRESSCLUSTER on GCP.
If you are considering the configurations described in this article, you can validate them with the trial module of EXPRESSCLUSTER. Please do not hesitate to contact us if you have any questions.