
We Tried the Automatic Configuration of an HA Cluster by Using Ansible (Windows)

EXPRESSCLUSTER Official Blog

June 22nd, 2021

Introduction

In recent years, configuration management tools such as Ansible, Chef, and Puppet have been used more and more to automate the configuration and operation of infrastructure.

This time, we tried automatically configuring an HA cluster on Amazon Web Services (hereinafter called "AWS") by using Ansible and CreateClusterOnLinux, a tool for creating configuration files for EXPRESSCLUSTER X. This article introduces the environment and how to configure it.

  • *EXPRESSCLUSTER is the brand name used for overseas sales, while CLUSTERPRO is the brand name used for sales in Japan.


1. About an Automatic Configuration Tool

1.1 What is Ansible?

Ansible is an open-source configuration management tool with the following features:

  • Scripts can be written in YAML, which is highly readable.
  • Operations are idempotent: performing the same operation more than once always produces the same result.
  • No agent needs to be installed on the servers to be configured.

With Ansible, servers are configured remotely by using a configuration file called a playbook, which defines what is to be configured on the target servers. A minimal sketch is shown below.
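The following is a minimal sketch of this flow, not part of the playbook used in this article: an inventory file and a one-task playbook targeting a Windows host over WinRM. The host name, credentials, and file names are illustrative assumptions.

$ cat > hosts.ini <<'EOF'
# Minimal inventory: one Windows host and its WinRM connection settings (illustrative values)
[windows]
server-a ansible_user=Administrator ansible_password=******** ansible_connection=winrm ansible_winrm_server_cert_validation=ignore
EOF
$ cat > ping.yml <<'EOF'
# Minimal playbook: a single task that checks connectivity with the win_ping module
- hosts: windows
  tasks:
    - name: Check connectivity to the Windows host
      win_ping:
EOF
$ ansible-playbook -i hosts.ini ping.yml

In the procedure described in this article, the equivalent inventory and connection settings are provided by the downloaded playbook and edited in "4.1 Customizing According to the Configuration Environment".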

1.2 What is CreateClusterOnLinux?

To build and configure an HA cluster with EXPRESSCLUSTER X, the web-based Cluster WebUI is normally used; to configure an HA cluster automatically, however, a command-line tool is needed.

The CreateClusterOnLinux repository of the Open Knowledge project for EXPRESSCLUSTER on GitHub provides such a tool. Using this tool, a configuration file for EXPRESSCLUSTER X can be created from the command line without using Cluster WebUI.

CreateClusterOnLinux runs on Linux, and its executable is named "clpcfset". According to the parameters given as options, "clpcfset" creates a cluster, adds servers, and adds group resources and monitor resources to the EXPRESSCLUSTER configuration file ("clp.conf") located in the current directory.

The following is an example of the command for adding a server that constitutes an HA cluster.

$ clpcfset add srv "server-a" 0
(Add the host "server-a" with priority 0.)

The following is an example of the commands for adding a service resource that starts and stops IIS.

$ clpcfset add rsc "failover1" "service" "service-IIS"
(Add a service resource named "service-IIS" to the failover group "failover1".)

$ clpcfset add rscparam "service" "service-IIS" "parameters/name" "World Wide Web Publishing Service"
(Set the IIS service name in the "Service name" parameter of the service resource.)

To create a configuration file that can actually be used, "clpcfset" needs to be executed repeatedly, once for each setting item.
In this article, a script named "clp_create_config.sh" is used to create the configuration file for a two-node HA cluster. This "clp_create_config.sh" is included in the "Ansible Playbook for the automatic configuration of an HA cluster with EXPRESSCLUSTER X" ("clp_playbook-202106.tar.gz") described later.
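As an illustration of this repetition (a rough sketch only, not the actual "clp_create_config.sh"), such a script simply chains "clpcfset" calls like the examples above. The server names, priorities, and resource parameters below are the illustrative values used in this article.

#!/bin/sh
# Rough sketch: build up clp.conf in the current directory step by step.
# Add the two servers that constitute the HA cluster (priority 0 = highest).
clpcfset add srv "server-a" 0
clpcfset add srv "server-b" 1
# Add a service resource for IIS to the failover group and set its service name.
clpcfset add rsc "failover1" "service" "service-IIS"
clpcfset add rscparam "service" "service-IIS" "parameters/name" "World Wide Web Publishing Service"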

  • *"clpcfset" is included in EXPRESSCLUSTER X 4.3 or later.
    This article assumes that we are using CreateClusterOnLinux downloaded from GitHub.

2. HA Cluster Configuration

2.1 HA Cluster Configuration

This time, we perform the automatic configuration of an "HA cluster based on VIP control", showing each step along the way.

(An HA cluster based on VIP control is one of the cluster configurations available when using EXPRESSCLUSTER on AWS.)

The automatic configuration in this article assumes that the network and the servers constituting the HA cluster have already been created. For details about the "HA cluster based on VIP control", see the relevant chapter of the guide linked at the end of this section (2.1).
Additionally, in the AWS environment, an HA cluster can also be configured automatically by using CloudFormation. For details, see this blog article.

The configuration diagram of the HA cluster configured this time is shown below:

Create one VPC in the N. Virginia region, and place EC2 instances (Server-A, Server-B) in the private subnets of availability zones A and B respectively, as the server instances that constitute a two-node cluster.

Since the AWS CLI needs to be used for the "HA cluster based on VIP control" configured this time, an Internet gateway and NAT instances are added for communication with the regional endpoint.

In the public subnet of availability zone A, create a Windows OS instance (Client) that will serve as a client for checking the behavior of EXPRESSCLUSTER X and the web service. In addition, create a Linux OS instance (Manager (Ansible)) to perform the automatic configuration with Ansible.

For more information on an HA cluster configuration in the AWS environment, see the configuration guides below.

[Reference]
Documentation - Setup Guides
  • Windows > Cloud > Amazon Web Services

2.2 Configuring EXPRESSCLUSTER

Assume that an HA cluster is to be configured as follows. EXPRESSCLUSTER X 4.3 for Windows is used.

A failover group and group resources are created as follows:

Failover group name   Group resource name   Resource type             Description
failover01            awsvip-web            AWS Virtual IP resource   For controlling VIP
failover01            service-IIS           Service resource          For providing Web services
failover01            md-data               Mirror disk resource      For mirroring data of Web services

Monitor resources are created as follows:

Monitor resource name   Resource type                     Description
awsvipw-web             AWS Virtual IP monitor resource   For monitoring the awsvip-web resource
awsazw                  AWS AZ monitor resource           For monitoring availability zones
servicew-IIS            Service monitor resource          For monitoring Web services
mdw-data                Mirror disk monitor resource      For monitoring mirror disks
mdnw-data               Mirror connect monitor resource   For monitoring the network for mirroring
userw                   User space monitor resource       For monitoring stalls in user space

2.3 Ansible Playbook

In this article, we have prepared an Ansible playbook and roles with the following features for the automatic configuration.

2.3.1 Roles for Setting up OS

With these roles, configure the Windows OS settings and install the software needed to operate EXPRESSCLUSTER X.

Role name Feature
OS_set_hostname Changes the hostname of a server constituting an HA cluster.
OS_register_hosts Adds names and IPs of the servers constituting an HA cluster to the hosts file.
OS_install_python Downloads and installs Python 3.
OS_install_awscli Downloads, installs, and configures AWS CLI.
OS_prepare_mirrordisk Creates and formats the partition for mirror disk resources.
OS_install_IIS Installs IIS, changes its startup type from automatic to manual, and changes the document root.
EXPRESSCLUSTER_X_install Installs EXPRESSCLUSTER X.
EXPRESSCLUSTER_X_register_license Registers the licenses of EXPRESSCLUSTER X.
EXPRESSCLUSTER_X_set_firewall Sets the firewall permission needed for EXPRESSCLUSTER X.

2.3.2 Role for Collecting Server Information

With the following role, collect the information after the OS setup and create an information file ("inspects.txt") needed to create a configuration file for EXPRESSCLUSTER X.
This "inspects.txt" is used as the information source when "clp_create_config.sh" creates the configuration file.

Role name Feature
EXPRESSCLUSTER_X_inspects Obtains GUIDs of the partitions for mirror disks of each server.

2.3.3 Roles for Configuring an HA Cluster

With these roles, create a configuration file ("clp.conf") for EXPRESSCLUSTER X based on the information file ("inspects.txt"), then import it and apply the settings to the servers constituting the HA cluster system.
Finally, launch the cluster system to start the service.

Role name Feature
EXPRESSCLUSTER_X_apply_config Imports the configuration file for EXPRESSCLUSTER X and applies the settings to each server.
EXPRESSCLUSTER_X_start_cluster Launches an HA cluster system.

3. Preparation

As described above, the procedure for automatically configuring the AWS environment by using CloudFormation is introduced in another blog article.

Following the procedures in that article enables automatic configuration covering the steps from "3.2 Configuring the Network" to "3.5 Creating EC2 Instances for HA Cluster Server" described later.

  • *Perform the following steps yourself, since they are not configured automatically:
  • Creating the EC2 instances for management in "3.3 Creating EC2 Instances for Manager and Client"
  • "3.5.3 Setting Permission for WinRM" in "3.5 Creating EC2 Instances for HA Cluster Server"

3.1 Downloading the Files

Prepare the files below beforehand.
Use EXPRESSCLUSTER X 4.3 for Windows.

For evaluation purposes, the trial module of the installer and the license key for EXPRESSCLUSTER X can be downloaded and used.
As of the time of publishing this article, the latest version of CreateClusterOnLinux is v2.0.6, which can be obtained by specifying the v2.0.6 tag or by downloading it from here.
Besides the above files, Python and the AWS CLI for Windows are also needed; however, you do not need to download them, because the playbook installs them automatically.

3.2 Configuring the Network

Create a VPC and subnets in the AWS environment by referring to the configuration diagram shown in “2.1 HA Cluster Configuration” and the following table.
Furthermore, properly set the Internet Gateway, the Route Tables, and the Security Groups.

VPC
Name    CIDR          Region
VPC-A   10.0.0.0/16   us-east-1
Subnets
Subnet name   Network address   Availability zone   Description
Subnet-A1     10.0.10.0/24      us-east-1a          Public subnet
Subnet-B1     10.0.20.0/24      us-east-1b          Public subnet
Subnet-A2     10.0.110.0/24     us-east-1a          Private subnet
Subnet-B2     10.0.120.0/24     us-east-1b          Private subnet
Route Tables
Specify the virtual IP address (hereinafter called "VIP") used by the AWS Virtual IP resource as the Destination, and specify the ENI ID attached to a server instance as the Target.
(The ENI ID of either Server-A or Server-B can be set.)
Destination        Target
"VIP address"/32   eni-xxxxxxxxxxxxxxxxx

3.3 Creating EC2 Instances for Manager and Client

Create a Linux instance (Manager (Ansible)) that can execute Ansible and CreateClusterOnLinux.
In addition, create a Windows instance (Client) as a client terminal.

EC2 Instances
Host name   OS                    Subnet      IP address                             Remarks
manager     CentOS 8.2            Subnet-A1   10.0.10.xxx (DHCP automatic setting)   For executing Ansible
client      Windows Server 2019   Subnet-A1   10.0.10.xxx (DHCP automatic setting)   For remote desktop connection and for checking behaviors on the web

3.4 Creating EC2 Instances for NAT

Because VIP control is performed with the AWS CLI, create NAT instances to be used for communication from the EC2 instances for the HA cluster servers to the regional endpoint.

In this article, NAT instances are used, although a proxy server or NAT gateways can also be used for communication with the endpoint.

EC2 Instances
Host name   OS             Subnet      IP address
nat1        Amazon Linux   Subnet-A1   10.0.10.xxx (DHCP automatic setting)
nat2        Amazon Linux   Subnet-B1   10.0.20.xxx (DHCP automatic setting)

Furthermore, add a route to NAT instances to the route tables associated with the private subnets (Subnet-A2, Subnet-B2).

Route Tables
Destination   Target                                   Subnet to be associated
0.0.0.0/0     i-xxxxxxxxxxxxxxxxx (nat1 instance ID)   Subnet-A2
0.0.0.0/0     i-yyyyyyyyyyyyyyyyy (nat2 instance ID)   Subnet-B2

3.5 Creating EC2 Instances for HA Cluster Server

3.5.1 Creating EC2 Instances

Create two EC2 instances for an HA cluster server according to the table below:

EC2 Instances
Host name   OS                    Subnet      IP address
server-a    Windows Server 2019   Subnet-A2   10.0.110.100
server-b    Windows Server 2019   Subnet-B2   10.0.120.100

Additionally, create an EBS volume of 10 GB or more and attach it to each instance as a second disk for the mirror disk resource.

3.5.2 Additional Settings to the EC2 Instances

Disable [Source/destination check] for each of the EC2 instances created for the HA cluster servers, using the following procedure (an AWS CLI equivalent is shown after these steps). If this setting is not performed, access to the HA cluster server instances via the VIP set up by the AWS Virtual IP resource will fail.

  • 1. Display the instance list in the AWS Management Console.
  • 2. Select the instance for the HA cluster server, and go to [Actions] -> [Networking] -> [Change source/destination check].
  • 3. Select the [Stop] checkbox, and click the [Save] button.
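The same change can also be made with the AWS CLI, as a command-line alternative to the console steps above (the instance ID is a placeholder):

$ aws ec2 modify-instance-attribute \
    --instance-id i-xxxxxxxxxxxxxxxxx \
    --no-source-dest-check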

3.5.3 Setting Permission for WinRM

For Ansible to operate, WinRM must be set up on the instances for the HA cluster servers to which Ansible connects in order to perform the automatic configuration.

Connect to the server instance via remote desktop. Then, launch Windows PowerShell with administrator privileges. (In the right-click menu, select [More] -> [Run as administrator].)

Obtain a setup script for WinRM (“ConfigureRemotingForAnsible.ps1”) by the following command.

> Invoke-WebRequest -Uri https://raw.githubusercontent.com/ansible/ansible/devel/examples/scripts/ConfigureRemotingForAnsible.ps1 -OutFile ConfigureRemotingForAnsible.ps1

Execute the setup script.

> powershell -ExecutionPolicy RemoteSigned .\ConfigureRemotingForAnsible.ps1

Perform the same settings on each of the servers constituting the HA cluster system.

3.6 Setting up an Ansible Execution Environment

Perform the setup of Ansible for the Manager (Ansible) instance created in “3.3 Creating EC2 Instances for Manager and Client”.

3.6.1 Installing a Package for Executing Ansible

Connect to the Manager (Ansible) instance, then install Ansible and also the package needed for the automatic configuration.

For a Linux distribution that does not have the "dnf" command, install the packages in the way appropriate for that distribution, for example by using the "yum" command.

$ sudo dnf install python3 unzip
$ sudo dnf install tar
$ sudo pip3 install ansible pywinrm
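Optionally, you can confirm that the tools were installed by checking their versions:

$ ansible --version
$ python3 --version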

3.6.2 Extracting Ansible Playbook

Create a work directory, and extract the Ansible playbook downloaded in "3.1 Downloading the Files".
From this section onward, it is assumed that all the files downloaded beforehand are located in the user's home directory (~/).

$ mkdir work
$ cd work
$ tar xzf ~/clp_playbook-202106.tar.gz

3.6.3 Placing CreateClusterOnLinux

Extract the CreateClusterOnLinux file, and copy the “clpcfset” command.

$ cd clp_playbook
$ tar xzf ~/CreateClusterOnLinux-2.0.6.tar.gz
$ cp -p CreateClusterOnLinux-2.0.6/src/v2/clpcfset clpcfset
$ rm -rf CreateClusterOnLinux-2.0.6

3.6.4 Placing Files Such as Installer

Copy the x64 directory containing the installer of EXPRESSCLUSTER X for Windows to the directory for the file transfer (files/Windows).

$ unzip ~/ecx43w_x64.zip
$ cp -pr ecx43w_x64/Windows/4.3/common/server/x64 files/Windows/
$ rm -rf ecx43w_x64

Additionally, copy the license key files (*.key) of EXPRESSCLUSTER X into the files/Windows directory.
$ cp -pr ~/*.key files/Windows/

3.6.5 Directory Tree for Files

After the procedures performed so far, the files are laid out in the following directory tree on Manager (Ansible):

work/clp_playbook/
       +--- 1_setup_servers_os.yml             Playbook for setting up server OS
       +--- 2_inspect_servers.yml              Playbook for collecting server information
       +--- 3_make_cluster.yml                Playbook for building and configuring HA cluster
       +--- clp_create_config.sh               Script for creating configuration file for HA cluster (Assisting the playbook)
       +--- clpcfset                           Execution file of CreateClusterOnLinux
       +--- files/                             For placing installer and license key for EXPRESSCLUSTER X
       |      +--- Windows/
       |             +--- x64/                 Set of EXPRESSCLUSTER X installer
       |             +---*.key                 License key for EXPRESSCLUSTER X
       +--- inventories/
       |      +--- hosts                       Inventory (for specifying server to be configured)
       |      +--- group_vars/                 Inventory (for setting playbook settings common to servers)
       |             +--- all.yml
       |      +--- host_vars/                  Inventory (for setting playbook for each server)
       |             +--- server-a.yml
       |             +--- server-b.yml
       +--- roles/                             Roles for Ansible playbook for automatic configuration with EXPRESSCLUSTER X
              +--- EXPRESSCLUSTER_X_apply_config/  *Description about files under respective roles’ directories is omitted.
              +--- EXPRESSCLUSTER_X_inspects/
              +--- EXPRESSCLUSTER_X_install/
              +--- EXPRESSCLUSTER_X_register_license/
              +--- EXPRESSCLUSTER_X_set_firewall/
              +--- EXPRESSCLUSTER_X_start_cluster/
              +--- OS_install_IIS/
              +--- OS_install_awscli/
              +--- OS_install_python/
              +--- OS_prepare_mirrordisk/
              +--- OS_register_hosts/
              +--- OS_set_hostname/

4. Automatic Configuration

4.1 Customizing According to the Configuration Environment

Edit the settings of each file according to the configuration environment.

Perform the editing in the directory that stores the playbook (work/clp_playbook), created in "3.6.2 Extracting Ansible Playbook", moving to each relevant directory as needed.

4.1.1 Settings for the Inventory File and the Server-specific Information

Register the servers to be automatically configured in the inventory file.
In the playbook, the server names written here are used as the Windows OS hostnames and as the server names recognized by EXPRESSCLUSTER X.

$ vi inventories/hosts

For example, the hosts file is written as follows, where server-a and server-b are set:
[all]
server-a
server-b

4.1.2 Settings Common to the Whole Cluster

Configure the information common to the whole cluster as role variables.

$ vi inventories/group_vars/all.yml

An example of "inventories/group_vars/all.yml" is shown below. For details of the role variables, refer to the README file included in the playbook.

# Ansible settings
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore

# OS_register_hosts
VAR_host_list:
  - ip: 10.0.110.100
    name: server-a
  - ip: 10.0.120.100
    name: server-b

# OS_install_awscli
VAR_aws_default_region: us-east-1
VAR_aws_access_key_id: ****
VAR_aws_secret_access_key: ****

# OS_prepare_mirrordisk
VAR_disk_number: 1
VAR_cluster_partition_letter: "E"
VAR_data_partition_letter: "F"

# EXPRESSCLUSTER_X_register_licenses
VAR_EXPRESSCLUSTER_X_licenses:
  - trial_lcns_xxxxxx.key
  - trial_lcns_yyyyyy.key
       :
  - trial_lcns_zzzzzz.key

# EXPRESSCLUSTER_X_set_firewall
VAR_clp_servers:
  - "{{ VAR_host_list[0].ip }}"
  - "{{ VAR_host_list[1].ip }}"
VAR_clp_clients:
  - any
VAR_clp_restapi_clients:
  - any
VAR_clp_webui_clients:
  - any

# VIP for application
VAR_clp_service_vip: 20.0.0.100

4.1.3 Server-specific Settings

Configure the server-specific information as role variables.

$ vi inventories/host_vars/server-a.yml
$ vi inventories/host_vars/server-b.yml

An example of "inventories/host_vars/*.yml" is shown below. For details of the role variables, refer to the README file included in the playbook.
ansible_user: Administrator
ansible_password: '*********************'

VAR_clp_aws_vpc_id: vpc-nnnnnnnn
VAR_clp_aws_eni_id: eni-xxxxxxxxxxxxxxxxx
VAR_clp_aws_az: us-east-1a
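Before running the playbooks, you can optionally verify that Ansible can reach both servers over WinRM with the win_ping module (this check is not part of the original procedure):

$ ansible -i inventories all -m win_ping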

4.2 Executing Ansible

4.2.1 Setting up OS

Execute the playbook that sets the Windows configuration (such as changing the hostname) and installs EXPRESSCLUSTER X. The OS will be restarted a few times during the playbook execution.

$ ansible-playbook -i inventories 1_setup_servers_os.yml

4.2.2 Collecting the Server Information

Execute the playbook that collects the information needed to create a configuration file for EXPRESSCLUSTER X and stores such information in the information file (“inspects.txt”).

$ ansible-playbook -i inventories 2_inspect_servers.yml

After the playbook execution completes, “inspects.txt” is created.
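Optionally, you can confirm that the file exists before proceeding (assuming it is created in the playbook directory where the commands in this article are run):

$ ls -l inspects.txt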

4.2.3 Building and Launching an HA Cluster

Execute the playbook that creates a configuration file for EXPRESSCLUSTER X, then imports it and applies the settings to the respective servers constituting the HA cluster.

$ ansible-playbook -i inventories 3_make_cluster.yml

When the playbook execution completes, the HA cluster is launched and the service starts. (It takes a while until the HA cluster finishes launching and the service becomes available for use.)

5. Checking the Operation

5.1 Checking the Operation of EXPRESSCLUSTER X

Connect to the active server by using Cluster WebUI.

http://"IP address of Server-A":29003

From the Status tab of Cluster WebUI, check whether the servers, the failover group, and the respective group resources and monitor resources are operating normally.

5.2 Accessing a Web Service

Next, from the Client instance, access the virtual IP address with a web browser.

http://"VIP address"

If the access succeeds, a web page is displayed.

In this article, the F drive is the data partition for the mirror disk; therefore, placing the HTML file ("index.html") in the root of the F drive allows the web page to be viewed.

5.3 Failover in the HA Cluster

Connect to the standby server by using Cluster WebUI.

http://"IP address of Server-B":29003

Shut down the active server from the Status tab of Cluster WebUI.

A while after the shutdown completes, the failover group fails over to the standby server.
From the Status tab of Cluster WebUI, check whether the servers, the failover group, and the respective group resources and monitor resources are operating normally.

(The monitor resources related to the mirror disk are displayed in a warning status, but this is not a problem.)

Reload the web browser on the Client instance, and confirm that the web page is displayed normally.
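As an optional additional check not included in the original procedure, you can confirm with the AWS CLI, from a host where it is configured, that the route for the VIP now points to the ENI of the new active server. The route table ID is a placeholder, and 20.0.0.100/32 is the VIP used in this article.

$ aws ec2 describe-route-tables \
    --route-table-ids rtb-xxxxxxxxxxxxxxxxx \
    --query "RouteTables[0].Routes"

(Look for the entry whose DestinationCidrBlock is 20.0.0.100/32 and check its NetworkInterfaceId.)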

Conclusion

This time, we performed the automatic configuration of an HA cluster with IIS, based on VIP control, in the AWS environment by using Ansible and CreateClusterOnLinux. Using Ansible and CreateClusterOnLinux enables automation of the whole configuration work, including the handling of parameters that differ from server to server, which helps to prevent human error in the configuration and improves work efficiency.
If you are considering introducing the configuration described in this article, you can validate it with the trial module of EXPRESSCLUSTER. Please do not hesitate to contact us if you have any questions.