
Technical Details

Functional Outline

1. Quick shift to failover for uninterrupted operations

By default, EXPRESSCLUSTER takes about one minute after a failure occurs to activate a standby server and have it take over the floating IP address.
The internal retry and timeout settings that govern the period between when a failure is first detected and when it is judged to be a definite failure can be adjusted, allowing the system to fail over in as little as 10 seconds.
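As an illustrative sketch only (the parameter names and values below are assumptions, not EXPRESSCLUSTER's actual settings), the worst-case detection time is roughly the per-probe budget (interval plus timeout) multiplied by the retry count:

```python
# Illustrative sketch; parameter names and values are assumptions, not
# EXPRESSCLUSTER's actual settings. Worst-case detection time is roughly
# the per-probe budget (interval + timeout) times the retry count.
def detection_time(interval_s: float, timeout_s: float, retries: int) -> float:
    """Seconds from the first failed probe until failure is confirmed."""
    return retries * (interval_s + timeout_s)

# Conservative settings: about one minute before failover starts.
print(detection_time(interval_s=10.0, timeout_s=5.0, retries=4))  # 60.0
# Aggressive settings: failure confirmed in about 10 seconds.
print(detection_time(interval_s=3.0, timeout_s=2.0, retries=2))   # 10.0
```

Shorter intervals and fewer retries detect failures sooner, at the cost of a higher risk of treating a transient glitch as a real failure.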

2. Detectable failures

EXPRESSCLUSTER monitors the system for a wide range of conditions, including hardware failures, operating system faults, and application defects. EXPRESSCLUSTER also constantly monitors both active and standby servers to prevent unsuccessful failover attempts. Moreover, each server unit can be equipped with an ID light that flashes when the server is down, signaling to the system administrator that the state of the cluster system has changed.

(1) Server shutdown/power supply down
(2) OS panic
(3) Partial OS failure (disk I/O hang-up)
(4) Application or service stoppage
(5) Service hang-up*1
(6) Detection of NIC or public LAN abnormality
(7) Abnormality in EXPRESSCLUSTER server module

*1: Agent product required.

By using the optional alert service, users can be notified when a failover occurs due to one of the failures (1) to (7) above. Note that a failover does not occur in the following cases.

  • Partial OS failure (other than disk I/O hang-up)
    In cases where the mouse or keyboard is unresponsive (OS itself is operating normally and the heartbeat packets that are used to monitor the health of each node are still arriving), EXPRESSCLUSTER does not recognize the condition as a failure and does not switch the system to failover.
  • Stalling of application or service
    In cases where an application or service still has live processes but is not working properly, EXPRESSCLUSTER does not recognize the condition as a failure and does not switch the system to failover. However, EXPRESSCLUSTER can detect such a situation and trigger a failover by using one of the monitoring options described later.
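A minimal sketch (assumed logic, not EXPRESSCLUSTER's implementation) of the heartbeat idea mentioned above: a peer node is declared down only after several consecutive missed heartbeats, which is why a node whose OS is still sending heartbeats (for example, one where only the keyboard has failed) is never judged as down.

```python
# Assumed logic for illustration, not EXPRESSCLUSTER's implementation:
# a peer is declared down only after several consecutive missed heartbeats.
class HeartbeatMonitor:
    def __init__(self, miss_threshold: int = 3) -> None:
        self.miss_threshold = miss_threshold  # consecutive misses tolerated
        self.misses = 0

    def tick(self, heartbeat_received: bool) -> bool:
        """Record one heartbeat interval; return True if the peer is down."""
        self.misses = 0 if heartbeat_received else self.misses + 1
        return self.misses >= self.miss_threshold
```

With a threshold of 3, two missed heartbeats followed by a received one resets the count, so only a sustained silence produces a failure verdict.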

In other cases as well, operation management software (such as ESMPRO/AlertManager) can be used to trigger EXPRESSCLUSTER's failover start command upon the occurrence of any given error.

3. Detectable failures for specific applications

The Monitoring Agent (optional) can be used to continuously monitor the state of specific applications and switch the system to failover when an abnormal response or hang-up is detected, enabling operations to continue uninterrupted.

Oracle Application Server 10g is supported.

[ Option when the application is not supported by the Monitoring Agent ]
Such applications can be monitored by creating a script program that constantly checks whether the application is operating correctly and terminates itself when a problem occurs. EXPRESSCLUSTER then monitors this program.

It is also possible to use software such as ESMPRO/AlertManager to make EXPRESSCLUSTER switch the system to failover upon the occurrence of any given error. Failovers can also be triggered manually, or from custom applications, by using commands that fail over or shut down the OS.

4. Alert service

This is an option used to report critical events such as server-down and failover by e-mail.

  • Critical events that have occurred in EXPRESSCLUSTER server are reported to the user by e-mail. Users can also receive e-mail notifications when they are outside their regular working environment by registering their cell phone e-mail address.
  • Normal operation of the cluster system can also be reported (in addition to abnormal operation).
  • Users are quickly notified of any change in the cluster state, as a pop-up tip appears over the icon in the server's system tray when the server status changes. (Windows version only.)
  • Operation of the service can be started simply by setting the mail server information and the recipient's e-mail address. A user name and password can also be set to support SMTP authentication on the mail server.
  • A warning light can be used to indicate server state.
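The following sketch shows what such an alert mail could look like; the addresses, host, and credentials are placeholders, and this is not EXPRESSCLUSTER's actual implementation.

```python
# Illustrative sketch; addresses, host, and credentials are placeholders,
# not EXPRESSCLUSTER's actual implementation.
import smtplib
from email.message import EmailMessage

def build_alert(event: str, server_name: str, recipient: str) -> EmailMessage:
    """Compose a notification for a critical event such as a failover."""
    msg = EmailMessage()
    msg["Subject"] = f"[cluster alert] {event} on {server_name}"
    msg["From"] = "cluster-alert@example.com"  # placeholder sender
    msg["To"] = recipient                      # may be a cell phone address
    msg.set_content(f"Event: {event}\nServer: {server_name}")
    return msg

def send_alert(msg: EmailMessage, host: str, user: str, password: str) -> None:
    """Deliver via an SMTP server that requires authentication."""
    with smtplib.SMTP(host) as smtp:
        smtp.starttls()              # if the server supports TLS
        smtp.login(user, password)   # SMTP authentication (user name/password)
        smtp.send_message(msg)
```

Registering a cell phone e-mail address as the recipient is what allows notification outside the regular working environment.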

5. Available disk configuration

The user can select the disk configuration that best suits their usage environment. Disk configurations are broadly divided into two types: an expandable shared disk type and a cost-effective mirror type.

Diskless type


This is a cluster configuration that does not contain operational data requiring continuity. It is the simplest way of configuring the hardware, operating system and applications into a system with built-in redundancy.

The failover group in the figure on the left is a group of operational services that are moved between servers.

SAN connection, shared disk type


This is a cluster configuration in which a shared disk is used for operational data continuity. This configuration takes full advantage of the high performance, high reliability, and high capacity of shared disks.

Data mirror type


This is a cluster configuration in which a mirrored disk in the server is used for operational data continuity. By using an internal disk, this configuration reduces costs and provides high availability.

Data mirror/shared disk combination types and hybrid types can also be selected.

Combination type


This is a cluster configuration in which both a shared disk and an internal disk are used for operational data continuity. This configuration is ideal for users who need to duplicate certain critical data by mirroring, or who wish to move to a shared disk after starting operations with a mirror-type configuration because the volume of operational data has grown.

NAS connection, shared disk type


This is a cluster configuration in which a partition on a NAS device, rather than on SAN storage, is used to store the operational data shared between servers. This configuration can improve efficiency in environments such as system development environments.

Shared disk type remote hybrid cluster


This is a configuration in which SAN-connected, shared-disk-type clusters are themselves clustered together, and data continuity is achieved by using the cluster software to mirror the operational data stored in the shared disks. This configuration enables the creation of a cluster system in which the shared-disk-type clusters are all on hot standby.

6. Various cluster configurations

Up to 32 shared-disk-type servers and up to 9 mirror-type servers can be clustered.

Shared-disk-type cluster


With a 32-server shared-disk configuration, an M:N configuration consisting of N standby servers can be created, meeting the needs of mission-critical operations.

Mirror-type cluster


With a 9-server N:1 mirror configuration (8:1, max.), the number of standby servers can be reduced to one.

Note that this configuration is not suited to operations in which the write frequency in data mirroring is high.

Active-Standby (unidirectional standby)


This is the most typical cluster configuration and most operational services can be clustered in this way.

If an abnormality occurs in an active server, failover occurs and operations are transferred to a standby server.

Active-Active (bidirectional standby)


This is a cluster configuration employed by users who want to make effective use of the standby server's CPU by running different operational services on each server.

This configuration can also be used when the operational services are of the same type and can be executed (activated) in parallel.

With this configuration, each operational service uses a separate disk for data continuity.

Note that when failover occurs, the operations of two servers are executed by a single server.

N:1 standby


This is a cluster configuration employed by users who wish to avoid the problems inherent in a bidirectional standby configuration.

With this configuration, if an abnormality in one active server triggers failover, it does not affect the performance of the other active server.


M:N standby


This cluster configuration is an expansion of the N:1 configuration and achieves an excellent combination of low cost and high performance by allowing multiple servers to share a single high-cost shared disk.

Because there are multiple standby servers available, the problem of a single standby server being overloaded when multiple active servers fail can be avoided. Moreover, the inclusion of multiple standby servers provides high availability even for operational services that cannot be executed (activated) in parallel.

Remote cluster


All cluster types (1:1 mirror type, N:1 mirror type, N:1 shared disk type, and M:N shared disk type) can be created as remote cluster configurations.

Operational data can also be backed up at a remote location by using a remote cluster.

If an emergency should occur at a site, failover can be triggered at other sites.

7. Server switching in group units

Continuity-target resources in a cluster system (virtual IP, operational applications, etc.) can be defined as a failover group to create a system that operates uninterrupted.

Failover group
A group of resources (cluster resources) used by the cluster server. The failover group is moved between nodes when a failover occurs. Resources in the same group are always moved together.

Cluster resources
Resources that can be registered with a failover group.

  • Operational application/service
  • Shared disk, mirror disk
  • Virtual computer name (Windows version)
  • IP address (floating IP, virtual IP)
  • Etc.
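As a toy model (an assumed structure for illustration, not the product's API) of the rule above, resources registered in one failover group always move between nodes together, never individually:

```python
# Toy model of a failover group; an assumed structure for illustration,
# not the product's API. Registered resources always move as one unit.
from dataclasses import dataclass, field

@dataclass
class FailoverGroup:
    name: str
    owner_node: str
    resources: list = field(default_factory=list)  # apps, disks, IPs, ...

    def failover(self, target_node: str) -> list:
        """Move every resource in the group to the target node as one unit."""
        self.owner_node = target_node
        return [(resource, target_node) for resource in self.resources]
```

For example, a group holding an application, a shared disk, and a floating IP transfers all three to the standby node in a single failover operation.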

8. Monitor/control screen

The cluster system is displayed on a GUI, via which it can be monitored and controlled. The same operations can also be executed from the command line. Note that a Web browser and a Java runtime environment are required to operate the EXPRESSCLUSTER GUI.
