As a default setting, EXPRESSCLUSTER takes about one minute to activate a standby server to take over the floating IP address when a failure occurs.
The internal retry and timeout settings that govern the interval between the detection of a fault and the decision that it is a failure can also be adjusted, allowing the system to fail over in as little as 10 seconds.
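The relationship between these retry and timeout settings and the resulting detection time can be sketched with simple arithmetic. The parameter names and values below are illustrative assumptions, not EXPRESSCLUSTER's actual settings:

```python
# Illustrative only: how a retry interval, retry count, and per-attempt
# timeout determine failure-detection time in a typical HA cluster.
# These names and numbers are assumptions, not EXPRESSCLUSTER settings.

def detection_time(interval_sec: float, retry_count: int, timeout_sec: float) -> float:
    """Worst-case time from the last successful check to a failure
    verdict: the monitor retries `retry_count` times at `interval_sec`
    spacing, and the final attempt waits up to `timeout_sec`."""
    return retry_count * interval_sec + timeout_sec

# Default-like settings: roughly a minute before failover begins.
print(detection_time(interval_sec=10, retry_count=5, timeout_sec=10))  # 60.0

# Tightened settings: a failure is judged in about 10 seconds.
print(detection_time(interval_sec=2, retry_count=4, timeout_sec=2))    # 10.0
```

Shorter intervals detect failures faster but raise the risk of false positives under transient load, which is why these values are tunable rather than fixed.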
EXPRESSCLUSTER monitors the system for a wide range of conditions, including hardware failures, operating system faults and application defects. It also continuously monitors both the active and standby servers to prevent unsuccessful failover attempts. Moreover, each server unit can be equipped with an ID light that flashes when the server is down, signaling to the system administrator that the state of the cluster system has changed.
(1) Server shutdown/power supply down
(2) OS panic
(3) Partial OS failure (disk I/O hang-up)
(4) Application or service stoppage
(5) Service hang-up*1
(6) Detection of NIC or public LAN abnormality
(7) Abnormality in EXPRESSCLUSTER server module
*1: Agent product required.
By using the optional alert service, users can be notified when a failover occurs due to any of the failures (1) to (7) above. Note that a failover does not occur in the following cases.
In other cases, operation management software (such as ESMPRO/AlertManager) can also be used to trigger EXPRESSCLUSTER's failover start command when any given error occurs.
Monitoring Agent (optional) can be used to continuously monitor specific application states and switch the system to failover when an abnormal response or hang-up is detected, enabling operations to continue uninterrupted.


Oracle Application Server 10g is supported.
[ Option when the application is not supported by the monitoring agent ]
Such applications can be monitored by creating a script that continually checks whether the application is operating correctly and terminates itself when a problem is detected. EXPRESSCLUSTER then monitors this script, treating its termination as an application fault.
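A user-written monitor of the kind described above might be sketched as follows. This is an illustration, not a shipped component: `check_app_healthy` is a hypothetical placeholder for a real probe, and the surrounding cluster configuration (watching this process and failing over on its exit) is assumed.

```python
# Sketch of a user-written monitor process: it repeatedly checks the
# application and terminates itself on the first failed check. The
# cluster software is configured to watch this process, so its
# termination is treated as an application fault and triggers failover.

import time

def check_app_healthy() -> bool:
    """Hypothetical probe; replace with a real check (a TCP connect,
    an HTTP ping, an application-specific query, ...)."""
    return True

def monitor(check, interval_sec: float = 5.0, max_checks=None) -> int:
    """Run `check` every `interval_sec` seconds. Return exit status 1
    on the first failure (so the process dies and failover can start),
    or 0 after `max_checks` successful checks (useful for testing)."""
    done = 0
    while max_checks is None or done < max_checks:
        if not check():
            return 1  # application fault -> terminate the monitor
        done += 1
        time.sleep(interval_sec)
    return 0

# In production this would run indefinitely, e.g.:
#     import sys; sys.exit(monitor(check_app_healthy))
```

The key design point is that the monitor communicates failure by dying: the cluster software only needs to watch one process, not understand the application's protocol.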
It is also possible to use software such as ESMPRO/AlertManager to make EXPRESSCLUSTER switch the system to failover when any given error occurs. Failover or an OS shutdown can also be triggered manually, or from custom applications, by using the provided commands.
This is an option used to report critical events such as server-down and failover by e-mail.

The user can select the disk configuration that best suits their usage environment. Disk configurations are broadly divided into two types: an expandable shared disk type and a cost-effective mirror type.

This is a cluster configuration that does not contain operational data requiring continuity. It is the simplest way of configuring the hardware, operating system and applications into a system with built-in redundancy.
The failover group in the figure on the left is a group of operational services that are moved between servers.

This is a cluster configuration in which a shared disk is used for operational data continuity. This configuration takes full advantage of the high performance, high reliability, and large capacity of shared disks.

This is a cluster configuration in which a mirrored disk in the server is used for operational data continuity. By using an internal disk, this configuration reduces costs and provides high availability.
Data mirror/shared disk combination types and hybrid types can also be selected.

This is a cluster configuration in which a shared disk and an internal disk are used for operational data continuity. This configuration is ideal for users who need to duplicate certain critical data by mirroring or who wish to employ a shared disk after starting operations with a mirror type configuration because the volume of operational data has grown.

This is a cluster configuration in which a partition on a NAS, rather than SAN storage, is used to store the operational data shared between servers. This configuration can be used to improve efficiency in environments such as operation system development environments.

This is a configuration in which SAN-connected, shared-disk-type clusters are themselves clustered together, and data continuity is achieved by using the cluster software to mirror the operational data stored on the shared disks. This configuration enables the creation of a cluster system in which the shared-disk-type clusters are all on hot standby.
Up to 32 shared-disk-type servers and up to 9 mirror-type servers can be clustered.

With a 32-server shared-disk configuration, an M:N configuration consisting of N standby servers can be created, meeting mission-critical demands.

With a 9-server N:1 mirror configuration (8:1, max.), the number of standby servers can be reduced to one.
Note that this configuration is not suited to operations in which the write frequency in data mirroring is high.

This is the most typical cluster configuration and most operational services can be clustered in this way.
If an abnormality occurs in an active server, failover occurs and operations are transferred to a standby server.

This is a cluster configuration employed by users who want to effectively utilize a standby CPU and is used for different operational services.
This configuration can also be used when the operational services are of the same type and can be executed (activated) in parallel.
With this configuration, the disk used for data continuity is divided per operation.
Note that when failover occurs, the operations of two servers are executed by a single server.

This is a cluster configuration employed by users who wish to avoid the problems inherent in a bidirectional standby configuration.
With this configuration, if an abnormality in one active server triggers failover, it does not affect the performance of the other active server.

This cluster configuration is an expansion of the N:1 configuration; by connecting multiple servers to a single, expensive shared disk, it achieves an excellent balance of low cost and high performance.
Because there are multiple standby servers available, the problem of a single standby server being overloaded when multiple active servers fail can be avoided. Moreover, the inclusion of multiple standby servers provides high availability even for operational services that cannot be executed (activated) in parallel.

All cluster types (1:1 mirror type, N:1 mirror type, N:1 shared-disk type, and M:N shared-disk type) can be created as remote cluster configurations.
Operational data can also be backed up at a remote location by using a remote cluster.
If an emergency should occur at a site, failover can be triggered at other sites.
Resources requiring continuity in a cluster system (virtual IP addresses, operational applications, etc.) can be defined as a failover group to create a system that operates uninterrupted.
Failover group
A group of resources (cluster resources) used by the cluster server. The failover group is moved between nodes when failover occurs. Resources in the same group are always moved together.
Cluster resources
Resources that can be registered with a failover group.
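The failover-group definitions above can be sketched as a small conceptual model. This is an illustration of the idea that grouped resources always move between nodes as a unit, not EXPRESSCLUSTER's actual API; all names here are invented for the example.

```python
# Conceptual model of a failover group: a named bundle of resources
# (floating IP, disk, services, ...) that always relocates between
# nodes as a unit. Not EXPRESSCLUSTER's API; names are illustrative.

from dataclasses import dataclass

@dataclass
class FailoverGroup:
    name: str
    resources: list  # e.g. ["floating-ip", "shared-disk", "app-service"]
    node: str        # node currently hosting every resource in the group

    def failover(self, target_node: str) -> None:
        # All resources in the group move together; none is left behind,
        # which is what keeps the service consistent after a failover.
        self.node = target_node

grp = FailoverGroup("web", ["floating-ip", "disk1", "httpd"], node="server1")
grp.failover("server2")
print(grp.node)  # server2
```

Because the group, not the individual resource, is the unit of failover, a client that follows the floating IP always finds the matching disk and application on the same node.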
The cluster system is displayed in a GUI, via which it can be monitored and controlled. The same operations can also be executed from the command line. Note that a Web browser and a Java runtime environment are required to operate the EXPRESSCLUSTER GUI.
