In an RHOSP environment, if an error occurs in a guest OS, the applications on that guest OS can no longer operate. By applying PRIMECLUSTER to the guest OSes, when such an error occurs, PRIMECLUSTER can forcibly stop the virtual machine of the failed guest OS using the OpenStack API and fail over the application from the active guest OS to the standby guest OS, which enables a highly reliable guest OS environment.
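The forcible stop described above corresponds to the "stop server" action of the OpenStack Compute (Nova) API. The sketch below is a minimal illustration of building that request, not PRIMECLUSTER's actual implementation; the endpoint URL, server ID, and token are hypothetical placeholders.

```python
# Minimal sketch: construct the Nova "stop server" action request
# (POST /servers/{server_id}/action with body {"os-stop": null}).
# The endpoint, server ID, and token below are hypothetical.

def build_stop_request(nova_endpoint: str, server_id: str, token: str):
    """Return (method, url, headers, body) for the Nova stop-server action."""
    url = f"{nova_endpoint}/servers/{server_id}/action"
    headers = {
        "X-Auth-Token": token,          # Keystone authentication token
        "Content-Type": "application/json",
    }
    body = {"os-stop": None}            # serialized as {"os-stop": null}
    return "POST", url, headers, body

# Example usage with placeholder values:
method, url, headers, body = build_stop_request(
    "https://rhosp.example.com:8774/v2.1",
    "11111111-2222-3333-4444-555555555555",
    "TOKEN",
)
```

Sending this request (with a valid token and server ID) stops the instance regardless of the guest OS state, which is what allows a hung node to be fenced.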
Note
- The root class of GDS cannot be used.
- Duplicate virtual machine names cannot be used within a project on RHOSP.
- A snapshot of a virtual machine can be taken only while the OS is stopped.
- The auto-scale function of RHOSP cannot be used.
- When using GLS, use the non-redundant NIC configuration of Virtual NIC mode as the redundant line control method.
- GLS cannot be used when configuring the cluster system between guest OSes in RHOSP using the Easy Design and Configuration Feature.
The following cluster systems are available in an RHOSP environment:

- Building the cluster system between guest OSes on one compute node
- Building the cluster system between guest OSes on multiple compute nodes
See the table below for the usage of each cluster system and the notes on building it.
Cluster type | Usage | Note
---|---|---
Building the cluster system between guest OSes on one compute node | Verifying the operation of a userApplication operating on PRIMECLUSTER | -
Building the cluster system between guest OSes on multiple compute nodes | Continuing the operation by failover even if the network or the disk fails | If an error occurs in the compute node in an environment where the high availability configuration for compute instances (*1) is not used, the cluster application is not switched and the cluster node becomes the LEFTCLUSTER state.
*1 For more information on high availability configuration for compute instances, refer to "Red Hat OpenStack Platform High Availability for Compute Instances."
Building the cluster system between guest OSes on one compute node
In this configuration, the cluster system operates on a single compute node. It is a suitable configuration for verifying the operation of a userApplication running on PRIMECLUSTER.
Building the cluster system between guest OSes on multiple compute nodes
In this configuration, different hardware (network or disk) is allocated to each compute node, so the operation can be continued by failover even if the network or the disk fails.
Note
If an error occurs in the compute node in an environment where the high availability configuration for compute instances is not used, the node status becomes LEFTCLUSTER. For how to recover from LEFTCLUSTER, see "I.3.2.1 If Not Using the High Availability Configuration for Compute Instances."
By using the high availability configuration for compute instances, the operation can continue even if an error occurs in the compute node. However, the compute node and the virtual machine where the error occurred must be recovered manually. For the recovery procedure, see "I.3.2.2 If Using the High Availability Configuration for Compute Instances."
In an RHOSP environment, set up the network configuration and the security groups as follows:

Network configuration:

- The cluster interconnect must be a network independent of the administrative LAN, the public LAN, and the network used for the mirroring among servers function of GDS.
- The virtual machines configuring the cluster must be able to communicate with the service endpoints of RHOSP.
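To illustrate the independent-interconnect requirement, the sketch below builds the OpenStack Networking (Neutron) request bodies for a dedicated interconnect network and subnet. Disabling the gateway keeps the network isolated from routed traffic. The network name and CIDR are hypothetical placeholders, and this is one possible way to satisfy the requirement, not a prescribed procedure.

```python
# Sketch: Neutron request bodies for a dedicated cluster-interconnect
# network, separate from the administrative and public LANs.
# Targets POST /v2.0/networks and POST /v2.0/subnets.
# The network name and CIDR are hypothetical placeholders.

def interconnect_network_bodies(network_name: str, cidr: str):
    """Return (network_body, subnet_body) for the Neutron create calls."""
    network_body = {"network": {"name": network_name, "shared": False}}
    subnet_body = {
        "subnet": {
            "name": f"{network_name}-subnet",
            "ip_version": 4,
            "cidr": cidr,
            "gateway_ip": None,  # no gateway: interconnect stays unrouted
            # "network_id" is filled in from the create-network response
        }
    }
    return network_body, subnet_body

net_body, subnet_body = interconnect_network_bodies(
    "cluster-interconnect", "192.168.10.0/24")
```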
Security groups:

Set up the following two security groups:

- A security group for the public and administrative LANs between the virtual machines configuring the cluster
- A security group for the cluster interconnect that blocks communication from anything other than the virtual machines configuring the cluster
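In Neutron, a security-group rule whose `remote_group_id` refers back to the group itself permits traffic only between members of that same group, which matches the "cluster members only" restriction on the interconnect. The sketch below builds such a rule body; the group IDs are hypothetical placeholders, and a real setup would add rules for the specific protocols and ports the cluster uses.

```python
# Sketch: a Neutron security-group rule that allows traffic only between
# members of the same security group (self-referencing remote_group_id).
# Targets POST /v2.0/security-group-rules. Group IDs are hypothetical.

def intra_group_rule(group_id: str, direction: str = "ingress"):
    """Rule admitting traffic only from members of the given group."""
    return {
        "security_group_rule": {
            "security_group_id": group_id,
            "direction": direction,
            "ethertype": "IPv4",
            "remote_group_id": group_id,  # self-reference: members only
        }
    }

# One rule per security group described above (placeholder IDs):
lan_rule = intra_group_rule("LAN-GROUP-ID")
interconnect_rule = intra_group_rule("INTERCONNECT-GROUP-ID")
```

Applying the self-referencing rule to the interconnect security group, and attaching that group only to the interconnect ports of the cluster virtual machines, keeps non-cluster traffic off the interconnect.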