This section describes how to install and set up related software when building a cluster system between guest OSes on multiple host OSes using the Host OS failover function.
Figure 3.1 Flow for building a cluster system when using Host OS failover function
After installing the PRIMECLUSTER-related software, you need to set up the operating system, hardware, and so on that will be used and administered.
Perform the following as necessary.
In order for the host OS to work as part of the cluster, network setup is required.
This setup is for synchronizing the time on each node comprising the cluster system, which is necessary when creating a cluster.
This setup should be performed before installing PRIMECLUSTER.
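The following is only an illustration of time synchronization, assuming ntpd on RHEL 6 (chronyd can be used instead on RHEL 7); the NTP server name is a placeholder and must be replaced with the server used at your site.

# vi /etc/ntp.conf
server ntp.example.com iburst
# service ntpd start
# chkconfig ntpd on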
If you plan to operate a guest OS as part of a cluster, set up the required disk devices, virtual bridges, virtual disks, user IDs, and guest OS initializations on the host OS.
Perform the following setup on the host OS after installing the operating system on the host OS and also before installing the operating system on the guest OS.
Creating the virtual disk
When using a shared disk or mirroring among servers on a guest OS, create the virtual disk.
Create the virtio-SCSI device or the virtio block device. For information on how to create them, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."
Note
For a disk to be added to a guest, specify it by its by-id name.
Add a non-partitioned disk, not a partition or file, to the guest.
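To identify the by-id name on the host OS, one way is to list /dev/disk/by-id and choose an entry that has no -part suffix (entries ending in -partN refer to partitions and must not be used). The output below is only an example:

# ls -l /dev/disk/by-id
:
lrwxrwxrwx 1 root root  9 Apr 18 08:40 scsi-36000b5d0006a0000006a1296001f0000 -> ../../sdh
lrwxrwxrwx 1 root root 10 Apr 18 08:40 scsi-36000b5d0006a0000006a1296001f0000-part1 -> ../../sdh1
:

In this example, /dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000 is the whole-disk name to specify for the guest.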
Installing and setting up related software
Install and set up the software product (ETERNUS Multipath Driver) required for using the system disk of the guest OS on the host OS. For how to install and set up the related software, see "Software Information" for ETERNUS Multipath Driver.
Mirroring the guest OS system disk
To mirror the guest OS system disk, set up the mirrored volume of the local class or the shared class created on the host OS for the guest OS.
See
For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."
Perform this setup on the host OS according to the following procedure after installing the operating system on the host OS and the guest OSes.
Setting up the virtual disk
When using a shared disk or mirroring among servers on a guest OS, you need to set up a virtual disk.
The following shows the setup procedure for the virtual disk in a KVM environment.
Stop the guest OS.
Add shareable and cache='none' to the virtio-SCSI device setting described in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS. Additionally, if the device attribute is set to any value other than 'lun', correct it to 'lun'.
# virsh edit guestname
Example before change
:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
<target dev='sdh' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='7'/>
</disk>
:
Example after change
:
<disk type='block' device='lun'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
<target dev='sdh' bus='scsi'/>
<shareable/>
<address type='drive' controller='0' bus='0' target='0' unit='7'/>
</disk>
:
Start the guest OS.
Stop the guest OS.
Select the stopped guest OS with the Virtual Machine Manager and click the [Open] button in the toolbar.
Click the toolbar button that displays the detailed hardware information.
Select a virtual disk (VirtIO Disk) in the hardware list on the left.
In the [Virtual disk] window, perform the following settings and click [Apply].
- Select the Shareable check box.
- Select 'none' for the cache model.
Check the version of the libvirt package on the host OS by using the rpm(8) command.
# rpm -qi libvirt
If the version of the libvirt package is libvirt-0.9.4-23.el6_2.4 or later, change the device attribute, which is set in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS, from 'disk' to 'lun'.
# virsh edit guestname
Example before change
:
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
<target dev='vdb' bus='virtio'/>
<shareable/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
:
Example after change
:
<disk type='block' device='lun'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
<target dev='vdb' bus='virtio'/>
<shareable/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
:
Start the guest OS.
Stop the guest OS.
If a value other than 'lun' is set for the device attribute in the virtio-SCSI device settings described in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS, correct the device attribute to 'lun'.
# virsh edit guestname
Example before change
:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
<target dev='sdh' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='7'/>
</disk>
:
Example after change
:
<disk type='block' device='lun'>
<driver name='qemu' type='raw'/>
<source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
<target dev='sdh' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='0' unit='7'/>
</disk>
:
Start the guest OS.
Stop the guest OS.
Select the stopped guest OS with the Virtual Machine Manager and click the [Open] button in the toolbar.
Click the toolbar button that displays the detailed hardware information.
Select a virtual disk (VirtIO Disk) in the hardware list on the left.
In the [Virtual disk] window, set the serial number on [Serial number] of [Advanced options], and click [Apply].
The serial number must be a character string of up to 10 characters that is unique within the virtual machine.
Check the version of the libvirt package on the host OS by using the rpm(8) command.
# rpm -qi libvirt
If the version of the libvirt package is libvirt-0.9.4-23.el6_2.4 or later, change the device attribute, which is set in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS, from 'disk' to 'lun'.
# virsh edit guestname
Example before change
:
<disk type='block' device='disk'>
<driver name='qemu' type='raw'/>
<source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
<target dev='vdb' bus='virtio'/>
<serial>serial number</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
:
Example after change
:
<disk type='block' device='lun'>
<driver name='qemu' type='raw'/>
<source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
<target dev='vdb' bus='virtio'/>
<serial>serial number</serial>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
:
Start the guest OS.
On the guest OS, make sure that the by-id files of the virtual disks exist.
- Make sure that the by-id files exist for all virtio block devices used for mirroring among servers.
- Make sure that the serial number set in step 5 is included in the file name of each by-id file.
# ls -l /dev/disk/by-id
:
lrwxrwxrwx 1 root root 9 Apr 18 08:44 virtio-disk001 -> ../../vdg
lrwxrwxrwx 1 root root 9 Apr 18 08:43 virtio-disk002 -> ../../vdh
:
(The string that follows "virtio-" in each file name, for example "disk001", is the serial number set in step 5.)
Setting up the virtual bridge (administrative LAN/public LAN/cluster interconnect)
For the network interfaces, including the administrative LAN, public LAN and cluster interconnect, that are used by virtual domains, you need to set up virtual bridges for the virtual networks beforehand.
(1) Setting up a virtual bridge for the administrative LAN
Edit the /etc/sysconfig/network-scripts/ifcfg-ethX file as follows:
DEVICE=ethX
HWADDR=XX:XX:XX:XX:XX:XX
BOOTPROTO=none
ONBOOT=yes
BRIDGE=brX
Note
For HWADDR, set the MAC address of the network interface card you are using.
Create the interface setting file, /etc/sysconfig/network-scripts/ifcfg-brX, for the virtual bridge.
DEVICE=brX
TYPE=Bridge
BOOTPROTO=static
IPADDR=xxx.xxx.xxx.xxx
NETMASK=xxx.xxx.xxx.xxx
ONBOOT=yes
Note
For IPADDR and NETMASK, set IP addresses and netmasks to connect to the external network. When IPv6 addresses are required, make the setting so that IPv6 addresses are assigned.
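When an IPv6 address is required, a minimal sketch of the additional lines in ifcfg-brX is shown below; the address and prefix length are placeholders and must be replaced with the values for your environment.

IPV6INIT=yes
IPV6ADDR=xxxx:xxxx:xxxx:xxxx::xxxx/64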
(2) Setting up virtual bridges for the public LAN and cluster interconnect
Edit the /etc/sysconfig/network-scripts/ifcfg-ethX file as follows:
DEVICE=ethX
HWADDR=XX:XX:XX:XX:XX:XX
BOOTPROTO=none
ONBOOT=yes
BRIDGE=brX
Note
For HWADDR, set the MAC address of the network interface card you are using.
Create the interface setting file, /etc/sysconfig/network-scripts/ifcfg-brX, for the virtual bridge.
DEVICE=brX
TYPE=Bridge
ONBOOT=yes
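After the virtual bridges are created, each virtual network interface of the guest is connected to the corresponding bridge. The following is only a sketch of such an interface definition in the guest setting file (the bridge name brX and the MAC address are placeholders; the actual definition is created when the guest is defined or edited with virsh edit):

<interface type='bridge'>
<mac address='xx:xx:xx:xx:xx:xx'/>
<source bridge='brX'/>
<model type='virtio'/>
</interface>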
Setting the guest OS in the host OS (in a KVM environment)
In a KVM environment, perform the following settings so that the guest OS stops normally if the host OS is shut down by mistake while the guest OS is running.
Define the following two values in /etc/sysconfig/libvirt-guests. When values are already defined, change them to the following values:
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=300
Specify the timeout duration (in seconds) for shutdown of the guest OS in SHUTDOWN_TIMEOUT. Estimate the length of time needed to shut down the guest OS and set the value. When multiple guest OSes are configured, set the largest of their estimated times. The above is an example for 300 seconds (5 minutes).
Note
When editing /etc/sysconfig/libvirt-guests, do not write setting values and comments on the same line.
When changing the settings in /etc/sysconfig/libvirt-guests during operation, make sure to follow the procedure in "9.4.1.3 Changing the Settings in /etc/sysconfig/libvirt-guests."
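For the settings in /etc/sysconfig/libvirt-guests to take effect, the libvirt-guests service must be enabled on the host OS. This is only a sketch; on RHEL 6 the service is typically enabled with chkconfig, and on RHEL 7 with systemctl.

# chkconfig libvirt-guests on      (RHEL 6)
# systemctl enable libvirt-guests.service      (RHEL 7)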
Creating a user ID
Point
This user ID is used by the shutdown facility to log in to the host OS and forcibly shut down the nodes. This user ID and password are used when configuring the shutdown facility.
In a KVM environment, you need to set up a user for the shutdown facility so that PRIMECLUSTER can control the guest OS.
(1) Creating a general user ID (optional)
Create a general user ID for the shutdown facility on the host OS. Any user ID name can be used.
# useradd <User ID>
(2) Setting up the "sudo" command
You need to set up the "sudo" command so that the general user ID created for the shutdown facility can execute commands as the root user.
Use the visudo command to add the following setting so that the general user created in step (1) can execute commands without entering a password.
<User ID> ALL=(root) NOPASSWD: ALL
Moreover, in order to permit sudo execution without a tty, add "#" to the beginning of the following line to comment it out.
Defaults requiretty
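To confirm the setting, one possible check is to switch to the created user and run sudo non-interactively; if the configuration is correct, the command completes without prompting for a password.

# su - <User ID>
$ sudo -n /bin/true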
Install PRIMECLUSTER on the host OS.
For details, see "3.3 PRIMECLUSTER Installation."
After installing the OS and PRIMECLUSTER, you need to configure the software and hardware that enable cluster high-speed failover.
For details, see "3.1.6 Setting Up the Cluster High-Speed Failover Function."
To operate the PRIMECLUSTER-related software, you need to edit the values of the kernel parameters based on the environment.
Perform this setup after installing PRIMECLUSTER and before restarting the operating system.
All the nodes on which PRIMECLUSTER is to be installed
The kernel parameters differ according to the products and components to be used.
Check the Kernel Parameter Worksheet and edit the value if necessary.
See
For information on the kernel parameters, see "A.6 Kernel Parameter Worksheet."
Note
To enable modifications, you need to restart the operating system.
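As an illustration only (the parameter names and values must be taken from the Kernel Parameter Worksheet for your configuration), kernel parameters on RHEL are typically edited in /etc/sysctl.conf and become effective after the operating system is restarted.

# vi /etc/sysctl.conf
kernel.msgmnb = 4194304      (example entry only; use the values from the worksheet)
# shutdown -r now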
Before building a cluster, preparation work is required in the host OS, such as starting up the Web-Based Admin View screen. For details, see "Chapter 4 Preparation Prior to Building a Cluster."
Build a PRIMECLUSTER cluster on the host OS. For details, see "Chapter 5 Building a Cluster." To build the cluster, perform the procedures described in "5.1.1 Setting Up CF and CIP" and "5.1.2 Setting Up the Shutdown Facility." For the shutdown facility, set up the shutdown agent in the same way as for a cluster between native (physical) machines. See "5.1.2 Setting Up the Shutdown Facility," and check the hardware model/configuration to set up the appropriate shutdown agent.
Note
After setting up CF, set the timeout value of the cluster system on the host OS to 20 seconds. For details on the setup, see "11.3.1 Changing Time to Detect CF Heartbeat Timeout."
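The following is only a sketch of how the timeout is commonly changed; it assumes that the CLUSTER_TIMEOUT entry in /etc/default/cluster.config and the cfset -r command apply to your environment. Follow "11.3.1 Changing Time to Detect CF Heartbeat Timeout" for the authoritative procedure.

# vi /etc/default/cluster.config
CLUSTER_TIMEOUT "20"
# cfset -r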
When the cluster interconnect LAN of the host OS is shared with other guest OSes, separate the network for each cluster system with a virtual LAN (VLAN), as in the sketch below.
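As a hedged example of such separation, a tagged VLAN subinterface can be created on the host OS with the standard RHEL network-scripts syntax and connected to the bridge used for the interconnect; the VLAN ID 10 below is a placeholder.

/etc/sysconfig/network-scripts/ifcfg-ethX.10
DEVICE=ethX.10
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
BRIDGE=brX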
After building a cluster on the host OS, install the PRIMECLUSTER-related software, and set up the OS and hardware for installing and operating PRIMECLUSTER.
Perform the following as necessary.
See
For details on the configuration in a KVM environment, see "A.12.2.1 Cluster Configuration Worksheet."
This setup is for synchronizing the time on each node comprising the cluster system, which is necessary when creating a cluster.
This setup should be performed on the guest OS before installing PRIMECLUSTER.
See
For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."
On the guest OSes comprising the cluster system, you need to set up the network, including IP addresses of the public LAN and administrative LAN.
This setup should be performed after installing the operating system.
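As an illustration, a static address for the public LAN or administrative LAN can be set on the guest OS in /etc/sysconfig/network-scripts/ifcfg-ethX; the addresses below are placeholders for the values planned in the worksheet.

DEVICE=ethX
BOOTPROTO=static
IPADDR=xxx.xxx.xxx.xxx
NETMASK=xxx.xxx.xxx.xxx
ONBOOT=yes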
After completing the installation of the bundled software on the guest OS, initialize the guest OS.
This setup should be performed for all guest OSes comprising the cluster system.
Note
Guest domain names set on installation of the guest OSes are used when setting up the Shutdown Facility.
For information on how to check guest domain names, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."
See
If you want to change the public LAN and administrative LAN used by PRIMECLUSTER, see "9.2 Changing the Network Environment."
Information
Web-Based Admin View automatically configures its settings so that the interface whose IP address is assigned to the host name matching the node name created when PRIMECLUSTER was installed is used as the transfer route between cluster nodes and cluster management servers, and between cluster management servers and clients.
Install PRIMECLUSTER on guest OSes.
For details, see "3.3 PRIMECLUSTER Installation."
To operate the PRIMECLUSTER-related software, you need to edit the values of the kernel parameters based on the environment.
Perform this setup after installing PRIMECLUSTER and before restarting the operating system.
All the nodes on which PRIMECLUSTER is to be installed
The kernel parameters differ according to the products and components to be used.
Check the Kernel Parameter Worksheet and edit the value if necessary.
See
For information on the kernel parameters, see "A.6 Kernel Parameter Worksheet."
Note
To enable modifications, you need to restart the operating system.
Install software products to be operated on the PRIMECLUSTER system and configure the environment as necessary.
For details, see "3.4 Installation and Environment Setup of Applications."
Before building a cluster, preparation work is required in the host OS, such as starting up the Web-Based Admin View screen. For details, see "Chapter 4 Preparation Prior to Building a Cluster."
Build a cluster on the guest OS. For details on each item, see "Chapter 5 Building a Cluster."
Note
When the cluster interconnect LAN of the guest OS is shared with other guest OSes and the host OS, separate the network for each cluster system with a virtual LAN (VLAN).
In the CF settings, do not change the timeout value of the guest OS from 10 seconds.
For setup policy for survival priority, see "Survival scenarios" in "5.1.2 Setting Up the Shutdown Facility."
Create cluster applications on the guest OS. For details, see "Chapter 6 Building Cluster Applications."
Note
When creating a cluster application for a guest OS, do not set the ShutdownPriority attribute of RMS.