PRIMECLUSTER  Installation and Administration Guide 4.5
FUJITSU Software

3.2.2 When building a cluster system between guest OSes on multiple host OSes without using the Host OS failover function

This section describes how to install and set up related software when building a cluster system between guest OSes on multiple host OSes without using the Host OS failover function.

Perform the steps shown in the figure below as necessary.

Figure 3.3 Flow of building the cluster system when not using the host OS failover function

3.2.2.1 Host OS setup (before installing the operating system on guest OS)

If you plan to operate a guest OS as part of a cluster, set up the required disk devices, virtual bridges, virtual disks, user IDs, and guest OS initializations on the host OS.

Perform the following setup on the host OS after installing the operating system on the host OS, but before installing the operating system on the guest OS.

  1. Creating the virtual disk

    When using a shared disk or mirroring among servers on a guest OS, create the virtual disk.

    Create the virtio-SCSI device or the virtio block device.

    For information on how to create them, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."

    Note

    • When adding a disk to a guest, specify it by its by-id name (/dev/disk/by-id/...).

    • Add a non-partitioned disk, not a partition or file, to the guest.
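The two notes above can be combined into a quick sanity check (a hedged sketch; the device path is the example by-id name used later in this section, not a real device on your host): whole-disk by-id names carry no "-partN" suffix, while partition names do.

```shell
# Example by-id path only -- substitute the disk you intend to add.
dev="/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000"
case "$dev" in
  *-part[0-9]*) verdict="partition: do not add this to the guest" ;;
  *)            verdict="whole disk: OK to add to the guest" ;;
esac
echo "$verdict"
```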

  2. Installing and setting up related software

    Install and set up the software product (ETERNUS Multipath Driver) required for using the system disk of the guest OS on the host OS. For how to install and set up the related software, see "Software Information" for ETERNUS Multipath Driver.

  3. Mirroring the guest OS system disk

    To mirror the guest OS system disk, set up the local mirrored volume created on the host OS for the guest OS.

    See

    For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."

3.2.2.2 Host OS setup (after installing the operating system on guest OS)

Perform the following setup after installing the operating system on guest OS.

  1. Setting up the virtual disk

    To use a shared disk or mirroring among servers on a guest OS, you need to set up a virtual disk.

    The following shows the setup procedure for the virtual disk in a KVM environment.

    Using virtio-SCSI device as a shared disk
    1. Stop the guest OS.

    2. Add shareable and cache='none' to the virtio-SCSI device setting that is described in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS. Additionally, correct the device attribute to 'lun' if any other value is set.

      # virsh edit guestname

      Example before change

        :
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
          <target dev='sdh' bus='scsi'/>
          <address type='drive' controller='0' bus='0' target='0' unit='7'/>
        </disk>
        :

      Example after change

        :
        <disk type='block' device='lun'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
          <target dev='sdh' bus='scsi'/>
          <shareable/>
          <address type='drive' controller='0' bus='0' target='0' unit='7'/>
        </disk>
        :
    3. Start the guest OS.
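After restarting the guest, the edited definition can be double-checked from the host. The sketch below greps a saved copy of the domain XML for the three settings this procedure adds; in practice dump the definition first with `virsh dumpxml guestname > /tmp/guest.xml` (the heredoc here is a shortened sample standing in for that output).

```shell
# Sample excerpt of an already-edited definition (illustration only).
cat > /tmp/guest.xml <<'EOF'
<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
  <target dev='sdh' bus='scsi'/>
  <shareable/>
</disk>
EOF
# Each of the three required settings should appear; expect a count of 3.
grep -c -e "device='lun'" -e "cache='none'" -e "<shareable/>" /tmp/guest.xml
```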

    Using virtio block device as a shared disk
    1. Stop the guest OS.

    2. Select the stopped guest OS with the Virtual Machine Manager and click the [Open] button in the toolbar.

    3. Click the [Show virtual hardware details] button in the toolbar to display the detailed information of the hardware.

    4. Select a virtual disk (VirtIO Disk) in the hardware list on the left.

    5. In the [Virtual disk] window, perform the following settings and click [Apply].

      - Select the Shareable check box.

      - Select 'none' for the cache model.

    6. Check the version of the libvirt package on the host OS by using the rpm(8) command.

      # rpm -qi libvirt
    7. If the version of the libvirt package is libvirt-0.9.4-23.el6_2.4 or later, change the device attribute, which is set in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS, from disk to lun.

      # virsh edit guestname

      Example before change

        :
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
          <target dev='vdb' bus='virtio'/>
          <shareable/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>
        :

      Example after change

        :
        <disk type='block' device='lun'>
          <driver name='qemu' type='raw' cache='none'/>
          <source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
          <target dev='vdb' bus='virtio'/>
          <shareable/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>
        :
    8. Start the guest OS.
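Step 7 hinges on an RPM version comparison, which is easy to misjudge by eye (for example, 0.10.x is newer than 0.9.x). A hedged sketch of the check using sort -V; the installed version string is a made-up example, so obtain the real one with the rpm command shown in step 6.

```shell
installed="0.10.2-62.el6"    # example; see: rpm -q --qf '%{VERSION}-%{RELEASE}\n' libvirt
required="0.9.4-23.el6_2.4"
# If the required version sorts first (or the two are equal), the installed
# package is "libvirt-0.9.4-23.el6_2.4 or later" and the change applies.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n 1)
if [ "$lowest" = "$required" ]; then
  echo "libvirt is ${required} or later: change device='disk' to device='lun'"
fi
```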

    Using virtio-SCSI device for mirroring among servers
    1. Stop the guest OS.

    2. If a device attribute other than 'lun' is set for the virtio-SCSI device described in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS, correct the device attribute to 'lun'.

      # virsh edit guestname

      Example before change

        :
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
          <target dev='sdh' bus='scsi'/>
          <address type='drive' controller='0' bus='0' target='0' unit='7'/>
        </disk>
        :

      Example after change

        :
        <disk type='block' device='lun'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-36000b5d0006a0000006a1296001f0000'/>
          <target dev='sdh' bus='scsi'/>
          <address type='drive' controller='0' bus='0' target='0' unit='7'/>
        </disk>
        :
    3. Start the guest OS.

    Using virtio block device for mirroring among servers
    1. Stop the guest OS.

    2. Select the stopped guest OS with the Virtual Machine Manager and click the [Open] button in the toolbar.

    3. Click the [Show virtual hardware details] button in the toolbar to display the detailed information of the hardware.

    4. Select a virtual disk (VirtIO Disk) in the hardware list on the left.

    5. In the [Virtual disk] window, set the serial number on [Serial number] of [Advanced options], and click [Apply].
      The serial number should be a character string of up to 10 characters that is unique within the virtual machine.

    6. Check the version of the libvirt package on the host OS by using the rpm(8) command.

      # rpm -qi libvirt
    7. If the version of the libvirt package is libvirt-0.9.4-23.el6_2.4 or later, change the device attribute, which is set in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS, from disk to lun.

      # virsh edit guestname

      Example before change

        :
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
          <target dev='vdb' bus='virtio'/>
          <serial>serial number</serial>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>
        :

      Example after change

        :
        <disk type='block' device='lun'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
          <target dev='vdb' bus='virtio'/>
          <serial>serial number</serial>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>
        :
    8. Start the guest OS.

    9. On the guest OS, make sure that the by-id files of the virtual disks exist.

      - Make sure that the by-id files exist in all virtio block devices used for mirroring among servers.

      - Make sure that the serial number set in step 5 is included in the file name of by-id file.

      # ls -l /dev/disk/by-id
      :
      lrwxrwxrwx 1 root root 9 Apr 18 08:44 virtio-disk001 -> ../../vdg
      lrwxrwxrwx 1 root root 9 Apr 18 08:43 virtio-disk002 -> ../../vdh
      :

      In this example, "disk001" and "disk002" are the serial numbers set in step 5.
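The by-id check in step 9 can also be scripted. A hedged sketch, simulated in a temporary directory so it can run anywhere ("disk001" is an example serial number; on a real guest, list /dev/disk/by-id directly instead):

```shell
# Simulate the guest's /dev/disk/by-id directory (illustration only).
mkdir -p /tmp/by-id
ln -sf ../../vdg /tmp/by-id/virtio-disk001
serial="disk001"    # the serial number set in step 5 (example value)
# Exactly one by-id link should embed the serial; expect a count of 1.
ls /tmp/by-id | grep -c "virtio-${serial}"
```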
  2. Setting up the virtual bridge (administrative LAN/public LAN/cluster interconnect)

    For the network interfaces, including the administrative LAN, public LAN and cluster interconnect, that are used by virtual domains, you need to set up virtual bridges for the virtual networks beforehand.

    (1) Setting up a virtual bridge for the administrative LAN

    Edit the /etc/sysconfig/network-scripts/ifcfg-ethX file as follows:

    DEVICE=ethX
    BOOTPROTO=none
    ONBOOT=yes
    BRIDGE=brX

    Create the interface setting file, /etc/sysconfig/network-scripts/ifcfg-brX, for the virtual bridge.

    DEVICE=brX
    TYPE=Bridge
    BOOTPROTO=static
    IPADDR=xxx.xxx.xxx.xxx
    NETMASK=xxx.xxx.xxx.xxx
    ONBOOT=yes

    Note

    For IPADDR and NETMASK, set IP addresses and netmasks to connect to the external network. When IPv6 addresses are required, make the setting so that IPv6 addresses are assigned.
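Because the same ifcfg skeleton is written on every node, it can help to generate the file from per-node values. A hedged sketch (br0 and the addresses are placeholders; on a real host, write the result to /etc/sysconfig/network-scripts/ifcfg-br0 instead of /tmp):

```shell
BR=br0                  # placeholder bridge name
IP=192.168.10.1         # placeholder; use your administrative LAN address
MASK=255.255.255.0      # placeholder netmask
cat > "/tmp/ifcfg-${BR}" <<EOF
DEVICE=${BR}
TYPE=Bridge
BOOTPROTO=static
IPADDR=${IP}
NETMASK=${MASK}
ONBOOT=yes
EOF
cat "/tmp/ifcfg-${BR}"
```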

    (2) Setting up virtual bridges for the public LAN and cluster interconnect

    Edit the /etc/sysconfig/network-scripts/ifcfg-ethX file as follows:

    DEVICE=ethX
    BOOTPROTO=none
    ONBOOT=yes
    BRIDGE=brX

    Create the interface setting file, /etc/sysconfig/network-scripts/ifcfg-brX, for the virtual bridge.

    DEVICE=brX
    TYPE=Bridge
    ONBOOT=yes
  3. Setting the guest OS in the host OS (in a KVM environment)

    Perform the following settings to stop the guest OS normally if the host OS is shut down by mistake while the guest OS is running.

    Define the following two values in /etc/sysconfig/libvirt-guests. If the values are already defined, change them to the following:

    • ON_SHUTDOWN=shutdown

    • SHUTDOWN_TIMEOUT=300

    Specify the timeout duration (in seconds) for shutting down the guest OS in SHUTDOWN_TIMEOUT. Estimate the time needed to shut down the guest OS and set the value accordingly. When multiple guest OSes are configured, set the largest of their values. The above is an example for 300 seconds (5 minutes).
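The timeout arithmetic can be sketched as below (the measured shutdown time is an assumed example; measure your own slowest guest):

```shell
slowest_shutdown=240   # seconds measured for the slowest guest OS (assumed example)
margin=60              # extra margin before libvirt-guests gives up
timeout=$((slowest_shutdown + margin))
echo "SHUTDOWN_TIMEOUT=${timeout}"   # prints SHUTDOWN_TIMEOUT=300
```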

  4. Starting the libvirt-guests service

    • RHEL6 environment

      Execute the following command on all the nodes to check the startup status of the libvirt-guests service.

      # /sbin/service libvirt-guests status
      stopped

      If "stopped" is displayed, execute the following command.

      If "started" is displayed, it is not necessary to execute the command.

      # /sbin/service libvirt-guests start
    • RHEL7 environment

      Execute the following command on all the nodes to check the startup status of the libvirt-guests service.

      # /usr/bin/systemctl status libvirt-guests.service
      libvirt-guests.service - Suspend/Resume Running libvirt Guests
        Loaded: loaded (/usr/lib/systemd/system/libvirt-guests.service; disabled; vendor preset: disabled)
        Active: inactive (dead)

      If "inactive" is displayed in "Active:" field, execute the following command.

      If "active" is displayed in "Active:" field, it is not necessary to execute the command.

      # /usr/bin/systemctl start libvirt-guests.service
  5. Setting the startup operation of the libvirt-guests service

    • RHEL6 environment

      Make sure that the current libvirt-guests service is enabled on all the nodes.

      # /sbin/chkconfig --list libvirt-guests
      libvirt-guests  0:off   1:off   2:off   3:off   4:off   5:off   6:off

      If any one of the run levels 2, 3, 4, 5 is "off", execute the following command.

      If all of the run levels 2, 3, 4, 5 are "on", it is not necessary to execute the command.

      # /sbin/chkconfig --level 2345 libvirt-guests on
    • RHEL7 environment

      Make sure that the current libvirt-guests service is enabled on all the nodes.

      # /usr/bin/systemctl list-unit-files --type=service | grep libvirt-guests.service
      libvirt-guests.service                        disabled 

      If "disabled" is displayed for libvirt-guests.service, execute the following command.

      If "enabled" is displayed, it is not necessary to execute the command.

      # /usr/bin/systemctl enable libvirt-guests.service
  6. Creating a user ID

    Point

    This user ID will be used by the shutdown facility to log in to the host OS and forcibly shut down the nodes. This user ID and password are also used when configuring the shutdown facility.

    You need to set up a user for the shutdown facility so that PRIMECLUSTER can control the guest OS.

    (1) Creating a general user ID (optional)

    Create a general user ID (optional) for the shutdown facility in the host OS.

    # useradd <User ID>

    (2) Setting up the "sudo" command

    You need to set up the "sudo" command so that the general user ID (optional) for the shutdown facility can execute the command as the root user.

    Use the visudo command to add the following setting so that the general user created in step (1) can execute the command without entering the password.

    <User ID>   ALL=(root) NOPASSWD: ALL

    Moreover, to permit sudo execution without a tty, comment out the following line by adding "#" to the beginning of it.

    Defaults    requiretty
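The two sudoers changes can be verified together on a copy of the file before committing them with visudo. A hedged sketch ("pcluser" is a hypothetical user ID, and the heredoc stands in for /etc/sudoers):

```shell
# Sample excerpt of an already-edited sudoers file (illustration only).
cat > /tmp/sudoers.check <<'EOF'
#Defaults    requiretty
pcluser   ALL=(root) NOPASSWD: ALL
EOF
# Both edits should be present; expect a count of 2.
grep -c -e '^pcluser[[:space:]]*ALL=(root) NOPASSWD: ALL' \
        -e '^#Defaults[[:space:]]*requiretty' /tmp/sudoers.check
```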

3.2.2.3 Guest OS setup

Perform the following procedure on all guest OSes of a cluster.

  1. Setting up the network

    On the guest, you need to set up the network, including IP addresses of the public LAN and administrative LAN.

    This setup should be performed after installing the operating system.

    See

    For information on changing the public LAN and administrative LAN that the PRIMECLUSTER system uses, see "9.2 Changing the Network Environment."

    Information

    Web-Based Admin View automatically sets up an interface that was assigned the IP address of the host name corresponding to the node on which PRIMECLUSTER was installed. This interface will be used as a transmission path between cluster nodes and cluster management server, and between cluster management servers and clients.

  2. Installing the bundled software on the guest OS

    Install the bundled software on the guest OS.

  3. Initial setting

    Initialize the guest OS.

    See

    For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."

  4. Checking the guest domain name

    Check the guest domain names set on installation of the guest OSes. These names are used when setting up the Shutdown Facility. For information on how to check guest domain names, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."

3.2.2.4 NTP setup (host OS and guest OS)

Before building the cluster, make sure to set up NTP that synchronizes the time of each node in the cluster system.

This setup should be performed on the host OS and guest OS before installing PRIMECLUSTER.

See

For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Administration Guide" or "Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide."

3.2.2.5 Installing PRIMECLUSTER on guest OSes

Install PRIMECLUSTER on guest OSes.

For details, see "3.3 PRIMECLUSTER Installation."

3.2.2.6 Checking and setting the kernel parameters

To operate the PRIMECLUSTER-related software, you need to edit the values of the kernel parameters based on the environment.

Perform this setup after installing PRIMECLUSTER and before restarting the operating system.

Target node:

All the nodes on which PRIMECLUSTER is to be installed

The kernel parameters differ according to the products and components to be used.

Check "Setup (initial configuration)" of PRIMECLUSTER Designsheets and edit the value if necessary.

See

For information on the kernel parameters, see "3.1.7 Checking and Setting the Kernel Parameters."

Note

To enable modifications, you need to restart the operating system.
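As a hedged illustration of the workflow (the parameter names and values below are placeholders, not PRIMECLUSTER's required values; take the real ones from "Setup (initial configuration)" of the Designsheets), entries are added to /etc/sysctl.conf and take effect after the reboot:

```shell
# Placeholder entries only -- substitute the values from the Designsheets.
# On a real node, edit /etc/sysctl.conf itself rather than a sample file.
cat > /tmp/sysctl.conf.sample <<'EOF'
kernel.sem = 1100 7208 200 1024
kernel.shmmni = 5310
EOF
# Two kernel.* entries were written; expect a count of 2.
grep -c '^kernel\.' /tmp/sysctl.conf.sample
```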

3.2.2.7 Installing and setting up applications

Install software products to be operated on the PRIMECLUSTER system and configure the environment as necessary.

For details, see "3.4 Installation and Environment Setup of Applications."