
3.2.2 When building a cluster system between guest OSes on multiple host OSes without using Host OS failover function

This section describes how to install and set up related software when building a cluster system between guest OSes on multiple host OSes without using Host OS failover function.

Perform the steps shown in the figure below as necessary.

See

For details on the configuration in a Xen environment, see "A.12.2 When building a cluster system between guest OSes on multiple host OSes".
For details on the configuration in a KVM environment, see "A.13.2.1 Cluster Configuration Worksheet".

3.2.2.1 Host OS setup (before installing the operating system on guest OS)

If you plan to operate a guest OS as part of a cluster, set up the required disk devices, virtual bridges, virtual SCSI devices, user IDs, and guest OS initializations on the host OS.

Perform the following setup on the host OS after installing the operating system on the host OS, but before installing the operating system on the guest OS.

  1. Creating the virtual SCSI devices

    Xen environment

    When using a shared disk on a guest OS, create the virtual SCSI devices and make them sharable.

    KVM environment

    When using a shared disk on a guest OS, create the virtual SCSI devices and make them sharable.

    For information on how to create the virtual SCSI devices, see "Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide."

  2. Installing and setting up related software

    Install and set up the software product (ETERNUS Multipath Driver) that is required on the host OS for using the system disk of the guest OS. For how to install and set up the related software, see "Software Information" for ETERNUS Multipath Driver.

    Note

    For immediate cluster failover if an I/O device where the system volume is allocated fails

    In the default setting of the ext3 or ext4 file system, even if an I/O device where the system volume is allocated fails, a cluster failover does not occur and the system operation may continue based on the data stored in memory.

    If you want PRIMECLUSTER to trigger a cluster failover immediately when an I/O device where the system volume is allocated fails, perform the following setting.

    Setting

    Specify "errors=panic" to the mount option of each partition (the ext3 or the ext4 file system) included in the system volume.

    Example: To set it in /etc/fstab (when /, /var, and /home exist in one system volume)

    LABEL=/     /     ext3 errors=panic 1 1
    LABEL=/boot /boot ext3 errors=panic 1 2
    LABEL=/var  /var  ext3 errors=panic 1 3
    LABEL=/home /home ext3 errors=panic 1 4

    However, an immediate cluster failover may still not occur, because it can take time for an I/O error to reach the file system. Writing to the system volume regularly increases the chance that an I/O error is detected promptly.
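
    One possible way to write to the system volume regularly (an illustrative sketch only, not a setting prescribed by this guide; the path and interval are placeholders) is to add a periodic write to /etc/crontab, for example:

      */10 * * * * root /bin/touch /var/tmp/fs_io_check && /bin/sync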

  3. Mirroring the guest OS system disk

    To mirror the guest OS system disk, set up the local mirrored volume created on the host OS for the guest OS.

    Note

    For a disk to be mirrored on a guest, add it as a PCI device.

    See

    For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide."

3.2.2.2 Host OS setup (after installing the operating system on guest OS)

Perform the following setup after installing the operating system on guest OS.

  1. Setting up virtual SCSI devices

    Xen environment

    For using a shared disk on a guest OS, you need to set up a virtual SCSI device.

    KVM environment

    For using a shared disk on a guest OS, you need to set up a virtual SCSI device.

    The following shows the setup procedure for virtual SCSI devices in a KVM environment.

    1. Stop the guest OS.

    2. Select the stopped guest OS with the Virtual Machine Manager and click the [Open] button in the toolbar.

    3. Click the hardware details button in the toolbar to display the detailed hardware information.

    4. Select a virtual disk (VirtIO Disk) from the hardware list on the left.

    5. In the [Virtual disk] window, perform the following settings and click [Apply].

      - Select the Shareable check box.

      - Select 'none' for the cache model.

    6. Check the version of the libvirt package on the host OS by using the rpm(8) command.

      # rpm -qi libvirt

    7. If the version of the libvirt package is libvirt-0.9.4-23.el6_2.4 or later, change the device attribute of the disk element from "disk" to "lun" in the guest setting file (/etc/libvirt/qemu/guestname.xml) on the host OS.

      # virsh edit guestname

      Example before change

          :
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
          <target dev='vdb' bus='virtio'/>
          <shareable/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>
          :

      Example after change

          :
        <disk type='block' device='lun'>
          <driver name='qemu' type='raw'/>
          <source dev='/dev/disk/by-id/scsi-1FUJITSU_30000085002B'/>
          <target dev='vdb' bus='virtio'/>
          <shareable/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </disk>

    8. Start the guest OS.

    Note

    • For a disk to be added to a guest, specify it by its by-id name (the device name under /dev/disk/by-id).

    • When you add a Virtio block device to a guest, add the entire disk without dividing (partitioning) it.
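
    As a supplementary example, the by-id name to specify for a disk can be checked on the host OS as follows (the listed names, such as scsi-1FUJITSU_30000085002B in the example above, appear under /dev/disk/by-id):

      # ls -l /dev/disk/by-id/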

  2. Setting up the virtual bridge (administrative LAN/public LAN/cluster interconnect)

    Xen environment

    For the network interfaces, including the administrative LAN, public LAN and cluster interconnect, that are used by virtual domains, you need to set up virtual bridges for the virtual networks beforehand.

    The following virtual bridge configurations are used in cluster systems.

    • gextbr:

      Virtual bridge for the public LAN/cluster interconnect

    • xenbr:

      Virtual bridge for the administrative LAN

    Example: Define the virtual bridges (gextbr4 and gextbr5) for the cluster interconnect in the previously created virtual bridge script file (network-bridge-pcl).

    gextbr4,gextbr5:

    Virtual bridge for the cluster interconnect

    xenbr0,xenbr1:

    Virtual bridge for the administrative LAN

    gextbr2,gextbr3:

    Virtual bridge for the public LAN

    network-bridge-pcl:

    The file name of the script to call the virtual bridge creation script

    (1) Check that the script (/etc/xen/scripts/gext-network-bridge) exists. If not, create it.

    See

    For details, see "PRIMECLUSTER Global Link Services Configuration and Administration Guide: Redundant Line Control Function".

    (2) Edit the script that defines the virtual bridge (network-bridge-pcl) which is located in "/etc/xen/scripts" on the host OS.

    #!/bin/sh
    #
    # Sample of Create/Delete virtual bridge
    #
    # $1 start : Create virtual bridge
    #    stop  : Delete virtual bridge
    #    status: Display virtual bridge information
    
    # Exit if anything goes wrong
    set -e
    
    command=$1
    
    glsxenscript=/opt/FJSVhanet/local/sbin/hanetxen
    xenscript=/etc/xen/scripts/network-bridge
    xenscriptgext=/etc/xen/scripts/gext-network-bridge
    
    # op_start:subscript for start operation #
    op_start () {
            $xenscript $command vifnum=0 netdev=eth0
            $xenscript $command vifnum=1 netdev=eth1
            $xenscriptgext $command extnum=2 netdev=eth2
            $xenscriptgext $command extnum=3 netdev=eth3
            $xenscriptgext $command extnum=4 netdev=eth4.10 ***added
            $xenscriptgext $command extnum=5 netdev=eth5.20 ***added
    }
    
    # op_stop:subscript for stop operation #
    op_stop () {
            op_start $command
    }
    
    case "$command" in
             start)
                      # Create your virtual bridge
                      $glsxenscript stop
                      op_start
                      $glsxenscript start
             ;;
    
             stop)
                      # Delete virtual bridge
                      $glsxenscript stop
                      op_stop
             ;;
    
             status)
                      # display virtual bridge information
                      $xenscript status
             ;;
    
             *)
                      echo "Unknown command: $command" >&2
                      echo 'Valid commands are: start, stop, status' >&2
                      exit 1
    
    esac

    (3) Set execute permissions

    Execute the commands below to set execute permissions on the script (network-bridge-pcl).

    # cp  network-bridge-pcl /etc/xen/scripts
    # cd /etc/xen/scripts
    # chmod +x network-bridge-pcl

    (4) Register with the xend service

    Check the "network-script" parameter of the xend service configuration file (/etc/xen/xend-config.sxp). Set the "network-bridge-pcl" if it is not set.

    # Your default ethernet device is used as the outgoing interface, by default.
    # To use a different one (e.g. eth1) use
    #
    # (network-script 'network-bridge netdev=eth1')
    #
    # The bridge is named xenbr0, by default.  To rename the bridge, use
    #
    # (network-script 'network-bridge bridge=<name>')
    #
    # It is possible to use the network-bridge script in more complicated
    # scenarios, such as having two outgoing interfaces, with two bridges, and
    # two fake interfaces per guest domain.  To do things like this, write
    # yourself a wrapper script, and call network-bridge from it, as appropriate.
    #
    (network-script network-bridge-pcl)

    KVM environment

    For the network interfaces, including the administrative LAN, public LAN and cluster interconnect, that are used by virtual domains, you need to set up virtual bridges for the virtual networks beforehand.

    (1) Setting up a virtual bridge for the administrative LAN

    Edit the /etc/sysconfig/network-scripts/ifcfg-ethX file as follows:

    DEVICE=ethX
    HWADDR=XX:XX:XX:XX:XX:XX
    BOOTPROTO=none
    ONBOOT=yes
    BRIDGE=brX

    Note

    For HWADDR, set the MAC address of the network interface card you are using.

    Create the interface setting file, /etc/sysconfig/network-scripts/ifcfg-brX, for the virtual bridge.

    DEVICE=brX
    TYPE=Bridge
    BOOTPROTO=static
    IPADDR=xxx.xxx.xxx.xxx
    NETMASK=xxx.xxx.xxx.xxx
    ONBOOT=yes

    Note

    For IPADDR and NETMASK, set the IP address and netmask used to connect to the external network. If IPv6 addresses are required, configure the settings so that IPv6 addresses are assigned as well.

    (2) Setting up virtual bridges for the public LAN and cluster interconnect

    Edit the /etc/sysconfig/network-scripts/ifcfg-ethX file as follows:

    DEVICE=ethX
    HWADDR=XX:XX:XX:XX:XX:XX
    BOOTPROTO=none
    ONBOOT=yes
    BRIDGE=brX

    Note

    For HWADDR, set the MAC address of the network interface card you are using.

    Create the interface setting file, /etc/sysconfig/network-scripts/ifcfg-brX, for the virtual bridge.

    DEVICE=brX
    TYPE=Bridge
    ONBOOT=yes
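
    After creating the interface setting files, the virtual bridge settings can be applied, for example, by restarting the network service on the host OS (an illustrative step; restarting the host OS also applies them):

    # service network restart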

  3. Setting the guest OS in the host OS (in a KVM environment)

    In a KVM environment, perform the following settings so that the guest OS stops normally if the host OS is shut down by mistake while the guest OS is running.

    Define the following two values in /etc/sysconfig/libvirt-guests. When values are already defined, change them to the following values:

    • ON_SHUTDOWN=shutdown

    • SHUTDOWN_TIMEOUT=300

    Specify the timeout duration (in seconds) for shutting down the guest OS in SHUTDOWN_TIMEOUT. Estimate the time required to shut down the guest OS and set the value accordingly. When multiple guest OSes are set, set the largest of these times. The above is an example for 300 seconds (5 minutes).
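
    With the values above, /etc/sysconfig/libvirt-guests contains, for example, the following two lines:

    ON_SHUTDOWN=shutdown
    SHUTDOWN_TIMEOUT=300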

  4. Creating a user ID

    Point

    This user ID will be the one used by the shutdown facility to log in to the host OS to force shut down the nodes. This user ID and password are used for configuring the shutdown facility.

    Xen environment

    Create the required general user ID (FJSVvmSP) on the host OS for the guest OS control by PRIMECLUSTER.

    # useradd FJSVvmSP
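
    Because the shutdown facility logs in to the host OS with this user ID and its password, also set a password for the created user ID (a supplementary step; use the password that will later be specified when configuring the shutdown facility):

    # passwd FJSVvmSP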

    KVM environment

    In a KVM environment, you need to set up a user for the shutdown facility for the guest OS control by PRIMECLUSTER.

    (1) Creating a general user ID (optional)

    Create a general user ID (optional) for the shutdown facility in the hypervisor.

    # useradd <User ID>

    (2) Setting up the "sudo" command

    You need to set up the "sudo" command so that the general user ID (optional) for the shutdown facility can execute the command as the root user.

    Using the "visudo" command, set up the general user ID created in step (1) so that it can execute the command without entering the password.

    # visudo

    Example

    <User ID>   ALL=(root) NOPASSWD: ALL

3.2.2.3 NTP setup (host OS and guest OS)

This setup is for synchronizing the time on each node comprising the cluster system, which is necessary when creating a cluster.

This setup should be performed on the host OS and guest OS before installing PRIMECLUSTER.

See

For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide".
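
As a minimal illustration only (the server names are placeholders; specify the NTP servers used in your environment), /etc/ntp.conf on each host OS and guest OS contains entries such as the following:

  server ntp1.example.com
  server ntp2.example.com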

3.2.2.4 Guest OS setup

After completing the installation of the bundled software on the guest OS, initialize the guest OS.

This setup should be performed for all guest OSes comprising the cluster system.

See

For details on settings, see "Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide".

Moreover, on the guest OSes comprising the cluster system, you need to set up the network, including IP addresses of the public LAN and administrative LAN.

Perform this setup on all guest OSes of a cluster.
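
As an illustrative sketch only (the device name and addresses are placeholders), the public LAN interface of a guest OS can be set up with an interface setting file such as /etc/sysconfig/network-scripts/ifcfg-eth0:

  DEVICE=eth0
  BOOTPROTO=static
  IPADDR=xxx.xxx.xxx.xxx
  NETMASK=xxx.xxx.xxx.xxx
  ONBOOT=yes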

Note

Guest domain names set on installation of the guest OSes are used when setting up the Shutdown Facility.

For information on how to check guest domain names, see "Red Hat Enterprise Linux 6 Virtualization Host Configuration and Guest Installation Guide".
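
In a KVM environment, one supplementary way to check the guest domain names is to run the following command on the host OS:

  # virsh list --all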

See

If you want to change the public LAN and administrative LAN used by PRIMECLUSTER, see "8.2 Changing an IP Address on the Public / Administrative LAN".

Information

The Web-Based Admin View automatically configures the settings so that the interface whose IP address corresponds to the host name equivalent to the node name created when PRIMECLUSTER was installed can be used as a transfer route between cluster nodes and cluster management servers, and between cluster management servers and clients.

3.2.2.5 Installing PRIMECLUSTER on guest OSes

Install PRIMECLUSTER on guest OSes.

For details, see "3.3 PRIMECLUSTER Installation."

3.2.2.6 Checking and setting the kernel parameters

To operate the PRIMECLUSTER-related software, you need to edit the values of the kernel parameters based on the environment.

Perform this setup before restarting the system after installing PRIMECLUSTER.

Target node:

All nodes on which PRIMECLUSTER is to be installed

The kernel parameters differ according to the products and components to be used.

Check the Kernel Parameter Worksheet and edit the values if necessary.

See

For information on the kernel parameters, see "A.6 Kernel Parameter Worksheet".

Note

To enable modifications, you need to restart the system after installation.
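
As a sketch of the procedure only (the actual parameter names and values must be taken from the Kernel Parameter Worksheet; nothing below is a prescribed value), the kernel parameters are typically edited in /etc/sysctl.conf and enabled by restarting the system:

  # vi /etc/sysctl.conf          <- add or update the entries listed in the worksheet
  # shutdown -r now              <- restart the system to enable the new values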

3.2.2.7 Installing and setting up applications

Install software products to be operated on the PRIMECLUSTER system and configure the environment as necessary.

For details, see "3.4 Installation and Environment Setup of Applications."