PRIMECLUSTER Installation and Administration Guide 4.5
FUJITSU Software

H.2.1 Software Installation

Install the software required for PRIMECLUSTER on each node.

The explanation is divided into the following topics:

H.2.1.1 Installation and Configuration of Related Software

After installing the software related to PRIMECLUSTER, configure the OS and hardware as required to bring it into operation.

Perform the following steps as necessary.

  1. Creating Virtual Machines

    Take the following steps to set up the system disks and related devices, the shared disks and related devices, and the virtual network.

    • Setting up system disks and related devices

      • When you create a new virtual machine by using vSphere Client or vSphere Web Client, select [Eager Zeroed] for the provisioning of the system disk.

      • For the SCSI controller type, select "LSI Logic Parallel" or "VMware Paravirtual".

      • Set SCSI bus sharing to "None".

    • Setting up shared disks (when using the I/O fencing function)

      • Add a shared disk to be taken over in the cluster system to the virtual machines as a Raw Device Mapping (RDM). Also create a data store to be shared among multiple ESXi hosts; this data store must be different from the shared disk to be taken over in the cluster system. On this data store, deploy the mapping file (.vmdk) of the shared disk.

      • To add a shared disk to the first virtual machine, select "Raw Device Mapping".

      • To add a shared disk to the second virtual machine, select "Use an existing virtual disk" and specify the mapping file of the shared disk added to the first virtual machine.

      • Set the compatibility mode of shared disk to "Physical."

      • For virtual device nodes, use a new SCSI controller which is different from the system disk.

        (Example: For the SCSI disk [SCSI(X:Y)], X indicates the controller number, and Y indicates the disk number. When the virtual device node of system disk is [SCSI(0:0)], do not use the virtual device node with the controller number 0 [SCSI(0:Y)]. Use [SCSI(1:0)] etc.)

      • Set the controller number and the disk number of the virtual device nodes consistently among all the nodes composing the cluster system.

      • For the SCSI controller type, use the same type as the system disk of the guest OS.

      • Set SCSI bus sharing to "Physical."

      • On all the ESXi hosts on which PRIMECLUSTER runs, the disk device of the Raw Device Mapping used for the PRIMECLUSTER shared disk must be marked as "Permanent Reservation".

        Use the following esxcli command to mark the device as permanent reservation.

        esxcli storage core device setconfig -d <naa.id> --perennially-reserved=true

        See KB1016106 in the VMware Knowledge Base for configuration instructions.

        Note

        Do not mark the LUN of the VMFS datastore in which the mapping file of the shared disk is allocated as "Permanent Reservation".

    • Setting up shared disks (when using the function to stop the link with VMware vCenter Server)

      • To use the virtual disk as the shared disk, create the data store shared with each ESXi host. Create the virtual disk in this data store.

      • For virtual device nodes, use a new SCSI controller which is different from the system disk.

        (Example: For the SCSI disk [SCSI(X:Y)], X indicates the controller number, and Y indicates the disk number. When the virtual device node of system disk is [SCSI(0:0)], do not use the virtual device node with the controller number 0 [SCSI(0:Y)]. Use [SCSI(1:0)] etc.)

      • Set the controller number and the disk number of the virtual device nodes consistently among all the nodes composing the cluster system.

      • For the SCSI controller type, use the same type as the system disk of the guest OS.

      • Set SCSI bus sharing as follows:

        - In the cluster environment between guest OSes on a single ESXi host

             [Virtual]

        - In the cluster environment between guest OSes on multiple ESXi hosts

             [Physical]
    • Setting up the virtual network

      • When creating the virtual machine, create at least two networks for the cluster interconnect and connect them to different physical adapters.

      • To share a physical network adapter used as the cluster interconnect among multiple clusters, allocate a different port group on the vSwitch to each cluster system, and set a different VLAN ID for each port group.

      Note

      • When bundling the network used for the interconnect by using VMware NIC teaming, make sure to use one of the following load balancing options (active-active configuration) for NIC teaming:

        1. Route based on source port ID

        2. Route based on source MAC hash

        3. Use explicit failover order

        With any load balancing option other than 1 to 3 above, only a redundant configuration (active-standby) is available.

      • When using VMware vSphere HA, apply the settings to the destination host of the virtual machine.
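    The virtual device node rules in step 1 (place shared disks on a SCSI controller different from the system disk, and keep controller and disk numbers consistent across nodes) can be expressed as a small check. This is a minimal sketch only; the SCSI(X:Y) parsing follows the examples above, and the helper names and sample layouts are illustrative, not part of PRIMECLUSTER:

    ```python
    import re

    def parse_node(node):
        """Parse a virtual device node like 'SCSI(1:0)' into
        (controller_number, disk_number)."""
        m = re.match(r"SCSI\((\d+):(\d+)\)", node)
        if not m:
            raise ValueError("bad device node: %s" % node)
        return int(m.group(1)), int(m.group(2))

    def check_layout(system_disk, shared_disks_per_vm):
        """shared_disks_per_vm: one list of device nodes per VM.
        Returns True only if no shared disk uses the system disk's
        controller and every VM uses identical device nodes."""
        sys_ctrl, _ = parse_node(system_disk)
        for vm in shared_disks_per_vm:
            for node in vm:
                ctrl, _ = parse_node(node)
                if ctrl == sys_ctrl:
                    return False  # shares the system disk controller
        # Controller/disk numbers must match on all nodes.
        return all(vm == shared_disks_per_vm[0] for vm in shared_disks_per_vm)

    print(check_layout("SCSI(0:0)", [["SCSI(1:0)"], ["SCSI(1:0)"]]))  # True
    print(check_layout("SCSI(0:0)", [["SCSI(0:1)"], ["SCSI(0:1)"]]))  # False
    ```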

  2. NTP settings (Guest OS)

    Before building the cluster, make sure to set up NTP that synchronizes the time of each node in the cluster system.

    Make these settings on the guest OS before you install PRIMECLUSTER.

  3. Guest OS settings (Guest OS)

    Take the following steps to set up the guest OS.

    • File system settings for system volume

      If an I/O device where the system volume is placed fails, a cluster failover does not occur, and the system may continue operating based on the data cached in memory.

      If you want PRIMECLUSTER to trigger a cluster failover by panicking a node when an I/O device where the system volume is placed fails, use the ext3 or ext4 file system for the system volume and perform the following setting.

      Setting

      Specify "errors=panic" in the mount options of each partition (ext3 or ext4 file system) included in the system volume.

      Example: To set it in /etc/fstab (when /, /var, and /home exist in one system volume)

      LABEL=/     /     ext3 errors=panic 1 1
      LABEL=/boot /boot ext3 errors=panic 1 2
      LABEL=/var  /var  ext3 errors=panic 1 3
      LABEL=/home /home ext3 errors=panic 1 4

      However, an immediate cluster failover may not occur because it can take time for an I/O error to reach the file system. Writing to the system volume regularly improves the detection of I/O errors.

    • Network settings

      In the guest OS in the cluster system, it is necessary to make network settings such as IP addresses for the public LAN and the administrative LAN.

      Implement these settings on the guest OS that you are going to run as a cluster.
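    The "errors=panic" requirement above can be verified against the contents of /etc/fstab. A minimal sketch; the helper name and sample entries are illustrative, not part of PRIMECLUSTER:

    ```python
    def missing_errors_panic(fstab_text):
        """Return mount points of ext3/ext4 entries whose mount
        options lack errors=panic."""
        missing = []
        for line in fstab_text.splitlines():
            fields = line.split()
            # Skip blank lines, comments, and malformed entries.
            if len(fields) < 4 or fields[0].startswith("#"):
                continue
            if fields[2] not in ("ext3", "ext4"):
                continue
            if "errors=panic" not in fields[3].split(","):
                missing.append(fields[1])
        return missing

    sample = """\
    LABEL=/     /     ext3 errors=panic 1 1
    LABEL=/var  /var  ext3 defaults     1 3
    """
    print(missing_errors_panic(sample))  # ['/var']
    ```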

  4. Installation of PRIMECLUSTER (Guest OS)

    For installing PRIMECLUSTER, an installation script (CLI Installer) is available.

    This script installs PRIMECLUSTER node by node on systems on which Linux(R) and related software are already installed. It is also used for installation on cluster management servers.

    See

    For details on the installation procedure, see the Installation Guide for PRIMECLUSTER.

  5. Checking and setting the kernel parameters

    Depending on the environment, the kernel parameters must be modified.

    Applicable nodes:

    All the nodes on which PRIMECLUSTER is to be installed

    Depending on the utilized products and components, different kernel parameters are required.

    Check PRIMECLUSTER Designsheets and modify the settings as necessary.

    See

    For details on the kernel parameters, see "3.1.7 Checking and Setting the Kernel Parameters."

  6. Setting the I/O fencing function of GDS

    When using the I/O fencing function, set up the I/O fencing function of GDS.

    Add the following line into the /etc/opt/FJSVsdx/sdx.cf file:

    SDX_VM_IO_FENCE=on
    Applicable nodes:

    All the nodes on which PRIMECLUSTER is to be installed.
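    Adding the line can be made idempotent so that repeating the step does not duplicate it. A minimal sketch, assuming sdx.cf holds simple KEY=value lines; the helper name is illustrative, and the result would be written back to /etc/opt/FJSVsdx/sdx.cf on each node:

    ```python
    def ensure_io_fence(text):
        """Return sdx.cf contents with SDX_VM_IO_FENCE=on present,
        appending the line only if it is missing."""
        if "SDX_VM_IO_FENCE=on" in [l.strip() for l in text.splitlines()]:
            return text  # already set; leave the file unchanged
        if text and not text.endswith("\n"):
            text += "\n"
        return text + "SDX_VM_IO_FENCE=on\n"

    print(ensure_io_fence(""))  # SDX_VM_IO_FENCE=on
    ```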

  7. Setting up the /etc/hostid file

    Set hostid that is used with the I/O fencing function.

    According to the following steps, check whether setting up the /etc/hostid file is required, and then, set it up if needed.

    How to check
    Execute the hostid command and check the output.

    When the output is other than "00000000," setting up the /etc/hostid file is not necessary.

    # hostid
    a8c00101

    When the output is "00000000," follow the setting procedure below to set the host identifier (the output of hostid) on all the nodes composing the cluster. Specify a value unique to each node; do not set 00000000.

    Setting procedure

    1. Create the /etc/hostid file.

      # touch /etc/hostid
    2. Create the following Python script file.
      [Contents of the file to be created]

      #!/usr/bin/python
      from struct import pack
      # Write the host identifier as a 4-byte integer to /etc/hostid.
      filename = "/etc/hostid"
      hostid = pack("I", int("0x<hhhhhhhh>", 16))
      open(filename, "wb").write(hostid)

      (<hhhhhhhh>: Describe the intended host identifier as an 8-digit hexadecimal number.)

    3. Set the execute permission on the created script file, and then execute it.

      # chmod +x <created script file name>
      # ./<created script file name>
    4. Execute the hostid command to check if the specified host identifier is obtained.

      # hostid
      hhhhhhhh

      (hhhhhhhh: host identifier that is specified in the script file)
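    The script above stores the identifier as a 4-byte native-endian integer, which the hostid command prints back as 8 hexadecimal digits. A round-trip sketch of that byte layout, using the sample value from the check step (the helper names are illustrative):

    ```python
    from struct import pack, unpack

    def hostid_bytes(hex_id):
        """Pack an 8-digit hex host identifier into the 4-byte
        native-endian form stored in /etc/hostid."""
        return pack("I", int(hex_id, 16))

    def hostid_string(raw):
        """Format the 4 stored bytes the way hostid prints them."""
        return "%08x" % unpack("I", raw)[0]

    # Round trip with the sample identifier from the check step.
    raw = hostid_bytes("a8c00101")
    print(hostid_string(raw))  # a8c00101
    ```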

  8. Configuring VMware vCenter Server

    When using VMware vCenter Server functional cooperation, configure VMware vCenter Server.

    For how to configure VMware vCenter Server, see the documentation published by VMware.

    Also take the following steps after configuring VMware vCenter Server.

    1. For VMware vCenter Server functional cooperation, add a role with the following privileges to VMware vCenter Server:

      • Virtual machine-Interaction-Power-off

      • Virtual machine-Interaction-Power-on

      If the role cannot be added, check for a registered role that has the above privileges.

    2. For VMware vCenter Server functional cooperation, create the user in VMware vCenter Server.

    3. Grant the user created in step 2 permission on the virtual machines used as the cluster, and apply the role added or confirmed in step 1 to this user.

    Note

    • If the route from the virtual machine to VMware vCenter Server is interrupted, the virtual machine cannot be forcibly stopped. Configuring a redundant route to VMware vCenter Server is therefore recommended.

    • Do not include "\" in the virtual machine name. If it is included, the virtual machine cannot be forcibly stopped normally.

  9. Setting up VMware vSphere HA

    Set up VMware vSphere HA to use the function of VMware vSphere HA.

    Refer to the document issued by VMware when setting up VMware vSphere HA.

    Note

    • Set "Restart VMs" for the host failure.

    • Set "Disable" for the Proactive HA failure recovery.

    • The recommended action for the Response for Host Isolation is "Power off and restart VMs." With any other action, userApplication may not fail over, or failover may take longer.

Note

  • To activate the modified kernel parameters and the I/O fencing function of GDS, restart the guest OS after the installation and setup of related software are complete.

  • When using the VMware vCenter Server functional cooperation, do not include "\" in the virtual machine name. If it is included, the virtual machine cannot be forcibly stopped normally.

H.2.1.2 Installation and Environment Configuration of Applications

Install application products to be operated on the PRIMECLUSTER system and configure their environments as necessary.

See

  • For details on environment setup, see manuals for each application.

  • For information on PRIMECLUSTER-related products supporting VMware, see the documentation for each product.