PRIMECLUSTER  Installation and Administration Guide 4.5
FUJITSU Software

H.2.3 Building a Cluster

This section describes procedures for setting up a cluster with PRIMECLUSTER in a VMware environment.

H.2.3.1 Initial Setup of CF and CIP

Refer to "5.1.1 Setting Up CF and CIP" to set up CF and CIP on the guest OS.

H.2.3.2 Setting Up the Shutdown Facility (when using VMware vCenter Server Functional Cooperation)

For details on survival priority, see "5.1.2.1 Survival Priority."

In a VMware environment, when a failure occurs on a guest OS, the virtual machine of the guest OS on which the failure was detected is forcibly powered off through cooperation with VMware vCenter Server. This allows the operation to be failed over.

This section explains the method for setting up the SA_vwvmr shutdown agent as the shutdown facility.

Note

Be sure to perform the following operations on all guest OSes (nodes).

  1. Encrypting the password

    Execute the sfcipher command to encrypt passwords for accessing VMware vCenter Server.

    For details on how to use the sfcipher command, see the manual page of "sfcipher."

    # sfcipher -c
    Enter User's Password:
    Re-enter User's Password:
    D0860AB04E1B8FA3
  2. Setting up the shutdown agent

    Specify the shutdown agent.

    Create /etc/opt/SMAW/SMAWsf/SA_vwvmr.cfg with the following contents on all guest OSes (nodes) of the cluster:

    # comment line
    CFName: cfname1
    VMName: vmname1
    vCenter_IP: ipaddress1
    vCenter_Port: port
    user: user
    passwd: passwd
    # comment line
    CFName: cfname2
    VMName: vmname2
    vCenter_IP: ipaddress2
    vCenter_Port: port2
    user: user
    passwd: passwd
    cfnameX    : Specify the CF node name.
    vmnameX    : Specify the virtual machine name that controls the guest OS described in CFName.
    ipaddressX : Specify the IP address of the VMware vCenter Server that manages the virtual machine.
                 Available IP addresses are IPv4 and IPv6 addresses.
                 IPv6 link local addresses are not available.
                 When specifying an IPv6 address, enclose it in brackets "[ ]".
                 (Example: [1080:2090:30a0:40b0:50c0:60d0:70e0:80f0])
    portX      : Specify the port number of VMware vCenter Server.
                 When using the default value (443), write only "vCenter_Port:" and leave the
                 value blank.
    user       : Specify the user of VMware vCenter Server created in "H.2.1.1 Installation and
                 Configuration of Related Software." When logging in with single sign-on (SSO),
                 specify user@SSO_domain_name.
    passwd     : The login password of the account specified by "user". Specify the password
                 encrypted in step 1.

    Note

    • Do not change the order of the items.

    • If the virtual machine name (VMName:) includes Japanese characters, write the machine name in UTF-8 encoding.

    • A one-byte space and a double-byte space are treated as different characters. Use a one-byte space when inserting a space in the file.

    • Only a line that starts with "#" is treated as a comment. When "#" appears in the middle of a line, it is treated as part of the setting value.

      In the following example, "vm1 # node1's virtual machine." is used as the virtual machine name.

      ...
      VMName: vm1 # node1's virtual machine.
      ...
    • The contents of SA_vwvmr.cfg must be the same on all the guest OSes. If not, the shutdown facility may not work correctly.

    Example

    • Log in with single sign-on

      When the IP address of VMware vCenter Server that manages all the virtual machines is 10.20.30.40, the port numbers are the default value, the user who connects to VMware vCenter Server is Administrator, SSO domain name is vsphere.local, and the password encrypted in step "1. Encrypting the password" is D0860AB04E1B8FA3:

      ##
      ## node1's information.
      ##
      CFName: node1
      VMName: vm1
      vCenter_IP: 10.20.30.40
      vCenter_Port:
      user: Administrator@vsphere.local
      passwd: D0860AB04E1B8FA3
      ##
      ## node2's information.
      ##
      CFName: node2
      VMName: vm2
      vCenter_IP: 10.20.30.40
      vCenter_Port:
      user: Administrator@vsphere.local
      passwd: D0860AB04E1B8FA3
    • Log in without single sign-on

      When the IP address of VMware vCenter Server that manages all the virtual machines is 10.20.30.40, the port numbers are the default value, the user who connects to VMware vCenter Server is root, and the password encrypted in step "1. Encrypting the password" is D0860AB04E1B8FA3:

      ##
      ## node1's information.
      ##
      CFName: node1
      VMName: vm1
      vCenter_IP: 10.20.30.40
      vCenter_Port:
      user: root
      passwd: D0860AB04E1B8FA3
      ##
      ## node2's information.
      ##
      CFName: node2
      VMName: vm2
      vCenter_IP: 10.20.30.40
      vCenter_Port:
      user: root
      passwd: D0860AB04E1B8FA3
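
    Because the entry order and comment rules above are easy to get wrong, a small check script can catch mistakes before the shutdown facility is restarted. The following is a minimal illustrative sketch (not a PRIMECLUSTER tool; the function name is hypothetical). It verifies that each entry carries the six keys in the documented order and that only lines starting with "#" are treated as comments:

    ```python
    # Illustrative check (not a PRIMECLUSTER command): validate the key order
    # of entries in an SA_vwvmr.cfg-style text.
    REQUIRED_ORDER = ["CFName", "VMName", "vCenter_IP", "vCenter_Port", "user", "passwd"]

    def validate_sa_vwvmr(text):
        """Return the list of CF node names, or raise ValueError on a bad entry."""
        keys, values = [], {}
        for line in text.splitlines():
            # Only a line that STARTS with "#" is a comment; "#" elsewhere is data.
            if not line.strip() or line.startswith("#"):
                continue
            key, _, value = line.partition(":")
            keys.append(key.strip())
            values.setdefault(key.strip(), []).append(value.strip())
        nodes = values.get("CFName", [])
        for i, node in enumerate(nodes):
            entry = keys[i * 6:(i + 1) * 6]
            if entry != REQUIRED_ORDER:
                raise ValueError(f"entry for {node}: keys {entry} not in documented order")
        return nodes
    ```

    Running the same check on every guest OS (node) also helps confirm that the file contents are identical across the cluster, as the Note above requires.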
  3. Setting up the shutdown daemon

    Create /etc/opt/SMAW/SMAWsf/rcsd.cfg with the following contents on all guest OSes (nodes) of the cluster:

    CFNameX,weight=weight,admIP=myadmIP:agent=SA_vwvmr,timeout=timeout
    CFNameX,weight=weight,admIP=myadmIP:agent=SA_vwvmr,timeout=timeout
    CFNameX        : CF node name of the cluster host. 
    weight         : Weight of the SF node. 
    myadmIP        : Specify the IP address of the administrative LAN for CFNameX. 
                     Available IP addresses are IPv4 and IPv6 addresses.
                     IPv6 link local addresses are not available.
                     When specifying the IPv6 address, enclose it in brackets "[ ]".
                     (Example: [1080:2090:30a0:40b0:50c0:60d0:70e0:80f0])
                     If you specify a host name, please make sure it is listed in /etc/hosts.
    timeout        : Specify the timeout duration (seconds) of the Shutdown Agent. 
                     Specify 45 for the value.
    

    Note

    The rcsd.cfg file must be the same on all guest OSes (nodes). Otherwise, operation errors might occur.

    Example

    Below is a setting example:

    node1,weight=1,admIP=10.0.0.1:agent=SA_vwvmr,timeout=45
    node2,weight=1,admIP=10.0.0.2:agent=SA_vwvmr,timeout=45
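
    Since the rcsd.cfg lines follow a fixed pattern, generating them from a single node list helps keep the file identical on all guest OSes (nodes). The following is a minimal sketch under that idea; the helper name is hypothetical and this is not a PRIMECLUSTER command:

    ```python
    # Illustrative helper (not a PRIMECLUSTER command): build rcsd.cfg lines
    # for the SA_vwvmr shutdown agent from (CF node name, admin-LAN IP) pairs.
    def rcsd_lines(nodes, agent="SA_vwvmr", timeout=45, weight=1):
        """An IPv6 admIP must already be enclosed in brackets, e.g. "[fd00::1]"."""
        return [
            f"{cfname},weight={weight},admIP={adm_ip}:agent={agent},timeout={timeout}"
            for cfname, adm_ip in nodes
        ]
    ```

    Distributing the generated file to all nodes avoids the mismatch the Note below warns about.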
  4. Starting the shutdown facility

    Check that the shutdown facility has started.

    # sdtool -s

    If the shutdown facility has already started, execute the following command to restart the shutdown facility.

    # sdtool -r

    If the shutdown facility is not started, execute the following command to start the shutdown facility.

    # sdtool -b
  5. Checking the status of the shutdown facility

    Check that the status of the shutdown facility is either "InitWorked" or "TestWorked." If the displayed status is "TestFailed" or "InitFailed," check the shutdown daemon settings for any mistakes.

    # sdtool -s
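
    The status check in step 5 can be scripted by scanning the "sdtool -s" output for the failed states. The sketch below assumes only that the CF node name is the first column of each status line; the exact column layout of "sdtool -s" output is an assumption here, so verify it against your version before relying on this:

    ```python
    # Illustrative scan (not a PRIMECLUSTER tool): report CF nodes whose
    # "sdtool -s" status line contains a failed state.
    def check_sdtool_output(output):
        """Return CF node names whose line shows TestFailed or InitFailed.
        Assumes the node name is the first whitespace-separated column."""
        bad = []
        for line in output.splitlines()[1:]:  # skip a possible header line
            fields = line.split()
            if fields and ("InitFailed" in fields or "TestFailed" in fields):
                bad.append(fields[0])
        return bad
    ```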

H.2.3.3 Setting Up the Shutdown Facility (when using I/O fencing function)

This section explains the method for setting up the SA_icmp shutdown agent as the shutdown facility.

Note

Be sure to perform the following operations on all guest OSes (nodes).

  1. Setting up the shutdown facility

    Specify the shutdown agent.

    Create /etc/opt/SMAW/SMAWsf/SA_icmp.cfg with the following contents on all guest OSes (nodes) of the cluster:

    TIME_OUT=value
    cfname:ip-address-of-node:NIC-name1,NIC-name2
    value              : Specify the interval (in seconds) for checking whether the node is
                         alive. The recommended value is 5 seconds.
    cfname             : Specify the name of the CF node.
    ip-address-of-node : Specify an IP address on one of the following networks, used to
                         check whether the cfname node is alive.
                         Checking via multiple networks is also possible; in that case,
                         add a line for each network to be used.
                         To determine an error reliably, we recommend checking over
                         multiple LAN paths.
                         However, if you give priority to automatic switchover, set only
                         the cluster interconnects as the LAN paths.
                         If only the cluster interconnects are set as the LAN paths,
                         automatic switchover remains possible even when communication
                         over the cluster interconnects fails but the node is still
                         reachable via another LAN (a case in which the destination node
                         would otherwise be judged alive).
                         - Cluster interconnect (IP address of CIP)
                         - Administrative LAN
                         - Public LAN
                         Available IP addresses are IPv4 and IPv6 addresses.
                         IPv6 link local addresses are not available. 
                         When specifying the IPv6 address, enclose it in brackets "[ ]".
                         (Example: [1080:2090:30a0:40b0:50c0:60d0:70e0:80f0])
                         Enter the IP address for all guest OSes (nodes) that configure the
                         cluster system.
    NIC-nameX          : Specify the network interface of the local guest OS (node) utilized 
                         for checking whether the node defined by ip-address-of-node is alive. 
                         If there is more than one, delimit them with commas (",").

    Note

    Registering network interfaces

    • For duplicating by GLS, define all redundant network interfaces. (Example: eth0,eth1)

    • If you are bonding NICs, define the bonding device behind the IP address. (Example: bond0)

    • For registering the cluster interconnect, define all network interfaces that are used on all paths of the cluster interconnect. (Example: eth2,eth3)

    • Do not use the takeover IP address (takeover virtual Interface).

    Example

    Below are setting examples for a two-node cluster of guest OSes on multiple ESXi hosts.

    • When cluster interconnects (eth2,eth3) are set

      TIME_OUT=5
      node1:192.168.1.1:eth2,eth3
      node2:192.168.1.2:eth2,eth3
    • When the public LAN (duplicated (eth0,eth1) by GLS) and the administrative LAN (eth4) are set

      TIME_OUT=5
      node1:10.20.30.100:eth0,eth1
      node1:10.20.40.200:eth4
      node2:10.20.30.101:eth0,eth1
      node2:10.20.40.201:eth4
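
    The number of check paths per node in SA_icmp.cfg feeds directly into the timeout calculation in the next step. The following minimal parser sketch (hypothetical, not a PRIMECLUSTER tool) extracts the TIME_OUT value and counts the lines (paths) per CF node:

    ```python
    # Illustrative parser (not a PRIMECLUSTER tool) for SA_icmp.cfg-style text:
    # "TIME_OUT=value" plus one "cfname:ip:nic1,nic2" line per check path.
    def paths_per_node(cfg_text):
        """Return (TIME_OUT value, {cfname: number of check paths})."""
        time_out, counts = None, {}
        for line in cfg_text.splitlines():
            line = line.strip()
            if not line:
                continue
            if line.startswith("TIME_OUT="):
                time_out = int(line.split("=", 1)[1])
                continue
            # Splitting on the first ":" keeps bracketed IPv6 addresses intact.
            cfname = line.split(":", 1)[0]
            counts[cfname] = counts.get(cfname, 0) + 1
        return time_out, counts
    ```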
  2. Setting up the shutdown daemon

    Create /etc/opt/SMAW/SMAWsf/rcsd.cfg with the following contents on all guest OSes (nodes) of the cluster:

    CFNameX,weight=weight,admIP=myadmIP:agent=SA_icmp,timeout=timeout
    CFNameX,weight=weight,admIP=myadmIP:agent=SA_icmp,timeout=timeout
    CFNameX        : CF node name of the cluster host. 
    weight         : Weight of the SF node. 
                     Set 1 because this value is not effective with the I/O fencing function.
    myadmIP        : Specify the IP address of the administrative LAN for CFNameX. 
                     Available IP addresses are IPv4 and IPv6 addresses.
                     IPv6 link local addresses are not available.
                     When specifying the IPv6 address, enclose it in brackets "[ ]".
                     (Example: [1080:2090:30a0:40b0:50c0:60d0:70e0:80f0])
                     If you specify a host name, please make sure it is listed in /etc/hosts.
    timeout        : Specify the timeout duration (seconds) of the Shutdown Agent. 
                     Specify the following value:
                     (TIME_OUT + 2) x number of paths used for checking the survival
                     of a node, or 20, whichever is larger.
                     TIME_OUT is the TIME_OUT value described in SA_icmp.cfg.
    
                         - When checking the survival of a node on one path
                           (one of the administrative LAN, public LAN, or cluster
                            interconnects)
                           (1) TIME_OUT is 18 or larger
                               TIME_OUT + 2
                           (2) TIME_OUT is less than 18
                               20
    
                         - When checking the survival of a node on two paths
                           (two of the administrative LAN, public LAN, or cluster
                            interconnects)
                           (1) TIME_OUT is 8 or larger
                               (TIME_OUT + 2) x 2
                           (2) TIME_OUT is less than 8
                               20
    
                         - When checking the survival of a node on three paths
                           (three paths among the administrative LAN, public LANs,
                            and cluster interconnects)
                           (1) TIME_OUT is 5 or larger
                               (TIME_OUT + 2) x 3
                           (2) TIME_OUT is less than 5
                               20

    Note

    The rcsd.cfg file must be the same on all guest OSes (nodes). Otherwise, operation errors might occur.

    Example

    Below is a setting example for a two-node configuration in which node survival is checked over both the administrative LAN and the public LAN, with a TIME_OUT value of 10 in SA_icmp.cfg.

    node1,weight=1,admIP=192.168.100.1:agent=SA_icmp,timeout=24 (*)
    node2,weight=1,admIP=192.168.100.2:agent=SA_icmp,timeout=24 (*)
    (*) timeout = (10 (TIME_OUT value) + 2) x 2 (administrative LAN and public LAN) = 24
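    The three timeout rules above reduce to a single formula: (TIME_OUT + 2) multiplied by the number of check paths, with a floor of 20. A minimal sketch of that calculation (the function name is hypothetical):

    ```python
    # Illustrative calculation of the rcsd.cfg timeout for SA_icmp:
    # (TIME_OUT + 2) x number of check paths, or 20, whichever is larger.
    def sa_icmp_timeout(time_out, num_paths):
        """time_out: TIME_OUT value from SA_icmp.cfg; num_paths: check paths per node."""
        return max((time_out + 2) * num_paths, 20)
    ```

    For the example above, sa_icmp_timeout(10, 2) gives 24, matching the value in rcsd.cfg.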

  3. Starting the shutdown facility

    Check that the shutdown facility has started.

    # sdtool -s

    If the shutdown facility has already started, execute the following command to restart the shutdown facility.

    # sdtool -r

    If the shutdown facility is not started, execute the following command to start the shutdown facility.

    # sdtool -b
  4. Checking the status of the shutdown facility

    Check that the status of the shutdown facility is either "InitWorked" or "TestWorked." If the displayed status is "TestFailed" or "InitFailed," check the shutdown daemon settings for any mistakes.

    # sdtool -s

H.2.3.4 Initial Setup of the Cluster Resource Management Facility

Refer to "5.1.3 Initial Setup of the Cluster Resource Management Facility" to set up the resource database managed by the cluster resource management facility (hereafter referred to as "CRM") on the guest OS.

H.2.3.5 Setting Up Fault Resource Identification and Operator Intervention Request

Refer to "5.2 Setting up Fault Resource Identification and Operator Intervention Request" to make the settings for identifying fault resources and for requesting operator intervention.