
8.1.1 Adding Hardware

This section describes how to add hardware.

8.1.1.1 Adding a shared disk device

The following describes how to add a shared disk device.

Figure 8.1 Procedure to add a shared disk device

Information

You must stop RMS while performing "5. Change a cluster application."

However, you do not need to stop RMS if all of the following conditions are met, because "5. Change a cluster application" is not necessary in that case:

  • The added shared disk device is registered with the existing class of GDS.

  • The added shared disk device is not used as an Fsystem resource.

Operation Procedure:

  1. Add a shared disk device.

    See "12.2 Maintenance Flow" and ask field engineers to add a shared disk device.

  2. Change the device names set in resources of the shared disk device.

    Update the device names set in the resources of the existing shared disk device to the current device names.

    Execute the following command. For filepath, specify an empty file by its absolute path (a short example follows the note below).

    # /etc/opt/FJSVcluster/bin/clautoconfig -f filepath

    Note

    When SDX_UDEV_USE=off is described in the GDS configuration file /etc/opt/FJSVsdx/sdx.cf, do not execute the clautoconfig command.
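
    For example, the command can be run as follows. The path /var/tmp/emptyfile is only an example; any empty file specified by absolute path will do:

    # touch /var/tmp/emptyfile
    # /etc/opt/FJSVcluster/bin/clautoconfig -f /var/tmp/emptyfile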

  3. Add resources of the shared disk device.

    Register resources corresponding to the added shared disk device to the resource database.

    See

    To register resources, see "5.1.3.2 Registering Hardware Devices."
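
    As a quick check after registration, the contents of the resource database can be displayed with the "clgettree" command. This is only a verification aid and assumes the command is available under /etc/opt/FJSVcluster/bin in your environment:

    # /etc/opt/FJSVcluster/bin/clgettree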

  4. Set up Gds resources.

    To use GDS, set up GDS and create Gds resources.

    If you register the added shared disk device with the existing class of GDS, you do not need to set Gds resources.

    See

    For information on how to set up GDS and create Gds resources, see "6.3 GDS Configuration Setup," "6.7.3.3 Preliminary Setup for Gds Resources," and "6.7.3.4 Setting Up Gds Resources."
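
    After the GDS setup, the registered classes, disks, and volumes can be listed with the "sdxinfo" command so that you can confirm the added disk belongs to the intended class. This is shown only as a verification aid:

    # sdxinfo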

  5. Change a cluster application.

    Change the cluster application to add the following resources related to the added shared disk device.

    • Fsystem resource

    • Gds resource

    See

    For information on how to change a cluster application, see "10.3 Changing a Cluster Application."

8.1.1.2 Adding a Network Interface Card Used for the Public LAN and the Administrative LAN

This section describes how to add a network interface card used for the public LAN and the administrative LAN.

Figure 8.2 Procedure to add a network interface card

Operation Procedure:

  1. Add a network interface card.

    See "12.2 Maintenance Flow" and ask field engineers to add a network interface card.

  2. Add resources of the network interface card.

    Register resources corresponding to the added network interface card to the resource database.

    See

    To register resources, see "5.1.3.2 Registering Hardware Devices."
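
    Before registering the resources, you may want to confirm that the OS recognizes the added interface, for example with a standard Linux command such as the following (the interface names in the output depend on your environment):

    # ip link show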

  3. Change a cluster application.

    Change the cluster application to add the following resources related to the added network interface card.

    • Takeover network resource

    • Gls resource

    See

    For information on how to change a cluster application, see "10.3 Changing a Cluster Application."

8.1.1.3 Adding Hardware by DR (Dynamic Reconfiguration)

This section explains the procedure for adding a system board by DR during PRIMECLUSTER system operation.

If a system board is added by DR, this might affect the PRIMECLUSTER monitoring facility, resulting in node elimination.

If DR needs to be used, stop the cluster monitoring facility beforehand with the following procedure:

  1. Execute the "hvshut" command on each node to stop PRIMECLUSTER RMS as follows. If you answer "yes," only RMS stops while the cluster applications remain running.

    # hvshut -L
                                WARNING
                                -------
    The '-L' option of the hvshut command will shut down the RMS
    software without bringing down any of the applications.
    In this situation, it would be possible to bring up the same
    application on another node in the cluster which *may* cause
    data corruption.
    
    Do you wish to proceed ? (yes = shut down RMS / no = leave RMS running).
    yes
    
    NOTICE: User has been warned of 'hvshut -L' and has elected to proceed.
    

    Add the following line to the end of the "/opt/SMAW/SMAWRrms/bin/hvenv.local" file on each node.

    export HV_RCSTART=0

    It is necessary to perform the procedure above so that RMS will not automatically start immediately after OS startup.
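
    For example, the line can be appended and verified with simple shell redirection (any editor works just as well):

    # echo 'export HV_RCSTART=0' >> /opt/SMAW/SMAWRrms/bin/hvenv.local
    # tail -1 /opt/SMAW/SMAWRrms/bin/hvenv.local
    export HV_RCSTART=0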

  2. Execute the "sdtool" command on each node to stop PRIMECLUSTER SF as follows.

    # sdtool -e
    LOG3.013806902801080028   11   6    30   4.5A00      SMAWsf           : RCSD returned a 
    successful exit code for this command
  3. Perform the following operation on each node to change the timeout value of PRIMECLUSTER CF:

    • Add the following line to the "/etc/default/cluster.config" file.

      CLUSTER_TIMEOUT "600"
    • Execute the following command.

      # cfset -r
    • Check that the new timeout value is in effect.

      # cfset -g CLUSTER_TIMEOUT
      From cfset configuration in CF module:
      Value for key: CLUSTER_TIMEOUT --->600
      #
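
    As a reference, the whole of this step can also be performed from the shell as follows, assuming CLUSTER_TIMEOUT is not already defined in the file (if it is, edit the existing line instead of appending):

    # echo 'CLUSTER_TIMEOUT "600"' >> /etc/default/cluster.config
    # cfset -r
    # cfset -g CLUSTER_TIMEOUT
    From cfset configuration in CF module:
    Value for key: CLUSTER_TIMEOUT --->600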
  4. Use DR.

    See

    For DR operation, refer to the related hardware manual.

  5. Perform the following operation on each node to return the timeout value of PRIMECLUSTER CF to the default value:

    • Change the value of CLUSTER_TIMEOUT, which was added to the "/etc/default/cluster.config" file earlier, back to 10.

      Before change

      CLUSTER_TIMEOUT "600"

      After change

      CLUSTER_TIMEOUT "10"
    • Execute the following command.

      # cfset -r
    • Check that the new timeout value is in effect.

      # cfset -g CLUSTER_TIMEOUT
      From cfset configuration in CF module:
      Value for key: CLUSTER_TIMEOUT --->10
      #
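
    If you prefer to edit the file non-interactively, the value can be switched back with sed, for example (this assumes the line is exactly as it was added in step 3):

    # sed -i 's/^CLUSTER_TIMEOUT "600"$/CLUSTER_TIMEOUT "10"/' /etc/default/cluster.config
    # cfset -r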
  6. Execute the "sdtool" command on each node to start PRIMECLUSTER SF as follows.

    # sdtool -b
  7. Check that PRIMECLUSTER SF is running. (The following shows an output example for a two-node configuration.)

    # sdtool -s
    Cluster Host  Agent       SA State  Shut State  Test State  Init State
    ------------  -----       --------  ----------  ----------  ----------
    node0         SA_mmbp.so  Idle      Unknown     TestWorked  InitWorked
    node0         SA_mmbr.so  Idle      Unknown     TestWorked  InitWorked
    node1         SA_mmbp.so  Idle      Unknown     TestWorked  InitWorked
    node1         SA_mmbr.so  Idle      Unknown     TestWorked  InitWorked
  8. Execute the "hvcm" command on each node to start PRIMECLUSTER RMS.

    # hvcm 
    Starting Reliant Monitor Services now
  9. Make sure that RMS is running on all the nodes by checking that each icon indicating the node state is green (Online) in the RMS main window of Cluster Admin.

    Finally, remove the following line from "/opt/SMAW/SMAWRrms/bin/hvenv.local" file on each node.

    export HV_RCSTART=0
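
    The line can be removed non-interactively with sed, for example, and if you do not use Cluster Admin the RMS state can also be checked from the command line with "hvdisp" (both are shown here only as alternatives to manual editing and the GUI):

    # sed -i '/^export HV_RCSTART=0$/d' /opt/SMAW/SMAWRrms/bin/hvenv.local
    # hvdisp -a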

Note

  • If you plan to use DR, be sure to verify the cluster system with the above steps during cluster configuration.

  • If a node failure (such as a node panic or reset) or a hang-up occurs due to a hardware failure or the like during steps 1 through 7, you need to follow the procedure below to start, on a standby node, the cluster application that was running on the node where DR is used.

    1. If a hang-up occurs, stop the failed node forcibly, and then check that the node is stopped.

    2. Mark the node DOWN by executing the "cftool -k" command on any of the nodes where no failure has occurred, specifying the node number and CF node name of the failed node. However, if the state of the failed node is not LEFTCLUSTER, wait until the node becomes LEFTCLUSTER, and then execute the "cftool -k" command.

      # cftool -n
      Node  Number State         Os       Cpu
      node0 1       UP           Linux    EM64T
      node1 2       LEFTCLUSTER  Linux    EM64T
      # cftool -k
      This option will declare a node down. Declaring an operational
      node down can result in catastrophic consequences, including
      loss of data in the worst case.
      If you do not wish to declare a node down, quit this program now.
      
      Enter node number: 2
      Enter name for node #2: node1
      cftool(down): declaring node #2 (node1) down
      cftool(down): node node1 is down
      # cftool -n
      Node  Number State        Os        Cpu
      node0 1       UP          Linux     EM64T
      node1 2       DOWN        Linux     EM64T 
      #
    3. Perform steps 5 through 9 on all the nodes where no failure occurred, and then start RMS. If the cluster application is in an active standby configuration, execute the "hvswitch -f" command to force the cluster application to go Online. For details on the "hvswitch" command, see the description of the -f option in the online manual page for the command.

      # hvswitch -f userApplication
      The use of the -f (force) flag could cause your data to be corrupted and could cause your node to be killed. Do not continue if the result 
      of this forced command is not clear.
      The use of force flag of hvswitch overrides the RMS internal security mechanism. In particular RMS does no longer prevent resources,
      which have been marked as "ClusterExclusive", from coming Online on more than one host in the cluster. It is recommended to double
      check the state of all affected resources before continuing.
      IMPORTANT: This command may kill nodes on which RMS is not running in order to reduce the risk of data corruption!
      Ensure that RMS is running on all other nodes. Or shut down OS of the node on which RMS is not running.
      Do you wish to proceed ? (default: no) [yes, no]:yes
      #
    4. After restoring the failed node, perform steps 5 through 9 on the appropriate node to start RMS.