FUJITSU Software PRIMECLUSTER Installation and Administration Guide 4.5

10.7 Adding a File System to the Shared Disk by Dynamic Changing Configuration

This section describes how to add Fsystem resources without stopping jobs.

Note

In the dynamic changing configuration, RMS is stopped while the cluster application remains running.

While RMS is stopped, a cluster application is not failed over if an error occurs in it. To minimize the time that RMS is stopped, check the following operation procedure carefully beforehand and sort out the steps that are actually necessary.

Moreover, when using middleware that reports an error while RMS is stopped, disable the failover report function or take other action as necessary.

Operation flow

Operation Procedure:

  1. Check the Cmdline Resource Names and the Online/Offline Scripts.

    When a Cmdline resource is included in the cluster application, check its resource name with the "hvdisp -T gResource" command.

    If a Cmdline resource name starts with "RunScriptsAlways", the NULLDETECTOR flag is set for that resource.

    Example

    If the hvdisp command returns the following result, it can be judged that the NULLDETECTOR flag is set for the Cmdline resources RunScriptsAlways001_Cmd_APP1 and RunScriptsAlways001_Cmd_APP2.

    # hvdisp -T gResource
    Local System:  node01RMS
    Configuration: /opt/SMAW/SMAWRrms/build/config.us

    Resource                     Type    HostName            State        StateDetails
    -----------------------------------------------------------------------------
    RunScriptsAlways001_Cmd_APP2 gRes                        Online
    ManageProgram000_Cmd_APP2    gRes                        Online
    RunScriptsAlways001_Cmd_APP1 gRes                        Offline
    ManageProgram000_Cmd_APP1    gRes                        Offline

    It is necessary to add the processing described in "6.11.2.1.4 Notes When Setting the NULLDETECTOR Flag" to the Online/Offline scripts of the Cmdline resource when the NULLDETECTOR flag is enabled.

    If the necessary processing is not yet included, modify the scripts after stopping RMS, as described in the following procedure.

  2. Check userApplication Operational Node.

    Use the hvdisp -T userApplication command to check on which node in the cluster each standby userApplication operates (that is, which node is the operational node).

    Example

    If the hvdisp command returns the following result, the operational node of app1 is node02 and the operational node of app2 is node01.

    # hvdisp -T userApplication
    Local System:  node01RMS
    Configuration: /opt/SMAW/SMAWRrms/build/config.us
    
    Resource            Type    HostName            State        StateDetails
    -----------------------------------------------------------------------------
    app2                userApp                     Online
    app1                userApp                     Standby
    app1                userApp node02RMS           Online

    The operational node information of the cluster application is needed when mounting the file system manually on that node in the following procedure.

  3. Create File Systems Controlled by the Fsystem Resources.

    When the mount point controlled by the Fsystem resource is to be created on a new GDS volume, start the GDS volume on the operational node and then create the file system (a command sketch is given in the Example below).

    Information

    For details on starting the volume of GDS and creating file system, see "6.7.3.2 Setting Up Fsystem Resources."
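    Example

    The following is a sketch only, assuming the GDS class and volume names used in the Step 4 example (class0001 and volume0004); follow "6.7.3.2 Setting Up Fsystem Resources" for the actual procedure.

    # sdxvolume -N -c class0001 -v volume0004
    # /sbin/mkfs -t ext3 /dev/sfdsk/class0001/dsk/volume0004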

  4. Check and Mount the Newly Created File System.

    On the operational node (identified in Step 2) of the userApplication to which the Fsystem resources are to be added, mount the newly created file system and check that it is mounted correctly.

    Example

    This example assumes that the following line will be added to the /etc/fstab.pcl file in Step 8 below.

    #RMS#/dev/sfdsk/class0001/dsk/volume0004 /mnt/swdsk4 ext3 noauto 0 0

    Execute the command below on the operational node to mount the file system.

    # /sbin/mount -t ext3 /dev/sfdsk/class0001/dsk/volume0004 /mnt/swdsk4

    After mounting, execute the command below to check that the mount point is displayed (that is, that the file system is mounted).

    # /sbin/mount | /bin/grep "/mnt/swdsk4 "
    /dev/sfdsk/class0001/dsk/volume0004 on /mnt/swdsk4 type ext3 (rw)

    Additionally, check that the file system is not mounted on the standby node.
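    For example, the same check on the standby node should return no output for the mount point used above:

    # /sbin/mount | /bin/grep "/mnt/swdsk4 "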

  5. Stop RMS.

    Execute the hvshut -L command on all the nodes to stop RMS while the cluster application is still operating.

    Enter 'yes' in response to the warning message when the hvshut -L command is executed.

    # hvshut -L
                            WARNING
                            -------
    The '-L' option of the hvshut command will shut down the RMS
    software without bringing down any of the applications.
    In this situation, it would be possible to bring up the same
    application on another node in the cluster which *may* cause
    data corruption.
    
    Do you wish to proceed ? (yes = shut down RMS / no = leave RMS running).
    yes
  6. Check that RMS has stopped.

    Execute the hvdisp -a command on all the nodes. If RMS has stopped, the command outputs the message "hvdisp: RMS is not running" to the standard error.

    # hvdisp -a
    hvdisp: RMS is not running
  7. Modify the Online/Offline scripts of the Cmdline resources for which the NULLDETECTOR flag is enabled, if necessary.

    If the check in Step 1 showed that the Online/Offline scripts of the Cmdline resources with the NULLDETECTOR flag enabled do not yet include the necessary processing, modify the scripts as described in "6.11.2.1.4 Notes When Setting the NULLDETECTOR Flag"; an illustrative sketch follows.
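    Example

    The following sketch is illustrative only; the actual processing to add is described in "6.11.2.1.4 Notes When Setting the NULLDETECTOR Flag". The process name "sampleapp" is a placeholder. It shows the kind of check an Online script typically needs so that the application is not started again when RMS is restarted while the application is already running.

    #!/bin/sh
    # Hypothetical Online script fragment (sketch): skip the startup
    # processing if the placeholder application "sampleapp" is already running.
    if /usr/bin/pgrep -x sampleapp > /dev/null 2>&1; then
        exit 0
    fi
    # ... original startup processing follows here ...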

  8. Add Fsystem Resources to the Cluster System.

    Perform the following procedures that are described in "6.7.3.2 Setting Up Fsystem Resources."

    1. Defining mount point

    4. Tuning of file system

    6. Registering cluster application of Fsystem resources

    When the mount point controlled by the Fsystem resource is created on a new GDS class, also perform the procedures described in "6.7.3.3 Preliminary Setup for Gds Resources" and "6.7.3.4 Setting Up Gds Resources."

  9. Perform Generate and Activate.

    For details on performing Generate and Activate, see the procedure in "6.7.4 Generate and Activate."

  10. Start RMS on all the nodes.

    Execute the hvcm -a command on any one node to start RMS on all the nodes.

    # hvcm -a
  11. Check the state of userApplications.

    Execute the hvdisp -a command on all the nodes, and check that each userApplication is Online on its operational node and Offline or Standby on its standby node, as identified in Step 2.
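    Example

    With the configuration of the Step 2 example, filtering the hvdisp -a output for userApplication lines on node01 would show states such as the following (app1, app2, and the node names are those of that example):

    # hvdisp -a | /bin/grep userApp
    app2                userApp                     Online
    app1                userApp                     Standby
    app1                userApp node02RMS           Online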

    Note

    If the file system was not mounted correctly in Step 4, userApplication will be in the Inconsistent state on some or all of the nodes after RMS is started in Step 10. In this case, perform the following procedure (an example command sequence follows the list).

    1. Execute the hvutil -f command on the standby node so that the state of userApplication on the standby node becomes Offline.

    2. To transit userApplication on the standby node to Standby, execute the hvutil -s command on the standby node.

    3. Execute the hvswitch command on the operational node so that the state of userApplication on the operational node becomes Online.
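    For example, with app1 of the Step 2 example as the userApplication name (the actual name depends on the configuration), the commands are of the following form.

    On the standby node:

    # hvutil -f app1
    # hvutil -s app1

    On the operational node:

    # hvswitch app1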