FUJITSU Software PRIMECLUSTER Installation and Administration Guide 4.5

10.4.1 Adding Fsystem Resource Dynamically

This section describes how to add Fsystem resources without stopping jobs.

Note

In this dynamic configuration change, RMS is stopped while the cluster applications remain running.

While RMS is stopped, a cluster application is not failed over even if an error occurs in it. To keep the time that RMS is stopped as short as possible, check the following operation procedure carefully in advance and sort out the steps that are necessary for your environment.

In addition, when using middleware that reports an error while RMS is stopped, disable its failover report function or take other appropriate action as necessary.

Operation flow

Note

This procedure is necessary when performing the following operations:

  • Adding a new ZFS storage pool

  • Adding new legacy ZFS file systems to an existing ZFS storage pool

  • Adding new UFS file systems

When adding non-legacy ZFS file systems to an existing ZFS storage pool, the procedure explained in this section is unnecessary.

In this case, by setting the cluster application to the maintenance mode in advance, the ZFS file system can be added dynamically while RMS is in operation.

For how to use the maintenance mode, refer to "7.4 Using maintenance mode" in "PRIMECLUSTER Reliant Monitor Services (RMS) with Wizard Tools Configuration and Administration Guide."
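
For example, maintenance mode can be started and stopped with the hvutil command; the application name app2 below is only an illustration taken from the examples later in this section (see the hvutil(1M) manual page for details).

# hvutil -m on app2
# hvutil -m off app2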

Operation Procedure:

  1. Check Cmdline Resource Names and Online/Offline Scripts.

    When the cluster application includes Cmdline resources, check their resource names with the "hvdisp -T gResource" command.

    If any of the Cmdline resource names starts with "RunScriptsAlways", the NULLDETECTOR flag is set for that resource.

    Example

    When the execution result of the hvdisp command is as follows, it can be judged that the NULLDETECTOR flag is set for the Cmdline resources RunScriptsAlways001_Cmd_APP1 and RunScriptsAlways001_Cmd_APP2.

    # hvdisp -T gResource
    Local System:  node01RMS
    Configuration: /opt/SMAW/SMAWRrms/build/config.us

    Resource                     Type    HostName            State        StateDetails
    -----------------------------------------------------------------------------
    RunScriptsAlways001_Cmd_APP2 gRes                        Online
    ManageProgram000_Cmd_APP2    gRes                        Online
    RunScriptsAlways001_Cmd_APP1 gRes                        Offline
    ManageProgram000_Cmd_APP1    gRes                        Offline

    When the NULLDETECTOR flag is enabled, the processing described in "Notes When Setting the NULLDETECTOR Flag" must be added to the Online/Offline scripts of the Cmdline resource.

    If the necessary processing is not included, modify the scripts after stopping RMS according to the following procedure.

  2. Check userApplication Operational Node.

    Using the hvdisp -T userApplication command, check on which node in the cluster each standby userApplication is operating (that is, which node is the operational node).

    Example

    When the execution result of the hvdisp command is as follows, the operational node of app1 is node02 and the operational node of app2 is node01.

    # hvdisp -T userApplication
    Local System:  node01RMS
    Configuration: /opt/SMAW/SMAWRrms/build/config.us
    
    Resource            Type    HostName            State        StateDetails
    -----------------------------------------------------------------------------
    app2                userApp                     Online
    app1                userApp                     Standby
    app1                userApp node02RMS           Online

    The information about the operational node of each cluster application is needed when determining, in the following procedure, the node on which to mount the file system manually.

  3. Create File Systems Controlled by the Fsystem Resources.

    When the mount point controlled by the Fsystem resource is created on a new GDS volume, create the file system after starting the GDS volume on the operational node.
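
    For example, a UFS file system could be created on a new GDS volume as follows; the first command starts the volume and the second creates the file system. The class and volume names (class0002, volume0001) are only illustrations taken from the UFS example later in this section.

    # sdxvolume -N -c class0002 -v volume0001
    # /usr/sbin/newfs /dev/sfdsk/class0002/rdsk/volume0001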

    Information

    For details on the procedure for using ZFS as Fsystem resource, see "If using ZFS."

  4. Check and Mount Newly Created File System.

    On the operational node of the userApplication to which the Fsystem resources are added (checked in Step 2), mount the newly created file system and check that it is mounted correctly.

    Example

    If using ZFS

    Below is an example of adding the ZFS storage pool app2 and mounting the legacy ZFS file system on /appdata.

    Execute the following command on the operational node to import the ZFS storage pool (in this example, the ZFS storage pool app2 is created on a volume of the GDS disk class class2).

    # /usr/sbin/zpool import -d /dev/sfdsk/class2/dsk -R "/" app2

    Execute the following command on the operational node to mount the legacy ZFS file system. (If a non-legacy ZFS file system is set in the ZFS storage pool, it is mounted automatically when the pool is imported, and the following mount operation is unnecessary.)

    # /usr/sbin/mount -F zfs app2/mp /appdata

    After mounting, execute the following command to check that the ZFS storage pool and the mount point are displayed (the file system is mounted).

    # /usr/sbin/zfs list -r app2
    NAME         USED  AVAIL  REFER  MOUNTPOINT
    app2         148K   976M    31K  none
    app2/mp       31K   976M    31K  legacy
    # /usr/bin/df -k | /bin/grep "app2/mp "
    app2/mp               999424          31      999276     1%    /appdata

    Additionally, check that the file system is not mounted on the standby node.
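
    For example, the following command executed on the standby node should produce no output for the mount point /appdata used above.

    # /usr/bin/df -k | /bin/grep "/appdata"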

    If using UFS

    Below is an example when mounting the UFS file systems on the GDS volume to /disk2.

    Execute the following command on the operational node to mount the UFS file systems.

    # /usr/sbin/mount -F ufs /dev/sfdsk/class0002/dsk/volume0001 /disk2

    After mounting, execute the following command to check that the mount point is displayed (the file system is mounted).

    # /usr/bin/df -k | /bin/grep "/disk2"
    /dev/sfdsk/class0002/dsk/volume0001     999424    31     999276     1%    /disk2

    Additionally, check that the file system is not mounted on the standby node.
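
    For example, the following command executed on the standby node should produce no output.

    # /usr/bin/df -k | /bin/grep "/disk2"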

  5. Stop RMS.

    Execute the hvshut -L command on all nodes to stop RMS while the cluster applications remain running.

    Enter 'yes' in response to the warning message when the hvshut -L command is executed.

    # hvshut -L
                            WARNING
                            -------
    The '-L' option of the hvshut command will shut down the RMS
    software without bringing down any of the applications.
    In this situation, it would be possible to bring up the same
    application on another node in the cluster which *may* cause
    data corruption.
    
    Do you wish to proceed ? (yes = shut down RMS / no = leave RMS running).
    yes

    See

    For details on the hvshut command, see the manual page of the "hvshut(1M)" command.

  6. Check that RMS Has Stopped.

    Execute the hvdisp -a command on all nodes. If RMS has stopped, the message "hvdisp: RMS is not running" is output to the standard error output.

    # hvdisp -a
    hvdisp: RMS is not running

  7. Modify the Online/Offline Scripts of the Cmdline Resources When the NULLDETECTOR Flag Is Enabled, if Necessary.

    If the check in Step 1 shows that the Online/Offline scripts of the Cmdline resources with the NULLDETECTOR flag enabled need to be corrected, see "6.11.2.1.4 Notes When Setting the NULLDETECTOR Flag" and modify the scripts.

  8. Add Fsystem Resources to the Cluster System.

    Perform the procedures described in "6.7.3.2 Setting Up Fsystem Resources".

    Example

    • When adding the ZFS storage pool: app2 and also mounting the legacy ZFS file systems to /appdata, write /etc/vfstab.pcl as follows.

      #RMS#app2 app2 /app2 zfs - - -
      #RMS#app2/mp app2/mp /appdata zfs - - -
    • When adding the UFS file systems, write the /etc/vfstab.pcl file as follows.

      #RMS#/dev/sfdsk/class0002/dsk/volume0001 /dev/sfdsk/class0002/rdsk/volume0001 /disk2 ufs - no -

    When the mount point controlled by the Fsystem resource is created on a new GDS class, see "Creating Gds Resources."

  9. Add Fsystem Resources to the Cluster Application.

    To add the Fsystem resources registered in Step 8 to an existing cluster application, see "10.3.1 Changing the Cluster Application Configuration" to delete the target cluster application, and then create it again.

    When creating a new cluster application, see "Creating Cluster Applications."

  10. Perform Generate and Activate of RMS Configuration.

    After the cluster application is registered to the RMS Configuration in Step 9, a message asking whether to distribute the RMS Configuration is displayed when it is judged that the configuration can be distributed.

    If there are neither other resources to be added nor cluster applications to be changed, click Yes.

  11. Start RMS on All Nodes.

    Execute the hvcm -a command on any one node to start RMS on all nodes.

    # hvcm -a

  12. Check the State of userApplications.

    Execute the hvdisp -a command on all nodes, and check that the state of the userApplication is Online on the operational node and Offline or Standby on the standby node, as checked in Step 2.
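
    For example, with the configuration used in Step 2, the userApplication lines of the hvdisp -a output on node01 would look similar to the following excerpt (the full output also lists SysNode and other resources).

    # hvdisp -a
    ...
    app2                userApp                     Online
    app1                userApp                     Standby
    app1                userApp node02RMS           Online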

    Note

    If the file system was not mounted correctly in Step 4, the userApplication will be in the Inconsistent state on one or both nodes after RMS is started in Step 11. In this case, perform the following procedure (example commands are shown after the list).

    1. Execute the hvutil -f command on the standby node so that the state of the userApplication on the standby node becomes Offline.

    2. To transit the userApplication on the standby node to Standby, execute the hvutil -s command on the standby node.

    3. Execute the hvswitch command on the operational node so that the state of the userApplication on the operational node becomes Online.
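
    Using the application name app1 from the examples above purely as an illustration, the commands are similar to the following.

    On the standby node:

    # hvutil -f app1
    # hvutil -s app1

    On the operational node:

    # hvswitch app1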