PRIMECLUSTER Installation and Administration Guide 4.5
FUJITSU Software

6.13.5 Maintaining File Systems Controlled by the Fsystem Resource

This section describes the procedure when maintaining file systems on a shared disk registered in the Fsystem resource.

Note

To mount a file system on a shared disk manually, mount it from only one of the nodes that make up the cluster.

If file systems on a shared disk are mounted from multiple cluster nodes at the same time, those file systems will be destroyed. Perform this operation with care.

  1. Stopping RMS on all cluster nodes

    Stop RMS on all cluster nodes.

    Example: stopping RMS on all cluster nodes with a command executed on any one node

    # /opt/SMAW/SMAWRrms/bin/hvshut -a
  2. Checking the mount state of a file system

    Use the df command to check that no file system on the shared disk is mounted, so that a file system is not mounted from multiple cluster nodes by mistake.

    Example: Executing the df command

    # /usr/sbin/df -k                                                  
    Filesystem            kbytes    used   avail capacity  Mounted on  
    /dev/dsk/c0t0d0s0    6718025 4839652 1811193    73%    /           
    /proc                      0       0       0     0%    /proc       
    mnttab                     0       0       0     0%    /etc/mnttab 
    fd                         0       0       0     0%    /dev/fd     
    swap                 2244776      16 2244760     1%    /var/run    
    swap                 2251760    7000 2244760     1%    /tmp        

    If a file system is already mounted, either a cluster application is in operation or the file system was mounted manually.

    In this case, stop the cluster application and RMS, or unmount the target file system with the umount command.


    Perform the following procedure on any one of the cluster nodes.
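    The mount check in step 2 can also be scripted. A minimal sketch, assuming df -k prints the mount point in the last column; the function name is_mounted and the mount point /disk1 are illustrative, not part of the product:

```shell
#!/bin/sh
# Return 0 if the given mount point appears in df output, 1 otherwise.
# Assumption: df -k prints the mount point in the last column.
is_mounted() {
    df -k | awk -v mp="$1" 'NR > 1 && $NF == mp { found = 1 } END { exit !found }'
}

# Example: refuse to continue while /disk1 is still mounted.
if is_mounted /disk1; then
    echo "/disk1 is still mounted - stop the application or unmount it first"
else
    echo "/disk1 is not mounted"
fi
```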

  3. Starting a GDS volume (only if necessary)

    If the file system or files to be maintained are on a volume managed by GDS, start the GDS volume on any one of the cluster nodes.

    Example: when starting the volume volume0001 in the disk class class with a command

    # /usr/sbin/sdxvolume -N -c class -v volume0001
  4. Mounting and maintaining a file system

    If using ZFS

    To refer to files on a ZFS file system, import the storage pool and mount the file system.

    1. Checking the ZFS storage pool controlled by a cluster

      Check the /etc/vfstab.pcl file to find the pool name of the ZFS file system.

      Example: when checking the contents of /etc/vfstab.pcl with the cat command

      # /usr/bin/cat /etc/vfstab.pcl                          
      # bdev cdev mountpoint fstype runlevel auto mount flags 
      #RMS#app app /app zfs - - -                             
      #RMS#app/mp1 app/mp1 /appdata1 zfs - - -                
      #RMS#app/mp2 app/mp2 /appdata2 zfs - - -                
    2. Importing the ZFS storage pool

      Import the ZFS storage pool with the zpool command.

      Example: when importing the ZFS storage pool "app" configured on a volume in the disk class class

      # /usr/sbin/zpool import -d /dev/sfdsk/class/dsk -R "/" app

      Note

      If the imported storage pool contains ZFS file systems that are not legacy file systems (their mountpoint property is not set to "legacy"), those file systems are mounted automatically by this import.

    3. Mounting the ZFS file system defined as the legacy file system (only if necessary)

      Mount any ZFS file system whose mountpoint property is set to "legacy".

      Example: when mounting the file system app/mp1 on /appdata1

      # /usr/bin/mount -F zfs app/mp1 /appdata1
    4. Maintenance of a file (only if necessary)

      If files used by an application are on the shared disk, refer to and update them at this point.

    5. Unmounting the ZFS file system defined as the legacy file system (only if necessary)

      Unmount the ZFS file system that was mounted in step 4-3.

      Example: unmounting /appdata1

      # /usr/bin/umount /appdata1
    6. Exporting the ZFS storage pool

      Export the ZFS storage pool that was imported in step 4-2.

      Example: when exporting the ZFS storage pool "app"

      # /usr/sbin/zpool export app
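    The pool-name lookup in step 4-1 can be scripted as well. A minimal sketch, assuming the /etc/vfstab.pcl format shown above (RMS-controlled lines carry the "#RMS#" prefix and the file system type is the fourth field); the sample file path is illustrative:

```shell
#!/bin/sh
# Print the unique ZFS pool names found in a vfstab.pcl file.
# Assumption: RMS-controlled lines start with "#RMS#" and field 4 is the fstype.
list_zfs_pools() {
    awk '/^#RMS#/ {
        sub(/^#RMS#/, "")
        if ($4 == "zfs") { split($1, a, "/"); pools[a[1]] = 1 }
    } END { for (p in pools) print p }' "$1"
}

# Demonstration with a sample file copied from the example above.
cat > /tmp/vfstab.pcl.sample <<'EOF'
# bdev cdev mountpoint fstype runlevel auto mount flags
#RMS#app app /app zfs - - -
#RMS#app/mp1 app/mp1 /appdata1 zfs - - -
#RMS#app/mp2 app/mp2 /appdata2 zfs - - -
EOF
list_zfs_pools /tmp/vfstab.pcl.sample    # prints "app"
```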
    If using UFS

    To refer to files on a UFS file system, mount the file system.

    1. Checking and repairing the UFS file system (only if necessary)

      If the file system may be damaged, check and repair it with the fsck command. If the target UFS file system is on a volume managed by GDS, execute the fsck command on the node where the GDS volume was started in step 3.

      Example: when repairing the UFS file system on the GDS volume /dev/sfdsk/class/rdsk/volume0001

      # /usr/sbin/fsck -F ufs /dev/sfdsk/class/rdsk/volume0001
    2. Mounting the file system (only if necessary)

      Mount the UFS file system with the mount command.

      The device names of file systems controlled by the Fsystem resource are described in the /etc/vfstab.pcl file. Refer to this file when mounting the file system.

      Example: when checking the contents of the /etc/vfstab.pcl file with the cat command

      # /usr/bin/cat /etc/vfstab.pcl 
      #RMS#/dev/sfdsk/class0001/dsk/volume0001 /dev/sfdsk/class0001/rdsk/volume0001 /disk1 ufs - no -

      Example: when mounting the file system of the mount point /disk1 controlled by the Fsystem resource

      # /usr/bin/mount -F ufs /dev/sfdsk/class0001/dsk/volume0001 /disk1
    3. Maintaining files (only if necessary)

      If files used by an operational application exist on a shared disk, refer to and update the files at this point.

    4. Unmounting the file system

      If you mounted the file system in step 4-2, unmount it with the umount command.

      Example: when unmounting the file system mounted on /disk1

      # /usr/bin/umount /disk1
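    Steps 4-1 through 4-4 of the UFS case can be strung together in one script. A minimal sketch; the class, volume, and mount-point names are the example values from the text above, and the RUN=echo guard keeps it a dry run:

```shell
#!/bin/sh
# Dry-run sketch of the UFS maintenance steps 4-1 to 4-4.
# RUN defaults to "echo", which only prints each command;
# set RUN to empty (RUN=) to actually execute them.
RUN=${RUN-echo}
CLASS=class0001          # example disk class from the text above
VOL=volume0001           # example GDS volume
MP=/disk1                # example mount point

$RUN /usr/sbin/fsck -F ufs "/dev/sfdsk/$CLASS/rdsk/$VOL"       # 4-1: check/repair
$RUN /usr/bin/mount -F ufs "/dev/sfdsk/$CLASS/dsk/$VOL" "$MP"  # 4-2: mount
# ... refer to and update files under $MP here (4-3) ...
$RUN /usr/bin/umount "$MP"                                     # 4-4: unmount
```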
  5. Stopping the GDS volume (if step 3 was performed)

    Stop the GDS volume started in step 3.

    Example: when stopping the volume volume0001 of the disk class class with a command

    # /usr/sbin/sdxvolume -F -c class -v volume0001
  6. Starting RMS on all nodes

    Start RMS on all nodes.

    Example: starting RMS on all cluster nodes with a command executed on any one node

    # /opt/SMAW/SMAWRrms/bin/hvcm -a