This section explains the monitoring targets, the supported configurations, and the setup procedure required before registering Fsystem resources when using ZFS with PRIMECLUSTER.
Monitoring Facility
PRIMECLUSTER provides the following monitoring functions for a ZFS file system configured on a ZFS storage pool.
Monitoring of the ZFS storage pool status (the status displayed by the zpool list command)
Monitoring of the mount status of the ZFS file system created on the ZFS storage pool
Monitoring of the NFS share status for the ZFS file system created on the ZFS storage pool
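As a reference, these statuses can also be checked manually with standard Solaris commands (the storage pool name "app" below is only an example): the zpool list command shows the storage pool status, the zfs mount command shows the mount status of the ZFS file systems, and the share command shows the NFS share status.
# zpool list app
# zfs mount
# share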
Supported Configurations
The ZFS configurations supported with PRIMECLUSTER are as follows.
ZFS storage pool device
GDS physical special files (example: /dev/sfdsk/class/dsk/volume0001) only
ZFS file system type
Both non-legacy file systems (*1) and legacy file systems (*2) are supported.
Both types can also coexist on one ZFS storage pool.
*1: This is the default file system for ZFS. It is mounted or unmounted when the ZFS storage pool is imported or exported.
*2: This is a file system whose mountpoint property is set to "legacy." As with UFS file systems, it is managed using the mount/umount commands and /etc/vfstab.pcl.
Highest level ZFS file system
Make the highest level file system (the file system automatically created during the creation of the ZFS storage pool) a non-legacy file system.
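For example, whether the highest level file system is a non-legacy file system can be confirmed by checking that its mountpoint property is not set to "legacy" (the pool name "app" is only an example):
# zfs get mountpoint app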
Dataset
There are no restrictions on the type of dataset that can be created on the ZFS storage pool, but only file systems can be used for monitoring.
Notes on Using Legacy File System
Because using legacy file systems in PRIMECLUSTER has the following disadvantage, the non-legacy file system is recommended.
Because mounting and unmounting of legacy file systems are not performed as part of ZFS storage pool control, a resource is created and mounting and unmounting are performed for each file system, as with UFS file systems. As a result, the Online/Offline processing takes longer for legacy file systems than for non-legacy file systems.
Note
Create the ZFS storage pool and the legacy ZFS file systems created on it as a single Fsystem resource.
A directory with the same name as the ZFS storage pool cannot be used as the mountpoint of a legacy ZFS file system (for example, setting the mountpoint property of the ZFS storage pool "app1" to "legacy" and configuring /etc/vfstab.pcl to mount it on "/app1").
Notes on Using Multiple File Systems in Combination
If you want to mount a legacy ZFS file system or a UFS file system on a directory in a ZFS file system controlled by the Fsystem resource, use a legacy mountpoint.
Specifically, create a new dataset on the ZFS storage pool and mount it as a legacy mountpoint, and then mount the legacy ZFS file system or UFS file system on a directory under that legacy mountpoint.
Example
The procedure for mounting the UFS file system on the ZFS storage pool "app1" is as follows:
Create the dataset "app1/zfsmnt" as the legacy ZFS file system.
Example of creating the dataset as the legacy ZFS file system
# zfs create app1/zfsmnt
# zfs set mountpoint=legacy app1/zfsmnt
In order to mount the dataset "app1/zfsmnt" on "/zfsmnt" and mount the UFS file system on "/zfsmnt/ufsmnt", edit the "/etc/vfstab.pcl" file and add the mount information to it.
Example of how to describe "/etc/vfstab.pcl"
#RMS#app1/zfsmnt app1/zfsmnt /zfsmnt zfs - - -
#RMS#/dev/sfdsk/class0001/dsk/volume0001 /dev/sfdsk/class0001/rdsk/volume0001 /zfsmnt/ufsmnt ufs - no -
An Fsystem resource cannot be created with the following configurations (combinations):
A configuration that mounts a dataset of another storage pool or a UFS file system on a non-legacy ZFS file system controlled by the Fsystem resource.
Example
A UFS file system controlled by the Fsystem resource cannot be mounted on the "/app/data" directory of a non-legacy ZFS file system imported by the Fsystem resource.
A configuration that imports the ZFS storage pool to the mountpoint of a legacy ZFS file system controlled as an Fsystem resource.
A configuration that imports the ZFS storage pool to the mountpoint of a non-ZFS file system controlled as an Fsystem resource.
Example
The mountpoint /mnt/data of the ZFS storage pool cannot be created under the UFS mountpoint /mnt controlled as an Fsystem resource.
GDS Configuration Setup
See "6.3.2 GDS Configuration Setup" and "PRIMECLUSTER Global Disk Services Configuration and Administration Guide", create a shared disk.
The GDS physical special file which is the target on the node where the following operation is performed needed to be accessed.
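For example, whether the GDS physical special file exists on the node can be checked as follows (the class and volume names match the example used below):
# ls -l /dev/sfdsk/class/dsk/volume0001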
Creating the ZFS storage pool
Perform the procedure up to "5. Export of the ZFS storage pool" on one of the nodes that make up the cluster.
Create the ZFS storage pool with the zpool create command. In the following example, the storage pool name is "app" and the GDS physical special file (/dev/sfdsk/class/dsk/volume0001) is used.
# zpool create app /dev/sfdsk/class/dsk/volume0001
# zfs list -r app
NAME   USED  AVAIL  REFER  MOUNTPOINT
app    178K   129G  28.5K  /app
For details on the command to use, see the Solaris ZFS management guide.
The highest level ZFS file system is automatically created when the ZFS storage pool is created as above.
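As a reference, the created storage pool and its highest level file system can be confirmed, for example, with the following commands (using the pool name "app" from the example above):
# zpool status app
# zfs list app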
Creating the ZFS file system
Create the ZFS file systems with the zfs create command. Three non-legacy file systems, app/home, app/config, and app/data, are created in the example below.
# zfs create app/home
# zfs create app/config
# zfs create app/data
# zfs list -r app
NAME   USED  AVAIL  REFER  MOUNTPOINT
When creating a legacy file system, set the mountpoint property to legacy. The following is an example of setting the file system app/data to legacy.
# zfs set mountpoint=legacy app/data
Information
It is also acceptable to set the mountpoint property at creation time by specifying "-o mountpoint=legacy" with the zfs create command.
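For example, the legacy file system app/data shown above could also have been created in a single step as follows, and the resulting value can be confirmed with the zfs get command:
# zfs create -o mountpoint=legacy app/data
# zfs get mountpoint app/data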
Prerequisites for Fsystem resources
See "Note" in "6.7.1.2 Creating Fsystem Resources" and "6.7.1.2.1 Prerequisites" to perform the prerequisites for registering the Fsystem resources.
For NFS sharing settings, see the procedure in "6.7.1.2.1 Prerequisites." Also, when sharing non-legacy file systems via NFS, the sharenfs property of ZFS must be set up.
For information on how to set the sharenfs property, see the zfs(1M) manual page. The following is an example of setting sharenfs to "on" for the file system app/home.
# zfs set sharenfs=on app/home
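The current value can be confirmed, for example, with the zfs get command:
# zfs get sharenfs app/home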
Export of the ZFS storage pool
Export the ZFS storage pool created above with the zpool export command.
# zpool export app
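As a reference, after the export the pool no longer appears in the output of the zpool list command, which can be used to confirm that the export succeeded:
# zpool list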
When using ZFS with PRIMECLUSTER, note the following points on operation.
Do not place files in the import destination or the mount destinations, and do not mount other file systems on them. During Online processing of the ZFS resource, the ZFS storage pool is imported and the ZFS file systems are mounted. Therefore, if a file exists in the directory of the import destination or in the mountpoint of a file system, or if another file system is mounted there, startup of the userApplication or failover may fail.
The ZFS storage pool must be exported before the userApplication is started. After creating the ZFS storage pool and completing the settings, export the ZFS storage pool according to step 5 of "6.4.1.2 Setup Procedure." Also, if the pool has been imported for purposes such as backup, export it before starting the userApplication. If it remains imported, startup of the userApplication fails.
After creating the ZFS storage pool, do not access it via the GDS physical special files (/dev/sfdsk/class/dsk/volume and /dev/sfdsk/class/rdsk/volume).
For non-legacy ZFS file systems, do not execute the unshare command on the mountpoint of a dataset whose sharenfs property or share.nfs property is set to "on."
In addition, do not delete the ZFS share with zfs set -c. If these commands are executed, a resource error is detected and the userApplication cannot be started, because the mountpoint is not shared via NFS when the pool and the dataset are brought online.
If the unshare command is executed by mistake, stop RMS on all nodes, import the non-legacy ZFS file system manually, and then execute the share command on the directory where the unshare command was executed.
After that, delete the ZFS share, and then export the non-legacy ZFS file system.
If the ZFS share was deleted with zfs set -c by mistake, stop RMS on all nodes, import the non-legacy ZFS file system manually, and then set the sharenfs property or the share.nfs property to "off" for the dataset whose ZFS share is to be deleted. After that, export the non-legacy ZFS file system.
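The following is a sketch of this recovery procedure, assuming the pool name "app", the dataset "app/home", the GDS device directory from the earlier example, and that RMS is stopped on all nodes with the hvshut -a command; replace these names with those of your environment:
# hvshut -a
# zpool import -d /dev/sfdsk/class/dsk app
# zfs set sharenfs=off app/home
# zpool export app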