The following two methods are available for mirroring the system disk with GDS in a ZFS boot environment.
GDS mirroring
A method that configures a ZFS storage pool with one GDS mirror volume.
GDS mirroring has the following features:
The system disk is made redundant by the GDS mirroring function.
Availability of the system disk is increased by the I/O timeout function of GDS.
The system disk can be swapped in the same way as in the existing UFS boot environment.
ZFS mirroring
A method that configures a ZFS storage pool with two single volumes of GDS (mirror volumes whose multiplicity is 1).
ZFS mirroring has the following features:
The system disk is made redundant by the ZFS mirroring function.
Availability of the system disk is increased by the I/O timeout function of GDS.
Figure A.1 System disk mirroring in a ZFS boot environment
GDS: Global Disk Services
Note the following when mirroring the system disk with GDS in a ZFS boot environment:
The number of disks
Mirroring with a total of two disks, the system disk and the mirror destination disk, is supported.
Specify only one system disk as the installation destination at the time of OS installation.
The disk label type of the system disk
When using Solaris 11.1 or later on SPARC M12, SPARC M10 (XCP2230 or later), or the SPARC T series (System Firmware 8.4 or later), the OS can be installed on a disk with an EFI label. However, when using GDS, install the OS on a disk with a VTOC label.
Before installing the OS on a disk with a VTOC label, first execute the format -e command to apply a VTOC (SMI) label to the disk. For details, see the product notes of SPARC M12/M10 or the manuals of Oracle Solaris.
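Example (a sketch for reference; the disk name is the one used in the examples later in this section):
# format -e c0t5000C5001D4809FFd0
Select "label" from the format menu, and choose the SMI (VTOC) label when prompted for the label type.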
Partition configuration of the system disk
At least 20 MB of disk space on the system disk is required for the GDS private slice. Therefore, configure the ZFS root pool as shown below at the time of OS installation.
Specify a slice (cXtXdXsY), not the entire system disk (cXtXdX or cXtXdXs2), as the OS installation destination.
*For Y, specify an integer from 0 to 7 other than 2. Normally it is 0.
Make the size of the ZFS root pool less than (disk size - 21) MB.
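As a reference check (a sketch; the device name is that of rootDisk0001 used later in this section), you can display the slice layout in 512-byte sectors with prtvtoc(1M) and confirm that the slice used for the ZFS root pool leaves at least 21 MB (43,008 sectors) of the disk unused:
# prtvtoc /dev/rdsk/c0t5000CCA0150FEA10d0s2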
Setting and cancelling the system disk mirroring
Setting and cancelling the system disk mirroring cannot be performed from GDS Management View in a ZFS boot environment.
See the following for setting and cancelling the system disk mirroring using commands.
For GDS mirroring: "7.1 System Disk Mirroring in a ZFS Boot Environment (GDS Mirroring)"
For ZFS mirroring: "7.2 System Disk Mirroring in a ZFS Boot Environment (ZFS Mirroring)"
Backing up and restoring the system disk
In the root class in a ZFS boot environment, backing up and restoring the system disk with snapshots created by slice detachment or by GDS Snapshot is not allowed.
Backing up and restoring the system disk in a ZFS boot environment must be performed by the ZFS snapshot function (such as zfs snapshot, zfs rollback, zfs send, and zfs receive commands). For details, see "6.2 Backing Up and Restoring a System Disk in a ZFS Boot Environment."
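For illustration, a minimal sketch of creating a recursive snapshot of the root pool and saving it to a file (the pool name rpool, the snapshot name backup, and the output path are assumptions; follow the referenced section for the supported procedure):
# zfs snapshot -r rpool@backup
# zfs send -Rv rpool@backup > /backup/rpool.sendstream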
Creation of a boot environment
When the ZFS root pool is configured in GDS volumes, the following table shows whether a boot environment can be created or not.
Method of creating a boot environment | Solaris 11 or earlier | Solaris 11.1 or later
--- | --- | ---
Creating Boot Environment (BE) with the beadm(1M) command | N | Y
Creating a boot environment by Solaris Live Upgrade | N | N
Y: possible, N: impossible
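For example, on Solaris 11.1 or later a boot environment can be created and checked with the beadm(1M) command (the boot environment name BE1 is an assumption):
# beadm create BE1
# beadm list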
Operation of the system disk which is mirrored by GDS mirroring
After mirroring the system disk by GDS mirroring, ZFS recognizes a GDS volume as a virtual device which becomes a configuration element of the ZFS root pool. ZFS does not recognize the disk configuration or the slice configuration of GDS.
Therefore, changing the GDS disk configuration under the ZFS root pool by GDS operations, such as swapping a physical disk or changing a group configuration, does not affect ZFS.
Hence, after mirroring the system disk, you can manage or operate the system disk by the same method as the existing UFS boot environment.
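For example, the state of the slices that make up the mirrored root volume can be checked with the sdxinfo command, just as in a UFS boot environment (the class name RootClass is the one used in the examples in this section):
# sdxinfo -S -c RootClass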
Swapping the system disk which is mirrored by ZFS mirroring
Before swapping disks, detach a GDS volume on the disk to be swapped from the ZFS root pool and delete the disk from the root class.
After the disk is swapped, register the new disk to the root class and create a volume. Then attach the created volume to the ZFS root pool.
[How to swap the system disk which is mirrored by ZFS mirroring]
1) Check the device name of the disk to be swapped.
In the DEVNAM field in the result output by executing the sdxinfo command, check the device name of the disk to be swapped.
If you know the device name of the disk to be swapped, skip this step.
Example: Swapping the disk of rootDisk0002
In the display example, the device name of rootDisk0002 is "c0t5000C5001D4809FFd0."
# sdxinfo -D
OBJ NAME TYPE CLASS GROUP DEVNAM DEVBLKS DEVCONNECT STATUS
------ ------------ ------ --------- --------- --------------------- --------- ---------- -------
disk rootDisk0001 mirror RootClass rootGroup c0t5000CCA0150FEA10d0 585912500 * ENABLE
disk rootDisk0002 mirror RootClass rootGroup c0t5000C5001D4809FFd0 585912500 * ENABLE
disk dataDisk0001 mirror RootClass dataGroup c0t5000CCA00AC1C874d0 585912500 * ENABLE
disk dataDisk0002 mirror RootClass dataGroup c0t5000CCA0150F96F0d0 585912500 * ENABLE
2) Detach a volume on the disk from the ZFS root pool.
Example:
# zpool detach rpool /dev/sfdsk/RootClass/dsk/Volume2
3) Delete the disk from the root class.
Example:
# sdxvolume -F -c RootClass -v Volume2
# sdxvolume -R -c RootClass -v Volume2
# sdxgroup -R -c RootClass -g rootGroup
# sdxdisk -R -c RootClass -d rootDisk0002
4) Identify the identifier specific to the hardware to which the disk to be swapped is connected.
To find the identifier specific to the hardware, check the Ap_Id field of the entry in which the device name of the disk to be swapped is displayed.
In the following example, the identifier specific to the hardware of the physical disk "c0t5000C5001D4809FFd0" is "c5::w5000c5001d4809fd."
# cfgadm -av
Ap_Id Receptacle Occupant Condition Information
When Type Busy Phys_Id
...
c5::w5000c5001d4809fd,0 connected configured unknown \
Client Device: /dev/dsk/c0t5000C5001D4809FFd0s0(sd7)
unavailable disk-path n \
/devices/pci@400/pci@2/pci@0/pci@e/scsi@0/iport@4:scsi::w5000c5001d4809fd,0
...
5) Swap the disk.
6) Make the configuration of the new disk the same as the system disk by using an OS command such as format(1M).
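One common way to do this is to copy the VTOC of the remaining system disk to the new disk with prtvtoc(1M) and fmthard(1M) (a sketch; the source device is rootDisk0001 from this example, and the destination is the new disk whose name is confirmed in step 7)):
# prtvtoc /dev/rdsk/c0t5000CCA0150FEA10d0s2 | fmthard -s - /dev/rdsk/c0t5000C5001D4806BFd0s2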
7) Check the device name of the new disk.
For the hardware-specific identifier found in step 4), confirm that "unconfigured" is displayed in the Occupant field, and note the part in the "iport@X:scsi" format displayed in the Phys_Id field.
The new disk is the device whose Phys_Id field contains the same "iport@X:scsi" part and whose Occupant field displays "configured".
In the following example, the device name of the new disk is "c0t5000C5001D4806BFd0."
# cfgadm -av
Ap_Id Receptacle Occupant Condition Information
When Type Busy Phys_Id
...
c5::w5000c5001d4809fd,0 connected unconfigured unknown (sd7)
unavailable disk-path n \
/devices/pci@400/pci@2/pci@0/pci@e/scsi@0/iport@4:scsi::w5000c5001d4809fd,0
c5::w5000c5001d4806bd,0 connected configured unknown \
Client Device: /dev/dsk/c0t5000C5001D4806BFd0s0(sd8)
unavailable disk-path n \
/devices/pci@400/pci@2/pci@0/pci@e/scsi@0/iport@4:scsi::w5000c5001d4806bd,0
...
8) Register the new disk resources with PRIMECLUSTER (the cluster resource management facility).
If the system is not a cluster system, skip to step 9).
8-1) Check the node ID of the cluster managed by PRIMECLUSTER.
In the following example, the node ID is "0."
# /etc/opt/FJSVcluster/bin/clgetnode
RID 3
KEY TRC89
RNAME TRC89
NODEID 0
8-2) Check the PRIMECLUSTER resource ID of the device (resource ID before swapping disks).
Specify the device name checked in step 1) with the -k option of the clgetrid command, and the node ID checked in step 8-1) with the -s option.
In the following example, the resource ID is "25."
# /etc/opt/FJSVcluster/sys/clgetrid -c DISK -k c0t5000C5001D4809FFd0 -s 0
25
8-3) Delete the disk resource of the original disk.
In the following example, delete the resource whose resource ID is "25."
# /etc/opt/FJSVcluster/bin/cldeldevice -r 25
8-4) Confirm that the resource of the original disk has been deleted.
Confirm that the deleted resource is not displayed.
# /etc/opt/FJSVcluster/sys/clgetrid -c DISK -k c0t5000C5001D4809FFd0 -s 0
8-5) Register the new disk with PRIMECLUSTER.
# /etc/opt/FJSVcluster/sys/clautoconfig -r
8-6) Confirm that the device name of the new disk is registered.
If the device name "c0t5000C5001D4806BFd0" is displayed, the new disk has successfully been registered with PRIMECLUSTER.
# /etc/opt/FJSVcluster/bin/clgettree
Cluster 1 cluster
Domain 2 cluster
Shared 7 SHD_cluster
SHD_DISK 3417 SHD_Disk3417 UNKNOWN
DISK 3418 c3t46554A4954535520333030303030383530303043d0 UNKNOWN TRC89
DISK 3477 c3t46554A4954535520333030303030383530303043d0 UNKNOWN ryuta
SHD_DISK 3419 SHD_Disk3419 UNKNOWN
DISK 3420 c3t46554A4954535520333030303030383530303042d0 UNKNOWN TRC89
DISK 3478 c3t46554A4954535520333030303030383530303042d0 UNKNOWN ryuta
Node 3 TRC89 ON
Ethernet 105 igb3 UNKNOWN
SDX_DC 3519 RootClassI UNKNOWN
DISK 71 c0t5000CCA0150FEA10d0 UNKNOWN
DISK 72 c0t5000CCA00AC1C874d0 UNKNOWN
DISK 73 c0t5000CCA0150F96F0d0 UNKNOWN
DISK 3418 c3t46554A4954535520333030303030383530303043d0 UNKNOWN
DISK 3420 c3t46554A4954535520333030303030383530303042d0 UNKNOWN
DISK 3518 c0t5000C5001D4806BFd0 UNKNOWN
Node 5 ryuta ON
Ethernet 106 igb3 UNKNOWN
DISK 101 c0t5000C5001D480F6Fd0 UNKNOWN
DISK 3477 c3t46554A4954535520333030303030383530303043d0 UNKNOWN
DISK 3478 c3t46554A4954535520333030303030383530303042d0 UNKNOWN
9) Reload the GDS disk information.
# sdxinfo -x Refresh
10) Register the new disk to the root class to create a volume.
Example:
# sdxdisk -M -c RootClass -a type=root -d c0t5000C5001D4806BFd0=rootDisk0002:keep
# sdxdisk -C -c RootClass -g rootGroup -d rootDisk0002 -v 0=Volume2:on
11) Attach the volume on the new disk to the ZFS root pool.
Example:
# zpool attach rpool /dev/sfdsk/RootClass/dsk/Volume1 \
/dev/sfdsk/RootClass/dsk/Volume2
After executing the zpool attach command, the ZFS resynchronization process is performed. At this time, an OS message (SUNW-MSG-ID: ZFS-8000-QJ) may be output to the console, but it does not affect the system.
During the resynchronization process, "DEGRADED" may be displayed in the state field output by the zpool status command. If "ONLINE" is displayed in the state field after the resynchronization process completes, there is no problem.
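For example, the progress of the resynchronization and the state of the pool can be checked as follows (rpool is the pool name used in this example):
# zpool status rpool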
12) After the resynchronization process is completed by ZFS, install the boot block to the volume on the swapped disk.
For the following environments, do not perform this step:
For the environment where Solaris 10 is used and the kernel patch 144500-19 or later is applied
For the environment where Solaris 11 11/11 or later is used
Example:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/sfdsk/RootClass/rdsk/Volume2
Note
The swapped disk is not set in the boot-device property of OBP automatically. Because the swapped disk is set in the boot-device property when the system is restarted, restart the system as soon as possible after executing step 7).
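For reference, the current value of the boot-device property can be checked from the OS with the eeprom(1M) command:
# eeprom boot-device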