This section describes how to set up system disk mirroring (ZFS mirroring) in a ZFS boot environment with the following configuration.
[Figure: system disk mirroring configuration. GDS: Global Disk Services]
In the subsequent procedures, replace physical slice names, volume names, pool names, and so on with the names used on the actual system.
Set up system disk mirroring (ZFS mirroring) according to the following procedure.
[Figure: system disk mirroring procedure. GDS: Global Disk Services]
A reboot is required in step 16) of [8] in the above figure. The other settings can be performed without stopping services and application programs.
In step 6) of [2] and step 12) of [6] in the above figure, the resynchronization process by ZFS is performed. These operations can also be performed without stopping services and application programs.
However, if higher safety is required, stop the services and application programs and back up the system disks before starting the setup.
The required time depends on the performance of the servers and disks, the size of the slices and file systems, and so on, and varies from system to system.
1) Set the tuning parameter of GDS.
Apply this setting only when the system disk is connected to a 12Gbps SAS card in a SAN Boot environment.
1-1) Set the tuning parameter.
Add "SDX_BOOT_PROPERTY=off" to the /etc/opt/FJSVsdx/sdx.cf file.
# vi /etc/opt/FJSVsdx/sdx.cf
...
SDX_BOOT_PROPERTY=off
1-2) Check that the tuning parameter is set properly.
# grep SDX_BOOT_PROPERTY /etc/opt/FJSVsdx/sdx.cf
SDX_BOOT_PROPERTY=off
2) Create slices on the mirror destination disk.
Use an OS command such as format(1M) to create the same slice configuration on the mirror destination disk as on the system disk.
Make a note of the slice configuration on paper or in a file. It will be necessary when cancelling system disk mirroring.
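As a reference, the slice configuration of the system disk can be copied to the mirror destination disk with the prtvtoc(1M) and fmthard(1M) commands. The following is a sketch assuming that both disks have SMI (VTOC) labels, that slice 2 represents the whole disk, and that c0t0d0 is the system disk and c1t0d0 is the mirror destination disk; replace the device names with those of the actual system.
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
# prtvtoc /dev/rdsk/c1t0d0s2 > /var/tmp/c1t0d0.vtoc
The first command copies the VTOC (slice configuration) of the system disk to the mirror destination disk. The second command saves the resulting slice configuration to a file (/var/tmp/c1t0d0.vtoc is an example file name) for use when cancelling the mirroring.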
3) Check the device path names of the boot device (for a SAN Boot environment).
Check the device path name of the slice where the operating system has been installed (the slice that configures the ZFS root pool) and the device path name of the slice created in step 2).
Make a note of the checked device path names on paper or in a file. They will be necessary when starting the system or cancelling system disk mirroring.
For details on how to check the device path names, see the SAN Boot manual.
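As a reference, the device path name of each slice can be checked from the symbolic link under /dev/dsk, in the same way as in steps 16a-2) and 16d-1). The following is a sketch assuming that c0t0d0s0 is the slice where the OS has been installed; replace the slice name with that of the actual system.
# ls -l /dev/dsk | grep c0t0d0s0
The device path name appears as the link destination of the displayed symbolic link. Check the slice created in step 2) in the same way.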
4) Register the mirror destination disk to a root class.
# sdxdisk -M -c System -a type=root -d c1t0d0=Root2:keep (*1) (*2) (*3)
(*1) Root class name
(*2) Mirror disk
(*3) SDX disk name of the mirror disk
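As a reference, you can confirm that the disk has been registered by displaying the disk information of the root class with the sdxinfo command (its -D and -c options also appear in step 16a-1)). This is an optional check, not a required step.
# sdxinfo -D -c System
Specify the root class name (System in this example) with the -c option, and confirm that the registered mirror destination disk (Root2 in this example) is displayed in the output.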
5) Connect the mirror disk to a group.
# sdxdisk -C -c System -g Group2 -d Root2 -v 0=Volume2:on (*1) (*2) (*3) (*4) (*5) (*6)
(*1) Root class name
(*2) Group name
(*3) SDX disk name of the mirror disk
(*4) Slice number of the slice created in step 2) (0, corresponding to c1t0d0s0 in this example)
(*5) Volume name corresponding to slice (*4)
(*6) JRM mode of volume (*5) (normally on)
6) Attach a volume on the mirror destination disk to the ZFS root pool.
After executing the zpool attach command, the resynchronization process of ZFS is performed. At this time, the OS message (SUNW-MSG-ID: ZFS-8000-QJ) may be output to the console, but it does not affect the system.
# zpool attach rpool c0t0d0s0 /dev/sfdsk/System/dsk/Volume2 (*1) (*2) (*3) (*4)
(*1) ZFS root pool name (checked with the zpool status command)
(*2) Slice where the OS has been installed (the slice that configures the ZFS root pool)
(*3) Root class name
(*4) Name of the volume created in step 5)
7) Check the status of the ZFS root pool.
# zpool status rpool (*1)
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            c0t0d0s0                       ONLINE       0     0     0   (*2)
            /dev/sfdsk/System/dsk/Volume2  ONLINE       0     0     0   (*3)

(*1) ZFS root pool name
Confirm that the following are displayed:
ONLINE is displayed in the state field.
*During the resynchronization process, "DEGRADED" may be displayed in the state field. If "ONLINE" is displayed in the state field after completing the resynchronization process, there is no problem.
"resilvered" or "resilver completed" is displayed in the scrub or scan field.
*During the resynchronization process, "resilver in progress" is displayed in the scrub or scan field.
*When the system is rebooted during the resynchronization process, the process is stopped and "none requested" is displayed in the scrub or scan field. In this case, re-execute the resynchronization process by using the zpool scrub command (see the example after this list).
Slice (*2) where the OS has been installed and the volume (*3) attached in step 6) are displayed in the config field.
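If the resynchronization process has been stopped by a system reboot, it can be re-executed as follows (a sketch assuming that the ZFS root pool name is rpool):
# zpool scrub rpool
After executing the command, check the progress with the zpool status command and confirm that "ONLINE" is displayed in the state field once the process has completed.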
8) Install the boot block to a volume on the mirror disk.
For the following environments, do not perform this procedure:
For the environment where Solaris 10 is used and the kernel patch 144500-19 or later is applied
For the environment where Solaris 11 11/11 or later is used
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/sfdsk/System/rdsk/Volume2 (*1) (*2)
(*1) Root class name
(*2) Volume name of a volume attached in step 6).
9) Detach the original system disk from the ZFS root pool.
# zpool detach rpool c0t0d0s0 (*1) (*2)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Slice where the OS has been installed.
10) Register the original system disk to a root class.
# sdxdisk -M -c System -a type=root -d c0t0d0=Root1:keep (*1) (*2) (*3)
(*1) Root class name
(*2) Original system disk
(*3) SDX disk name of the original system disk
11) Connect the original system disk to a group.
# sdxdisk -C -c System -g Group1 -d Root1 -v 0=Volume1:on (*1) (*2) (*3) (*4) (*5) (*6)
(*1) Root class name
(*2) Group name
(*3) SDX disk name of the original system disk
(*4) Slice number of the slice where the OS has been installed (0, corresponding to c0t0d0s0 in this example)
(*5) Volume name corresponding to slice (*4)
(*6) JRM mode of volume (*5) (normally on)
12) Attach a volume on the original system disk to the ZFS root pool.
After executing the zpool attach command, the resynchronization process of ZFS is performed. At this time, the OS message (SUNW-MSG-ID: ZFS-8000-QJ) may be output to the console, but it does not affect the system.
# zpool attach rpool /dev/sfdsk/System/dsk/Volume2 /dev/sfdsk/System/dsk/Volume1 (*1) (*2) (*3) (*4)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Name of the volume that configures the ZFS root pool (attached in step 6)
(*3) Root class name
(*4) Name of the volume created in step 11)
13) Check the status of the ZFS root pool.
# zpool status rpool (*1)
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            /dev/sfdsk/System/dsk/Volume2  ONLINE       0     0     0   (*2)
            /dev/sfdsk/System/dsk/Volume1  ONLINE       0     0     0   (*3)

(*1) ZFS root pool name
Confirm that the following are displayed:
ONLINE is displayed in the state field.
*During the resynchronization process, "DEGRADED" may be displayed in the state field. If "ONLINE" is displayed in the state field after completing the resynchronization process, there is no problem.
"resilvered" or "resilver completed" is displayed in the scrub or scan field.
*During the resynchronization process, "resilver in progress" is displayed in the scrub or scan field.
*When the system is rebooted during the resynchronization process, the process is stopped and "none requested" is displayed in the scrub or scan field. In this case, re-execute the resynchronization process by using the zpool scrub command (see the example in step 7).
Volume (*2) attached in step 6) and volume (*3) attached in step 12) are displayed in the config field.
14) Install the boot block to a volume on the original system disk.
For the following environments, do not perform this procedure:
For the environment where Solaris 10 is used and the kernel patch 144500-19 or later is applied
For the environment where Solaris 11 11/11 or later is used
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/sfdsk/System/rdsk/Volume1 (*1) (*2)
(*1) Root class name
(*2) Volume name of the volume attached in step 12)
15) Confirm that mirroring has been performed normally.
# zpool status rpool (*1)
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            /dev/sfdsk/System/dsk/Volume2  ONLINE       0     0     0   (*2)
            /dev/sfdsk/System/dsk/Volume1  ONLINE       0     0     0   (*3)

(*1) ZFS root pool name
Confirm that the following are displayed:
ONLINE is displayed in the state field.
Volume (*2) attached in step 6) and volume (*3) attached in step 12) are displayed in the config field.
# sdxinfo -S -c System (*4)
OBJ    CLASS   GROUP   DISK    VOLUME  STATUS
------ ------- ------- ------- ------- --------
slice  System  Group2  Root2   Volume2 ACTIVE
slice  System  Group1  Root1   Volume1 ACTIVE

(*4) Root class name
Confirm that the following is displayed:
The information of both the original system disk (Root1 in this example) and the mirror disk (Root2 in this example) is displayed.
16) Restart the system.
You can start the system from either the original system disk or the mirror disk.
The procedure to start the system from the mirror disk is shown below.
The procedure is different for the following four cases:
The disk connected to a 12Gbps SAS card in SAN Boot environment
The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers
SAN Boot environment
The other cases
[The disk connected to a 12Gbps SAS card in SAN Boot environment]
16a-1) Check the physical disk name registered in the root class.
# sdxinfo -D -c System (*1)
OBJ   NAME    TYPE    CLASS   GROUP   DEVNAM                                 ...
----- ------- ------- ------- ------- -------------------------------------- ...
disk  Root1   mirror  System  Group1  c0t600000E00D0000000001060300040000d0  ...
disk  Root2   mirror  System  Group2  c0t600000E00D28000000280CC600000000d0  ...
...
(*1) Root class name
16a-2) Check the partition of slices to configure a volume for the ZFS root pool.
# ls -l /dev/dsk | grep c0t600000E00D0000000001060300040000d0s0 (*1) (*2)
lrwxrwxrwx 1 root root 64 Feb 2 13:57 c0t600000E00D0000000001060300040000d0s0 -> ../../devices/scsi_vhci/disk@g600000e00d0000000001060300040000d0:a (*3)
# ls -l /dev/dsk | grep c0t600000E00D28000000280CC600000000d0s0 (*1) (*2)
lrwxrwxrwx 1 root root 64 Feb 2 13:57 c0t600000E00D28000000280CC600000000d0s0 -> ../../devices/scsi_vhci/disk@g600000e00d28000000280cc600000000:a (*3)
(*1) Physical disk name checked in step 16a-1)
(*2) The slice number of the slice created in step 2) (s0 in this example)
(*3) Partition of a slice
16a-3) Check the boot disk information.
# prtconf -v /dev/rdsk/c0t600000E00D28000000280CC600000000d0s0
...
    disk, instance #0
        ...
        Paths from multipath bus adapters:
            Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0da0cc620,0
                     lsc#3 (online)  (*1) (*2)
        ...
# prtconf -v /dev/rdsk/c0t600000E00D0000000001060300040000d0s0
...
    disk, instance #0
        ...
        Paths from multipath bus adapters:
            Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0d0460306,0
                     lsc#3 (online)  (*1) (*2)
        ...
(*1) Disk node name (disk@w500000e0da0cc620,0 and disk@w500000e0d0460306,0 in this example)
(*2) SAS Address (500000e0da0cc620 and 500000e0d0460306 in this example)
16a-4) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
16a-5) Check the device path of SAS HBA that is connected to the disk.
ok probe-scsi-all
/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0 (*1)
FCode Version 1.00.65, MPT Version 2.05, Firmware Version 4.00.00.00
Target a
  Unit 0   Disk   TOSHIBA MBF2300RC   3706   585937500 Blocks, 300 GB
  SASDeviceName 50000393d82956d4  SASAddress 500000e0d0460306  PhyNum 0  (*2)
Target b
  Unit 0   Disk   TOSHIBA MBF2300RC   3706   585937500 Blocks, 300 GB
  SASDeviceName 50000393d828bbfc  SASAddress 500000e0da0cc620  PhyNum b  (*2)
...
ok
(*1) Device path
(*2) SAS Address checked in (*2) of step 16a-3)
16a-6) Set the boot-device property.
ok setenv boot-device "/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0da0cc620,0:a
/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460306,0:a" (*3)
(*3) The boot device value is created by concatenating the device path (*1) checked in step 16a-5), the disk node name (*1) checked in step 16a-3), and the partition (:a) of the slice checked in step 16a-2).
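For example, the first boot device in the above setting is constructed from the values shown in the earlier steps as follows:
  /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0   (device path checked in step 16a-5)
+ /disk@w500000e0da0cc620,0               (disk node name checked in step 16a-3)
+ :a                                      (partition checked in step 16a-2)
= /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0da0cc620,0:a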
16a-7) Set the multipath-boot? property.
ok setenv multipath-boot? true
16a-8) Start the system.
ok boot
[The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers]
16b-1) Check the partition of slices to configure a volume.
[Solaris 11.3 or earlier]
# ls -l /dev/dsk | grep c0t5000CCA0150FEA10d0s0 (*1)
lrwxrwxrwx 1 root root 48 Apr 25 13:46 c0t5000CCA0150FEA10d0s0 -> ../../devices/scsi_vhci/disk@g5000cca0150fea10:a (*2)
(*1) Slice (created in step 2) to configure a volume.
(*2) Partition of a slice
[Solaris 11.4 or later]
# ls -l /dev/dsk | grep c0t50000397683346A9d0s0 (*1)
lrwxrwxrwx 1 root root 48 Jul 27 19:23 c0t50000397683346A9d0s0 -> ../../devices/scsi_vhci/disk@g50000397683346a9:a (*2)
(*1) Slice (created in step 2) to configure a volume
(*2) Partition of a slice
16b-2) Check the boot disk information.
[Solaris 11.3 or earlier]
Check the obp-path parameter.
# prtconf -v /dev/rdsk/c0t5000CCA0150FEA10d0s0
    disk, instance #0
        Driver properties:
            ...
        Hardware properties:
            ...
        Paths from multipath bus adapters:
            ...
            name='obp-path' type=string items=1
                value='/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca0150fea11,0'  (*3)
            ...
        Device Minor Nodes:
            ...
(*3) Parameter of the obp-path
[Solaris 11.4 or later]
Check the ddi-boot-path parameter.
# prtconf -v /dev/rdsk/c0t50000397683346A9d0s2
    disk, instance ...
        Device Hold:
            ...
        Driver properties:
            ...
        Hardware properties:
            ...
        Paths from multipath bus adapters:
            Path 1: /pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000397683346aa,0
                    mpt_sas#2 (online)
                name='ddi-boot-path' type=string items=1
                    value='/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0'  (*3)
            ...
        Device Minor Nodes:
            ...
(*3) Parameter of the ddi-boot-path
16b-3) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
16b-4) Start the system in the OpenBoot environment.
[Solaris 11.3 or earlier]
ok boot /pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca0150fea11,0:a (*4)
(*4) The device name formed by concatenating the obp-path checked in step 16b-2) and the partition checked in step 16b-1)
[Solaris 11.4 or later]
ok boot /pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0:a (*4)
(*4) The device name formed by concatenating the ddi-boot-path checked in step 16b-2) and the partition checked in step 16b-1)
[SAN Boot environment]
16c-1) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
16c-2) Start the system in the OpenBoot environment.
ok boot /pci@5,700000/QLGC,qlc@0/fp@0,0/disk@w210000e0004101da,0 (*1)
(*1) Device path name of the slice (created in step 2) which configures a volume.
[The other cases]
16d-1) Check the device path of slice to configure a volume.
# ls -l /dev/dsk | grep c1t0d0s0 (*1)
lrwxrwxrwx 1 root root 63 Aug 20 12:00 c1t0d0s0 -> ../../devices/pci@1f,4000/scsi@3/sd@1,0:a (*2)
(*1) Slice (created in step 2) to configure a volume
(*2) Device path of a slice
16d-2) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
16d-3) Check the device name in the OpenBoot environment, and start the system by the method corresponding to the result.
ok show-devs /pci@1f,4000/scsi@3 (*3)
...
/pci@1f,4000/scsi@3/disk
...
(*3) The path specified here is the path (*2) displayed in step 16d-1) with the last element (/sd@1,0:a in this example) removed.
Start the system by a method corresponding to the above confirmation result.
When the device name "disk" is not output to the last element path displayed in show-devs:
ok boot /pci@1f,4000/scsi@3/sd@1,0:a (*4)
(*4) Path (*2) displayed in step 16d-1)
When the device name "disk" is output to the last element path displayed in show-devs:
ok boot /pci@1f,4000/scsi@3/disk@1,0:a (*5)
(*5) The path specified here is the path (*2) displayed in step 16d-1) with the part before @ in the last element (sd in this example) replaced with disk.