This section describes how to cancel system disk mirroring of ZFS mirroring in a ZFS boot environment with the following configuration.
[Figure: configuration for system disk mirroring of ZFS mirroring. GDS: Global Disk Services]
In the following procedure, replace the physical slice names, volume names, pool names, and so on with the names used in the actual system.
System disk mirroring of ZFS mirroring is cancelled according to the following procedure.
[Figure: outline of the cancellation procedure. GDS: Global Disk Services]
A reboot is required in step 19) of [8] in the above figure. The other steps can be performed without stopping services and application programs.
In step 10) of [4] in the above figure, ZFS performs a resynchronization process. This step can also be performed without stopping services and application programs.
However, if higher safety is required, stop the services and application programs and back up the system disks before cancelling the mirroring.
The time required depends on the performance of the servers and disks, the size of the slices and file systems, and so on, and varies from system to system.
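If you take the backup mentioned above, one minimal sketch is to snapshot the ZFS root pool and save the stream to a location outside the pool. The snapshot name before_cancel and the destination /backup/rpool.snap are placeholders, not names used in this procedure; any backup method you normally use for system disks is equally valid.

# zfs snapshot -r rpool@before_cancel
# zfs send -R rpool@before_cancel > /backup/rpool.snap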
1) Determine the disk to be used as the system disk after the mirroring cancellation. Then check the physical slice number, SDX disk name, and group name of the disk.
Check the volumes attached to the pool and the disks on which those volumes exist. Either disk can be used as the system disk after the mirroring cancellation.
# zpool status rpool        (*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            /dev/sfdsk/System/dsk/Volume2  ONLINE       0     0     0   (*2)
            /dev/sfdsk/System/dsk/Volume1  ONLINE       0     0     0   (*3)
(*2) Volume name of a volume attached to the pool
(*3) Volume name of a volume attached to the pool
Check the details of the volume information.
# sdxinfo -e long -o Volume2
OBJ    NAME    TYPE   CLASS   GROUP   ... SNUM PJRM
------ ------- ------ ------- ------- ... ---- ----
volume Volume2 mirror System  Group2  ...    0    *
                                          (*4)
(*4) Slice number of a slice to configure a volume
# sdxinfo -e long -o Volume1
OBJ    NAME    TYPE   CLASS   GROUP   ... SNUM PJRM
------ ------- ------ ------- ------- ... ---- ----
volume Volume1 mirror System  Group1  ...    0    *
                                          (*5)
(*5) Slice number of a slice to configure a volume
# sdxinfo -D -c System        (*6) root class name
OBJ   NAME    TYPE   CLASS   GROUP   DEVNAM  ...
----- ------- ------ ------- ------- ------- ...
disk  Root2   mirror System  Group2  c1t0d0  ...
disk  Root1   mirror System  Group1  c0t0d0  ...
# sdxinfo -S -c System        (*7) root class name
OBJ    CLASS   GROUP   DISK    VOLUME  STATUS
------ ------- ------- ------- ------- --------
slice  System  Group2  Root2   Volume2 ACTIVE
slice  System  Group1  Root1   Volume1 ACTIVE
In this example, the disk c1t0d0 will become a system disk after the mirroring cancellation.
The physical slice number is 0, the SDX disk name is Root2, the group name is Group2, and the volume name is Volume2.
2) From the ZFS root pool, detach the volume on the disk that will be used as the system disk after the mirroring cancellation.
# zpool detach rpool /dev/sfdsk/System/dsk/Volume2
               (*1)             (*2)      (*3)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name
3) Confirm that mirroring has been cancelled.
# zpool status rpool        (*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                             STATE     READ WRITE CKSUM
        rpool                            ONLINE       0     0     0
          /dev/sfdsk/System/dsk/Volume1  ONLINE       0     0     0   (*2)
Confirm that the following are displayed:
ONLINE is displayed in the state field.
Only the volume (*2) is displayed in the config field.
4) Stop the volume.
# sdxvolume -F -c System -v Volume2
                  (*1)      (*2)
(*1) Root class name
(*2) Volume name.
5) Delete the volume.
# sdxvolume -R -c System -v Volume2
                  (*1)      (*2)
(*1) Root class name
(*2) Volume name
6) Delete the group.
# sdxgroup -R -c System -g Group2
                 (*1)      (*2)
(*1) Root class name
(*2) Group name of the group to which the system disk is connected
7) Delete a disk from a root class.
# sdxdisk -R -c System -d Root2
                (*1)      (*2)
(*1) Root class name
(*2) SDX disk name
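Before initializing the disk in the next step, you can optionally confirm that Volume2, Group2, and Root2 deleted in steps 4) to 7) no longer appear in the GDS configuration. This check is not part of the formal procedure; it reuses the sdxinfo command from step 1):

# sdxinfo -c System

Confirm that Volume2, Group2, and Root2 are no longer displayed.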
8) Initialize the disk deleted from the root class in step 7).
# dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=1024k
                               (*1)  (*2)
(*1) The disk deleted from a root class in step 7)
(*2) Physical slice number (checked in step 1) of a volume to configure the ZFS root pool
# format -e
Select the disk (c1t0d0) shown in (*1). If the inquiry "Disk not labeled. Label it now?" is displayed, enter "y".
9) Create slices on the disk initialized in step 8).
Use an OS command such as format(1M) to create the same slice configuration on this disk as on the system disk.
Information
If the slice configuration of the system disk was not recorded (for example, on paper) when system disk mirroring was set up, make the slice configuration the same as that of the disk connected to the group of the system disk (Root1 in this example). Start the system from the OS installation CD and check the slice configuration of the disk connected to the group by using the format(1M) command or the prtvtoc(1M) command.
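One possible sketch of copying the slice configuration, assuming that the label of the disk connected to the group (c0t0d0, Root1 in this example) can be read directly with prtvtoc(1M) on the running system, is to feed its output to fmthard(1M):

# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2

This is only an illustration; verify the resulting layout with format(1M) or prtvtoc(1M) before continuing.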
10) Attach the slice created in step 9) to the ZFS root pool.
After the zpool attach command is executed, ZFS performs a resynchronization process. During this time, the OS message (SUNW-MSG-ID: ZFS-8000-QJ) may be output to the console, but it does not affect the system.
# zpool attach rpool /dev/sfdsk/System/dsk/Volume1 c1t0d0s0
               (*1)             (*2)      (*3)     (*4)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name of a volume to configure the ZFS root pool.
(*4) Slice created in step 9)
11) Check the status of the ZFS root pool.
# zpool status rpool        (*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                               STATE     READ WRITE CKSUM
        rpool                              ONLINE       0     0     0
          mirror                           ONLINE       0     0     0
            /dev/sfdsk/System/dsk/Volume1  ONLINE       0     0     0   (*2)
            c1t0d0s0                       ONLINE       0     0     0   (*3)
Confirm that the following are displayed:
ONLINE is displayed in the state field.
*During the resynchronization process, "DEGRADED" may be displayed in the state field. If "ONLINE" is displayed in the state field after completing the resynchronization process, there is no problem.
"resilvered" or "resilver completed" is displayed in the scrub or scan field.
*During the resynchronization process, "resilver in progress" is displayed in the scrub or scan field.
The volume (*2) that configures the ZFS root pool and the slice (*3) attached in step 10) are displayed in the config field.
12) Install the boot block on the slice attached in step 10).
For the following environments, do not perform this procedure:
For the environment where Solaris 10 is used and the kernel patch 144500-19 or later is applied
For the environment where Solaris 11 11/11 or later is used
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/rdsk/c1t0d0s0    (*1)
(*1) Slice attached in step 10)
13) Detach the volume attached to the ZFS root pool.
# zpool detach rpool /dev/sfdsk/System/dsk/Volume1
               (*1)             (*2)      (*3)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name
14) Confirm that mirroring has been cancelled.
# zpool status rpool        (*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0   (*2)
Confirm that the following are displayed:
ONLINE is displayed in the state field.
Only the slice (*2) attached in step 10) is displayed in the config field.
15) Stop the volume.
# sdxvolume -F -c System -v Volume1
                  (*1)      (*2)
(*1) Root class name
(*2) Volume name
16) Delete the volume.
# sdxvolume -R -c System -v Volume1
                  (*1)      (*2)
(*1) Root class name
(*2) Volume name
17) Delete the group.
# sdxgroup -R -c System -g Group1
                 (*1)      (*2)
(*1) Root class name
(*2) Group name
18) Delete a disk from a root class.
# sdxdisk -R -c System -d Root1
                (*1)      (*2)
(*1) Root class name
(*2) SDX disk name
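At this point, all GDS objects used for the mirroring have been deleted. As an optional check that is not part of the formal procedure, you can run sdxinfo and confirm that Volume1, Group1, Root1, and the root class System are no longer displayed:

# sdxinfo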
19) Set boot-device property in OpenBoot.
The procedure is different for the following four cases:
The disk connected to a 12Gbps SAS card in SAN Boot environment
The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers
SAN Boot environment
The other cases
[The disk connected to a 12Gbps SAS card in SAN Boot environment]
19a-1) Delete the tuning parameter.
Delete "SDX_BOOT_PROPERTY=off" to the /etc/opt/FJSVsdx/sdx.cf file.
# vi /etc/opt/FJSVsdx/sdx.cf
...
SDX_BOOT_PROPERTY=off <-Delete this line.
19a-2) Check that the tuning parameter is deleted properly.
# grep SDX_BOOT_PROPERTY /etc/opt/FJSVsdx/sdx.cf
#
19a-3) Check the partition of the slice to configure a volume for the ZFS root pool.
# ls -l /dev/dsk | grep c0t600000E00D28000000280CC600000000d0s0    (*1)
lrwxrwxrwx 1 root root 64 Feb 2 13:57 c0t600000E00D28000000280CC600000000d0s0 ->
../../devices/scsi_vhci/disk@g600000e00d28000000280cc600000000:a   (*2)
(*1) Slice to configure a volume for the ZFS root pool.
(*2) Partition of the slice
19a-4) Check the boot disk information.
# prtconf -v /dev/rdsk/c0t600000E00D28000000280CC600000000d0s0
disk, instance #0
    Driver properties:
        ...
    Hardware properties:
        ...
    Paths from multipath bus adapters:
        Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0da0cc620,0 lsc#3(online)   (*1) (*2)
    ...
(*1) Disk node name (disk@w500000e0da0cc620,0 in this example)
(*2) SAS Address (500000e0da0cc620 in this example)
19a-5) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
19a-6) Check the device path of SAS HBA that is connected to the disk.
ok probe-scsi-all
/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0 (*1)
FCode Version 1.00.65, MPT Version 2.05, Firmware Version 4.00.00.00
Target a
  Unit 0  Disk  TOSHIBA MBF2300RC  3706  585937500 Blocks, 300 GB
  SASDeviceName 50000393d82956d4  SASAddress 500000e0d0460306  PhyNum 0
Target b
  Unit 0  Disk  TOSHIBA MBF2300RC  3706  585937500 Blocks, 300 GB
  SASDeviceName 50000393d828bbfc  SASAddress 500000e0da0cc620  PhyNum b
                                             (*2)
...
ok
(*1) Device path
(*2) SAS Address checked in *2 in step 19a-4)
19a-7) Set the boot-device property in the OpenBoot environment.
ok setenv boot-device /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0da0cc620,0:a (*4)
(*4) Concatenated value of the device path (*1) checked in step 19a-6), the disk node name (*1) checked in step 19a-4), and the partition of the slice (*2) checked in step 19a-3)
19a-8) Set the multipath-boot? property in the OpenBoot environment.
ok setenv multipath-boot? false
19a-9) Start the system.
ok boot
[The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers]
19b-1) Confirm the partition of the slice that configures the ZFS root pool.
[Example for Solaris 11.3 or earlier]
# ls -l /dev/dsk | grep c0t5000CCA0150FEA10d0s0    (*1)
lrwxrwxrwx 1 root root 48 Apr 25 13:46 c0t5000CCA0150FEA10d0s0 ->
../../devices/scsi_vhci/disk@g5000cca0150fea10:a   (*2)
(*1) Slice attached in step 10)
(*2) Partition of a slice
[Example for Solaris 11.4 or later]
# ls -l /dev/dsk | grep c0t50000397683346A9d0s0    (*1)
lrwxrwxrwx 1 root root 48 Jul 27 19:23 c0t50000397683346A9d0s0 ->
../../devices/scsi_vhci/disk@g50000397683346a9:a   (*2)
(*1) Slice attached in step 10)
(*2) Partition of a slice
19b-2) Confirm the obp-path parameter of the boot disk.
[Solaris 11.3 or earlier]
Check the obp-path parameter.
# prtconf -v /dev/rdsk/c0t5000CCA0150FEA10d0s0
disk, instance #0
    Driver properties:
        ...
    Hardware properties:
        ...
    Paths from multipath bus adapters:
        ...
        name='obp-path' type=string items=1
            value='/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca0150fea11,0'   (*3)
        ...
    Device Minor Nodes:
        ...
(*3) Parameter of the obp-path
[Solaris 11.4 or later]
Check the ddi-boot-path parameter.
# prtconf -v /dev/rdsk/c0t50000397683346A9d0s2
disk, instance ...
    Device Hold:
        ...
    Driver properties:
        ...
    Hardware properties:
        ...
    Paths from multipath bus adapters:
        Path 1: /pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000397683346aa,0 mpt_sas#2 (online)
        name='ddi-boot-path' type=string items=1
            value='/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0'   (*3)
        ...
    Device Minor Nodes:
        ...
(*3) Parameter of the ddi-boot-path
19b-3) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
19b-4) Set the boot-device property.
[Solaris 11.3 or earlier]
ok setenv boot-device /pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca0150fea11,0:a (*4)
(*4) Device name formed by combining the obp-path parameter (*3) checked in step 19b-2) and the partition (*2) checked in step 19b-1)
[Solaris 11.4 or later]
ok setenv boot-device /pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0:a (*4)
(*4) Device name formed by combining the ddi-boot-path parameter (*3) checked in step 19b-2) and the partition (*2) checked in step 19b-1)
19b-5) Start the system.
ok boot
[SAN Boot environment]
19c-1) Among the device paths checked in step 2) of "7.2.1.2 Procedure for System Disk Mirroring Settings (ZFS Boot Environment: ZFS Mirroring)," set the device path of the slice attached in step 10) of this section to the boot-device property.
For how to set the boot-device property, see the SAN Boot manual.
[The other cases]
19d-1) Check the device path of the slice that configures the ZFS root pool.
# ls -l /dev/dsk | grep c1t0d0s0    (*1)
lrwxrwxrwx 1 root root 63 Aug 20 12:00 c1t0d0s0 ->
../../devices/pci@1f,4000/scsi@3/sd@1,0:a    (*2)
(*1) Slice attached in step 10)
(*2) Device path of a slice
19d-2) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
19d-3) Check the device name in the OpenBoot environment and set the boot-device property by the method corresponding to the confirmation result.
ok show-devs /pci@1f,4000/scsi@3    (*3)
...
/pci@1f,4000/scsi@3/disk
...
(*3) The path obtained by removing the last element (/sd@1,0:a in this example) from the path (*2) displayed in step 19d-1)
Set the boot-device property by a method corresponding to the above confirmation result.
When the device name "disk" is not output to the last element path displayed in show-devs:
ok setenv boot-device /pci@1f,4000/scsi@3/sd@1,0:a (*4)
(*4) Path (*2) displayed in step 19d-1)
When the device name "disk" is output to the last element path displayed in show-devs:
ok setenv boot-device /pci@1f,4000/scsi@3/disk@1,0:a (*5)
(*5) The path (*2) displayed in step 19d-1) with the part before @ in the last element (sd in this example) replaced with disk
19d-4) Start the system.
ok boot
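After the system starts, you can optionally run zpool status again, as in step 14), and confirm that only the slice c1t0d0s0 attached in step 10) is displayed in the config field of the ZFS root pool:

# zpool status rpool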