This section describes how to cancel system disk mirroring of GDS mirroring in a ZFS boot environment with the following configuration.
The following configuration is an example where the slice for the ZFS root pool and the slice for data exist on the system disk, and the disk for dump is a disk other than the system disk.
Even if the slice for data does not exist on the system disk, or the disk for dump does not exist, you can still cancel system disk mirroring with this method.
GDS: Global Disk Services
In the subsequent procedures, replace the physical slice names, volume names, pool names, and so on with the names used in the actual system.
System disk mirroring of GDS mirroring is cancelled according to the following procedure.
GDS: Global Disk Services
A reboot is required in step 9) of [5] in the above figure. The other steps can be performed without stopping the services and application programs.
In step 6) of [3] in the above figure, the resynchronization process by ZFS is performed. This operation can also be performed without stopping the services and application programs.
However, if higher safety is required, stop the services and application programs and back up the system disks before unmirroring.
The required time depends on the performance of servers or disks, the size of slices and file systems, and so on. It also varies from system to system.
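For example, one way to take a backup from the running system before starting is to create a recursive snapshot of the ZFS root pool and send it to another location (the snapshot name and destination below are only illustrative; follow the backup procedure used at your site):
# zfs snapshot -r rpool@before_unmirror
# zfs send -R rpool@before_unmirror > /net/backuphost/export/backup/rpool_before_unmirror.zsend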
This section describes how to cancel system disk mirroring when the slice for the ZFS root pool and the slice for data exist on the system disk, and the disk for dump is a disk other than the system disk, as shown in the figure in "7.1.2 System Disk Mirroring Cancellation in a ZFS Boot Environment (GDS Mirroring)."
If the slice for data does not exist on the system disk, or if the disk for dump does not exist, you do not have to perform the cancellation steps for the non-existent slice or disk.
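Before starting the procedure, you can list the current GDS configuration and ZFS pools to identify the names to substitute (the root class name System and the pool names rpool, datapool, and dumppool are the examples used below):
# sdxinfo -c System        <- disks, groups, and volumes registered in the root class
# zpool status             <- ZFS pools and the devices that compose them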
1) Determine the disks to be used as the system disk and the disk for dump after the mirroring cancellation. Then, check the physical slice numbers, SDX disk names, and group names of those disks.
Among the mirrored system disks, the disk which is not the current boot device must be a system disk after the mirroring cancellation.
Any one of the mirrored disks for dump must be the disk for dump after the mirroring cancellation.
The method for checking the current boot device differs among the following four cases:
The disk connected to a 12Gbps SAS card in SAN Boot environment
The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers
SAN Boot environment
The other cases
[The disk connected to a 12Gbps SAS card in SAN Boot environment]
1a-1) Check the physical disk name registered in the root class.
# sdxinfo -D -c System                     (*1)
OBJ    NAME    TYPE    CLASS    GROUP    DEVNAM                                  ...
------ ------- ------- -------- -------- --------------------------------------- ...
disk   Root1   mirror  System   Group1   c0t600000E00D0000000001060300040000d0   ... (*2)
disk   Root2   mirror  System   Group1   c0t600000E00D28000000280CC600000000d0   ... (*2)
(*1) Root class name
(*2) Device name
1a-2) Check the information of the device registered in the root class ((*2) of 1a-1).
# prtconf -v /dev/rdsk/c0t600000E00D0000000001060300040000d0s0
# prtconf -v /dev/rdsk/c0t600000E00D0000000001060300040000d0s0
...
disk, instance #0
    ...
    Paths from multipath bus adapters:
        Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0d0460306,0 lsc#3 (online)   (*1) (*2)
    ...
# prtconf -v /dev/rdsk/c0t600000E00D28000000280CC600000000d0s0
...
disk, instance #0
    ...
    Paths from multipath bus adapters:
        Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0da0cc620,0 lsc#3 (online)   (*1) (*2)
    ...
(*1) Disk node name (disk@w500000e0d0460306,0 and disk@w500000e0da0cc620,0 in this example)
(*2) SAS Address (500000e0d0460306 and 500000e0da0cc620 in this example)
1a-3) Check the bootpath parameter.
# prtconf -pv | grep bootpath
    bootpath:  '/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460306,0:a'
The boot device is the physical disk whose information checked in step 1a-2) matches the bootpath parameter checked in step 1a-3) (c0t600000E00D0000000001060300040000d0 in this example).
Use the disk connected to the same group as the boot device (c0t600000E00D28000000280CC600000000d0 in this example) as the system disk after unmirroring the system disk.
The SDX disk name is Root2 and the group name is Group1.
[The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers]
1b-1) Check the physical disk name registered in the root class.
[Example for Solaris 11.3 or earlier]
# sdxinfo -D -c System        (*1) root class name
OBJ    NAME    TYPE    CLASS    GROUP    DEVNAM                  ...
------ ------- ------- -------- -------- ----------------------- ...
disk   Root1   mirror  System   Group1   c0t5000CCA0150FEA10d0   ...
disk   Root2   mirror  System   Group1   c0t5000C5001D4809FFd0   ...
...
[Example for Solaris 11.4 or later]
# sdxinfo -D -c System        (*1) root class name
OBJ    NAME    TYPE    CLASS    GROUP    DEVNAM                  ...
------ ------- ------- -------- -------- ----------------------- ...
disk   Root1   mirror  System   Group1   c0t5000039768334825d0   ...
disk   Root2   mirror  System   Group1   c0t50000397683346A9d0   ...
...
1b-2) Check the information of the physical disk registered in the root class.
[Solaris 11.3 or earlier]
Check the obp-path parameter.
# prtconf -v /dev/rdsk/c0t5000CCA0150FEA10d0s2
disk, instance #0
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        ...
        name='obp-path' type=string items=1
            value='/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca0150fea11,0'
        ...
    Device Minor Nodes:
        ...
# prtconf -v /dev/rdsk/c0t5000C5001D4809FFd0s2
disk, instance #6
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        ...
        name='obp-path' type=string items=1
            value='/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000c5001d4809fd,0'
        ...
    Device Minor Nodes:
        ...
[Solaris 11.4 or later]
Check the ddi-boot-path parameter.
# prtconf -v /dev/rdsk/c0t5000039768334825d0s2
disk, instance ...
    Device Hold:
    ...
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        Path 1: /pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w5000039768334826,0 mpt_sas#2 (online)
        name='ddi-boot-path' type=string items=1
            value='/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w5000039768334826,0'
        ...
    Device Minor Nodes:
        ...
# prtconf -v /dev/rdsk/c0t50000397683346A9d0s2
disk, instance ...
    Device Hold:
    ...
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        Path 2: /pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000397683346aa,0 mpt_sas#2 (online)
        name='ddi-boot-path' type=string items=1
            value='/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0'
        ...
    Device Minor Nodes:
        ...
1b-3) Check the bootpath parameter.
[Example for Solaris 11.3 or earlier]
# prtconf -pv | grep bootpath
    bootpath: '/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000cca0150fea11,0:a'
[Example for Solaris 11.4 or later]
# prtconf -pv | grep bootpath
    bootpath: '/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w5000039768334826,0:a'
The boot device is the physical disk (c0t5000CCA0150FEA10d0 in this example for Solaris 11.3 or earlier, c0t5000039768334825d0 for Solaris 11.4 or later) whose information (the obp-path parameter for Solaris 11.3 or earlier, the ddi-boot-path parameter for Solaris 11.4 or later) checked in step 1b-2) matches the bootpath parameter checked in step 1b-3).
Use the disk (c0t5000C5001D4809FFd0 in this example for Solaris 11.3 or earlier, c0t50000397683346A9d0 for Solaris 11.4 or later) connected to the same group as the boot device as the system disk after unmirroring the system disk.
The SDX disk name is Root2 and the group name is Group1.
[SAN Boot environment]
1c-1) Check the current boot device.
# prtconf -pv | grep bootpath
bootpath: '/pci@1e,600000/fibre-channel@2/disk@10,0:a'
Among the disks whose slices were checked in step 3) in "7.1.1.2 Procedure for System Disk Mirroring Settings (ZFS Boot Environment: GDS Mirroring)," the disk which is not the boot device must be the system disk after the mirroring cancellation.
[The other cases]
1d-1) Check the current boot device.
# prtconf -pv | grep bootpath
bootpath: '/pci@1f,4000/scsi@3/disk@0,0:a'
Replace the part before @ in the last element of the bootpath device name (disk in this example) with ".*", then check the current boot device.
# ls -lat /dev/dsk | grep "/pci@1f,4000/scsi@3/.*@0,0:a"
lrwxrwxrwx 1 root root 63 Aug 20 12:00 c0t0d0s0 ->
../../devices/pci@1f,4000/scsi@3/sd@0,0:a
In this example, c0t0d0s0 is the current boot device, and the 0 in s0 is the physical slice number.
# sdxinfo -D -c System        (*1) root class name
OBJ    NAME    TYPE    CLASS    GROUP    DEVNAM   ...
------ ------- ------- -------- -------- -------- ...
disk   Root1   mirror  System   Group1   c0t0d0   ...
disk   Root2   mirror  System   Group1   c1t0d0   ...
...
In this example, use the disk c1t0d0, which is connected to the same group as the boot device, as the system disk after the mirroring cancellation.
The SDX disk name is Root2 and the group name is Group1.
Information
If you want to use the current boot device as the system disk after the mirroring cancellation, start the system from the other disk and then cancel the system disk mirroring according to this procedure. For how to specify the boot device when booting the system, see step 9) in "7.1.1.2 Procedure for System Disk Mirroring Settings (ZFS Boot Environment: GDS Mirroring)."
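For example, the system can be booted temporarily from an explicitly specified device at the OpenBoot prompt (the device path below is only an example; check the actual path as described in the referenced step):
ok boot /pci@1f,4000/scsi@3/disk@1,0:a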
2) Disconnect the disks which are used as a system disk and a disk for dump after the mirroring cancellation from the group.
2-1) Disconnect the disk which is used as a system disk after the mirroring cancellation from a group.
# sdxdisk -D -c System -g Group1 -d Root2 (*1) (*2) (*3)
(*1) Root class name
(*2) Group name (checked in step 1) of the group to which the system disk is connected
(*3) SDX disk name (checked in step 1) of the disk to be used as the system disk after the mirroring cancellation
2-2) Disconnect the disk which is used as a disk for dump after the mirroring cancellation from a group.
# sdxdisk -D -c System -g Group2 -d Root4 (*1) (*2) (*3)
(*1) Root class name
(*2) Group name of the group to which the disk for dump is connected
(*3) SDX disk name of the disk to be used as the disk for dump after the mirroring cancellation
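To confirm the disconnection, you can display the disk information of the root class again; the disks disconnected in steps 2-1) and 2-2) (Root2 and Root4 in this example) should no longer be shown as connected to Group1 or Group2:
# sdxinfo -D -c System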
3) Delete the disks, which were disconnected from a group in step 2), from the root class.
3-1) Delete the disk to be used as a system disk after the mirroring cancellation from the root class.
# sdxdisk -R -c System -d Root2 (*1) (*2)
(*1) Root class name
(*2) SDX disk name of disk disconnected from the group in step 2-1)
3-2) Delete the disk to be used as a disk for dump after the mirroring cancellation from the root class.
# sdxdisk -R -c System -d Root4 (*1) (*2)
(*1) Root class name
(*2) SDX disk name of a disk disconnected from the group in step 2-2)
4) Initialize the disks deleted from the root class in step 3).
4-1) Initialize the disk to be used as a system disk after the mirroring cancellation.
# dd if=/dev/zero of=/dev/rdsk/c1t0d0s0 bs=1024k (*1) (*2)
(*1) Disk deleted from the root class in step 3-1)
(*2) Physical slice number (checked in step 1) of a volume to configure the ZFS root pool
# dd if=/dev/zero of=/dev/rdsk/c1t0d0s1 bs=1024k (*3) (*4)
(*3) Disk deleted from the root class in step 3-1)
(*4) Physical slice number of a volume to configure the ZFS storage pool for data
# format -e
Select the disk (c1t0d0) in (*1). If the enquiry: "Disk not labeled. Label it now?" is displayed, enter "y".
4-2) Initialize a disk to be used as a disk for dump after the mirroring cancellation.
# dd if=/dev/zero of=/dev/rdsk/c1t0d1s0 bs=1024k (*1) (*2)
(*1) Disk deleted from the root class in step 3-2)
(*2) Physical slice number of a volume to configure the ZFS storage pool for dump
# format -e
Select the disk (c1t0d1) in (*1). If the enquiry: "Disk not labeled. Label it now?" is displayed, enter "y".
5) Create slices on the disks initialized in step 4).
Use an OS command such as format(1M) to create the same slice configuration as on the original system disk and the disk for dump.
Information
If the slice configuration was not recorded (on paper or elsewhere) when mirroring was set up, make the slice configuration the same as that of the disks still connected to the groups (Root1 and Root3 in this example). Start the system from the OS installation CD and check the slice configuration of a disk connected to a group by using the format(1M) command or the prtvtoc(1M) command.
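If the disks use SMI (VTOC) labels and both disks are accessible from the running system, one common way to copy the slice layout from a disk still connected to the group to the new disk is to combine prtvtoc(1M) and fmthard(1M) (the device names below are the examples used in this section; check the source and target devices carefully before executing):
# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
Copy the slice configuration of the disk for dump in the same way.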
6) Attach the slices created in step 5) to the ZFS root pool and the ZFS storage pools.
After executing the zpool attach command, the resynchronization process of ZFS is performed. At this time, the OS message (SUNW-MSG-ID: ZFS-8000-QJ) may be output on the console, but this does not affect the system.
6-1) Attach a slice to the ZFS root pool.
# zpool attach rpool /dev/sfdsk/System/dsk/rpool c1t0d0s0 (*1) (*2) (*3) (*4)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name of volume to configure the ZFS root pool
(*4) Slice for the ZFS root pool created in step 5)
6-2) Attach a slice to the ZFS storage pool for data.
# zpool attach datapool /dev/sfdsk/System/dsk/datapool c1t0d0s1 (*1) (*2) (*3) (*4)
(*1) Pool name of the ZFS storage pool for data (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name of a volume to configure the ZFS storage pool for data
(*4) Slice for data created in step 5)
6-3) Attach a slice to the ZFS storage pool for dump.
# zpool attach dumppool /dev/sfdsk/System/dsk/dumppool c1t0d1s0 (*1) (*2) (*3) (*4)
(*1) Pool name of the ZFS storage pool for dump (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name of a volume to configure the ZFS storage pool for dump
(*4) Slice for dump created in step 5)
7) Check the statuses of the ZFS root pool and the ZFS storage pools.
The following explains how to confirm the status of the ZFS root pool.
Confirm the status of the ZFS storage pools with the same method as well.
# zpool status rpool        (*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME                             STATE     READ WRITE CKSUM
        rpool                            ONLINE       0     0     0
          mirror                         ONLINE       0     0     0
            /dev/sfdsk/System/dsk/rpool  ONLINE       0     0     0   (*2)
            c1t0d0s0                     ONLINE       0     0     0   (*3)
Confirm that the following are displayed:
"ONLINE" is displayed in the state field.
*During the resynchronization process, "DEGRADED" may be displayed in the state field. If "ONLINE" is displayed in the state field after completing the resynchronization process, there is no problem.
"resilvered" or "resilver completed" is displayed in the scrub or scan field.
*During the resynchronization process, "resilver in progress" is displayed in the scrub or scan field.
Volume (*2) and slice (*3) attached in step 6) are displayed in the config field.
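The ZFS storage pools for data and dump can be checked in the same way (the pool names below are the examples used in this section):
# zpool status datapool dumppool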
8) Install the boot block to the slice attached to the ZFS root pool in step 6-1).
For the following environments, do not perform this procedure:
For the environment where Solaris 10 is used and the kernel patch 144500-19 or later is applied
For the environment where Solaris 11 11/11 or later is used
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/rdsk/c1t0d0s0     (*1) Slice attached to the ZFS root pool in step 6-1)
9) Set boot-device property in OpenBoot.
The procedure for setting the boot-device property in OpenBoot differs among the following four cases:
The disk connected to a 12Gbps SAS card in SAN Boot environment
The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers
SAN Boot environment
The other cases
[The disk connected to a 12Gbps SAS card in SAN Boot environment]
9a-1) Check the partition of the slice attached to the ZFS root pool in step 6-1).
# ls -l /dev/dsk | grep c0t600000E00D28000000280CC600000000d0s0                (*1)
lrwxrwxrwx 1 root root 64 Feb  2 13:57 c0t600000E00D28000000280CC600000000d0s0 ->
../../devices/scsi_vhci/disk@g600000e00d28000000280cc600000000:a               (*2)
(*1) Slice attached to the ZFS root pool in step 6-1)
(*2) Partition of slice
9a-2) Check the boot disk information.
# prtconf -v /dev/rdsk/c0t600000E00D28000000280CC600000000d0s0
disk, instance #0
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0da0cc620,0 lsc#3 (online)   (*1) (*2)
    ...
(*1) Disk node name (disk@w500000e0da0cc620,0 in this example)
(*2) SAS Address (500000e0da0cc620 in this example)
9a-3) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
9a-4) Check the device path of SAS HBA that is connected to the disk.
ok probe-scsi-all
/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0 (*1)
FCode Version 1.00.65, MPT Version 2.05, Firmware Version 4.00.00.00
Target a
  Unit 0   Disk   TOSHIBA MBF2300RC      3706    585937500 Blocks, 300 GB
  SASDeviceName 50000393d82956d4  SASAddress 500000e0d0460306  PhyNum 0
Target b
  Unit 0   Disk   TOSHIBA MBF2300RC      3706    585937500 Blocks, 300 GB
  SASDeviceName 50000393d828bbfc  SASAddress 500000e0da0cc620  PhyNum b   (*2)
...
ok
(*1) Device path
(*2) SAS Address checked in *2 in step 9a-2)
9a-5) Set the boot-device property.
ok setenv boot-device /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0da0cc620,0:a (*3)
(*3) The boot device is the concatenation of the device path (*1) checked in step 9a-4), the disk node name (*1) checked in step 9a-2), and the partition of the slice (*2) checked in step 9a-1)
9a-6) Set the multipath-boot? property.
ok setenv multipath-boot? false
9a-7) Start the system.
ok boot
[The disk of Expansion File Unit connected to a 6Gbps SAS card, or the internal disks of SPARC M12/M10 and SPARC T4-4/T4-2/T4-1/T3-4/T3-2/T3-1 servers]
9b-1) Check the partition of the slice attached to the ZFS root pool in step 6-1).
[Solaris 11.3 or earlier]
# ls -l /dev/dsk | grep c0t5000C5001D4809FFd0s0                 (*1)
lrwxrwxrwx 1 root root 48 Apr 25 13:46 c0t5000C5001D4809FFd0s0 ->
../../devices/scsi_vhci/disk@g5000c5001d4809ff:a                (*2)
(*1) Slices attached to the ZFS root pool in step 6-1)
(*2) Partition of slice
[Solaris 11.4 or later]
# ls -l /dev/dsk | grep c0t50000397683346A9d0s0                 (*1)
lrwxrwxrwx 1 root root 48 Jul 27 19:23 c0t50000397683346A9d0s0 ->
../../devices/scsi_vhci/disk@g50000397683346a9:a                (*2)
(*1) Slices attached to the ZFS root pool in step 6-1)
(*2) Partition of slice
9b-2) Check the information of a disk used as a system disk after the mirroring cancellation.
[Solaris 11.3 or earlier]
Check the obp-path parameter.
# prtconf -v /dev/rdsk/c0t5000C5001D4809FFd0s0
disk, instance #6
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        ...
        name='obp-path' type=string items=1
            value='/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000c5001d4809fd,0'   (*3)
        ...
    Device Minor Nodes:
        ...
(*3) obp-path parameter
[Solaris 11.4 or later]
Check the ddi-boot-path parameter.
# prtconf -v /dev/rdsk/c0t50000397683346A9d0s2
disk, instance ...
    Device Hold:
    ...
    Driver properties:
    ...
    Hardware properties:
    ...
    Paths from multipath bus adapters:
        Path 1: /pci@8000/pci@4/pci@0/pci@0/scsi@0/iport@f/disk@w50000397683346aa,0 mpt_sas#2 (online)
        name='ddi-boot-path' type=string items=1
            value='/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0'   (*3)
        ...
    Device Minor Nodes:
        ...
(*3) ddi-boot-path parameter
9b-3) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
9b-4) Set boot-device property.
[Solaris 11.3 or earlier]
ok setenv boot-device /pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000c5001d4809fd,0:a (*4)
(*4) The device name in the obp-path parameter checked in step 9b-2) and the partition checked in step 9b-1) are concatenated
[Solaris 11.4 or later]
ok setenv boot-device /pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0:a (*4)
(*4) The device name in the ddi-boot-path parameter checked in step 9b-2) and the partition checked in step 9b-1) are concatenated
9b-5) Start the system.
ok boot
9b-6) Set boot-device property of OpenBoot.
[Solaris 11.3 or earlier]
# eeprom boot-device=/pci@400/pci@2/pci@0/pci@e/scsi@0/disk@w5000c5001d4809fd,0:a (*6)
(*6) Value set in step 9b-4)
[Solaris 11.4 or later]
# eeprom boot-device=/pci@8000/pci@4/pci@0/pci@0/scsi@0/disk@w50000397683346aa,0:a (*6)
(*6) Value set in step 9b-4)
[SAN Boot environment]
9c-1) Among the device paths checked in step 2) in "7.1.1.2 Procedure for System Disk Mirroring Settings (ZFS Boot Environment: GDS Mirroring)," set the device path of the slice attached to the ZFS root pool in step 6-1) of this section to boot-device property.
For how to set the boot-device property, see SAN Boot manual.
[The other cases]
9d-1) Check the device path of the slice attached to the ZFS root pool in step 6-1).
# ls -l /dev/dsk | grep c1t0d0s0                   (*1)
lrwxrwxrwx 1 root root 63 Aug 20 12:00 c1t0d0s0 ->
../../devices/pci@1f,4000/scsi@3/sd@1,0:a          (*2)
(*1) Slices attached to the ZFS root pool in step 6-1)
9d-2) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
9d-3) Check the device name in the OpenBoot environment and set boot-device property by a method corresponding to the confirmation result.
ok show-devs /pci@1f,4000/scsi@3    (*3)
...
/pci@1f,4000/scsi@3/disk
...
(*3) The path specified here is the path (*2) displayed in step 9d-1) with its last element (/sd@1,0:a in this example) removed
Set boot-device property corresponding to the above confirmation result.
When the device name "disk" is not output to the last element path displayed in show-devs:
ok setenv boot-device /pci@1f,4000/scsi@3/sd@1,0:a (*4)
(*4) Displayed path (*2) in step 9d-1)
When the device name "disk" is output to the last element path displayed in show-devs:
ok setenv boot-device /pci@1f,4000/scsi@3/disk@1,0:a (*5)
(*5) The path specified here is the path (*2) displayed in step 9d-1) with the part before @ in its last element (sd in this example) replaced with disk
9d-4) Start the system.
ok boot
9d-5) Set the boot-device property of OpenBoot.
# eeprom boot-device=/pci@1f,4000/scsi@3/sd@1,0:a (*6)
(*6) Value set in step 9d-3)
10) Detach the volumes from the ZFS root pool and the ZFS storage pools.
10-1) Detach the volume from the ZFS root pool.
# zpool detach rpool /dev/sfdsk/System/dsk/rpool (*1) (*2) (*3)
(*1) ZFS root pool name (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name for the ZFS root pool
10-2) Detach the volume from the ZFS storage pool for data.
# zpool detach datapool /dev/sfdsk/System/dsk/datapool (*1) (*2) (*3)
(*1) Pool name of the ZFS storage pool for data (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name for data
10-3) Detach the volume from the ZFS storage pool for dump.
# zpool detach dumppool /dev/sfdsk/System/dsk/dumppool (*1) (*2) (*3)
(*1) Pool name of the ZFS storage pool for dump (checked by the zpool status command.)
(*2) Root class name
(*3) Volume name for dump
11) Confirm that mirroring has been cancelled.
The following explains how to confirm the status of the ZFS root pool.
Confirm the status of the ZFS storage pools with the same method as well.
# zpool status rpool        (*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0   (*2)
Check that each status is properly displayed as below:
ONLINE is displayed in the state field.
Only a slice attached in step 6) is displayed in the config field.
12) Stop the volumes.
# sdxvolume -F -c System -v rpool,datapool,dumppool     (*1) (*2)
(*1) Root class name
(*2) Volume names of the volumes for the ZFS root pool, data, and dump
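To confirm that the volumes have stopped, you can display the information of the root class again and check that the volumes rpool, datapool, and dumppool are no longer active:
# sdxinfo -c System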
13) Delete the volumes.
13-1) Delete a volume for the ZFS root pool.
# sdxvolume -R -c System -v rpool (*1) (*2)
(*1) Root class name
(*2) Volume name for the ZFS root pool
13-2) Delete a volume for data.
# sdxvolume -R -c System -v datapool (*1) (*2)
(*1) Root class name
(*2) Volume name for data
13-3) Delete a volume for dump.
# sdxvolume -R -c System -v dumppool (*1) (*2)
(*1) Root class name
(*2) Volume name for dump
14) Delete the groups.
14-1) Delete the group to which the system disk was connected.
# sdxgroup -R -c System -g Group1 (*1) (*2)
(*1) Root class name
(*2) Group name of the group to which the system disk was connected
14-2) Delete the group to which the disk for dump was connected.
# sdxgroup -R -c System -g Group2 (*1) (*2)
(*1) Root class name
(*2) Group name of the group to which the disk for dump was connected
15) Delete the disks from the root class.
15-1) Delete the disk that was connected to the group deleted in step 14-1) from the root class.
# sdxdisk -R -c System -d Root1 (*1) (*2)
(*1) Root class name
(*2) SDX disk name of the disk that was connected to the group deleted in step 14-1)
15-2) Delete the disk that was connected to the group deleted in step 14-2) from the root class.
# sdxdisk -R -c System -d Root3 (*1) (*2)
(*1) Root class name
(*2) SDX disk name of the disk that was connected to the group deleted in step 14-2)
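At this point, you can confirm that no disks, groups, or volumes of the root class remain by displaying the GDS configuration; if no objects are displayed for the class, the GDS configuration used for the system disk mirroring has been completely removed:
# sdxinfo -c System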
16) Delete the tuning parameter of GDS.
16-1) Delete the tuning parameter.
Delete "SDX_BOOT_PROPERTY=off" from the /etc/opt/FJSVsdx/sdx.cf file.
# vi /etc/opt/FJSVsdx/sdx.cf
...
SDX_BOOT_PROPERTY=off <-Delete this line.
16-2) Check that the tuning parameter is deleted properly.
# grep SDX_BOOT_PROPERTY /etc/opt/FJSVsdx/sdx.cf
#
16-3) Restart the system.
# shutdown -y -g0 -i6