This section describes system disk mirroring settings (GDS mirroring) in a ZFS boot environment with the following configurations.
The following configuration is an example in which the slice for the ZFS root pool and the slice for data exist on the system disk, and the disk for dump exists separately from the system disk.
Even if the slice for data does not exist on the system disk, or the disk for dump does not exist, you can still set up system disk mirroring with this method.
GDS: Global Disk Services
In the subsequent procedures, replace the physical slice names, volume names, pool names, and so on with the names actually used on your system.
System disk mirroring settings of GDS mirroring are performed according to the following procedure.
A reboot is required in step 9) of [5] in the above figure. The other settings can be performed without stopping the services and the application programs.
The resynchronization process by ZFS is performed in step 6) of [2] in the above figure, and the synchronization copying by GDS is performed in step 12) of [6] in the above figure. These operations can also be performed without stopping the services and the application programs.
However, if higher safety is required, stop the services and the application programs and back up the system disks before starting the settings.
The time required for these processes depends on the performance of the servers and disks, the size of the slices and file systems, and so on, and varies from system to system.
This section describes how to set up system disk mirroring when the slice for the ZFS root pool and the slice for data exist on the system disk, and the disk for dump exists separately from the system disk, as shown in the figure in "7.1.1 System Disk Mirroring Settings in a ZFS Boot Environment (GDS Mirroring)."
If the slice for data does not exist on the system disk, or if the disk for dump does not exist, you do not have to perform the mirroring settings for the non-existent slice or disk.
1) Set the tuning parameter of GDS.
Apply this setting only when the system disk is connected to a 12Gbps SAS card in SAN Boot environment.
1-1) Set the tuning parameter.
Add "SDX_BOOT_PROPERTY=off" to the /etc/opt/FJSVsdx/sdx.cf file.
# vi /etc/opt/FJSVsdx/sdx.cf
...
SDX_BOOT_PROPERTY=off
1-2) Check that the tuning parameter is set properly.
# grep SDX_BOOT_PROPERTY /etc/opt/FJSVsdx/sdx.cf
SDX_BOOT_PROPERTY=off
2) Create a slice in the mirror disk.
Use an OS command such as format(1M) to create the same slice configuration on the mirror disk as on the system disk.
Record the slice configuration on paper or in a file. It will be necessary when unmirroring the system disk.
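For reference, the following is a minimal sketch of one way to copy the slice configuration with the prtvtoc(1M) and fmthard(1M) commands, assuming the example device names used in this section (c0t0d0 as the system disk and c1t0d0 as its mirror disk). The saved file also serves as the record of the slice configuration. Apply the same procedure to the mirror disk of the disk for dump (c1t0d1 in this example).
# prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/vtoc_c0t0d0 (*1)
# fmthard -s /var/tmp/vtoc_c0t0d0 /dev/rdsk/c1t0d0s2 (*2)
(*1) Save the slice configuration of the system disk (c0t0d0) to a file
(*2) Apply the saved slice configuration to the mirror disk (c1t0d0)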
3) Check the path names of the boot device for SAN Boot environment.
Check the device path name of the slice where the operating system has been installed (the slice to configure the ZFS root pool) and the device path name of the slice created in step 2).
Record the checked device path names on paper or in a file. They will be necessary when starting the system or unmirroring the system disk.
For details on how to check the device path names, see the SAN Boot manual.
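For reference, the device path name that corresponds to a slice can also be displayed with the ls(1) command, as in the following sketch (c0t0d0s0 is the example slice name used in this section; follow the SAN Boot manual for the exact procedure in your environment).
# ls -l /dev/dsk/c0t0d0s0 (*1)
lrwxrwxrwx 1 root root ... /dev/dsk/c0t0d0s0 -> ../../devices/... (*2)
(*1) Slice where the operating system has been installed
(*2) Device path name of the slice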
4) Register the mirror disk to a root class.
4-1) Register the mirror disk of a system disk to the root class.
# sdxdisk -M -c System -a type=root -d c1t0d0=Root2:keep (*1) (*2) (*3)
(*1) Root class name
(*2) Mirror disk of system disk
(*3) SDX disk name of mirror disk of system disk
4-2) Register the mirror disk of a disk for dump to the root class.
# sdxdisk -M -c System -a type=root -d c1t0d1=Root4:keep (*1) (*2) (*3)
(*1) Root class name
(*2) Mirror disk of disk for dump
(*3) SDX disk name of mirror disk of disk for dump
5) Connect the mirror disk to a group.
5-1) Connect the mirror disk of a system disk to a group.
# sdxdisk -C -c System -g Group1 -d Root2 -v 0=rpool:on,1=datapool:on (*1) (*2) (*3) (*4) (*5)
(*1) Root class name
(*2) Group name of group of system disk
(*3) SDX disk name of mirror disk of system disk
(*4) Specify, in the num=volume:jrm format, the setting for the slice (c1t0d0s0 in this example) that was created in step 2) and is to be the mirror destination of the slice for the ZFS root pool. Specify the slice number for num, the volume name for volume, and the JRM mode (normally on) for jrm.
(*5) Specify, in the num=volume:jrm format, the setting for the slice (c1t0d0s1 in this example) that was created in step 2) and is to be the mirror destination of the slice for data on the system disk. Specify the slice number for num, the volume name for volume, and the JRM mode (normally on) for jrm.
5-2) Connect the mirror disk of a disk for dump to a group.
# sdxdisk -C -c System -g Group2 -d Root4 -v 0=dumppool:on (*1) (*2) (*3) (*4)
(*1) Root class name
(*2) Group name of group for dump
(*3) SDX disk name of mirror disk of disk for dump
(*4) Specify, in the num=volume:jrm format, the setting for the slice (c1t0d1s0 in this example) that was created in step 2) and is to be the mirror destination of the slice of the disk for dump. Specify the slice number for num, the volume name for volume, and the JRM mode (normally on) for jrm.
6) Attach volumes on the mirror disks to the ZFS root pool and the ZFS storage pools.
After executing the zpool attach command, the resynchronization process of ZFS is performed. At this time, the OS message (SUNW-MSG-ID: ZFS-8000-QJ) may be output to the console, but it does not affect the system.
6-1) Attach a volume to the ZFS root pool.
# zpool attach rpool c0t0d0s0 /dev/sfdsk/System/dsk/rpool (*1) (*2) (*3) (*4)
(*1) ZFS root pool name (checked by the zpool status command)
(*2) Slice where the OS has been installed (slice to configure the ZFS root pool)
(*3) Root class name
(*4) Volume name of the volume for the ZFS root pool created in step 5-1)
6-2) Attach a volume to the ZFS storage pool for data.
# zpool attach datapool c0t0d0s1 /dev/sfdsk/System/dsk/datapool (*1) (*2) (*3) (*4)
(*1) Pool name of the ZFS storage pool for data (checked by the zpool status command)
(*2) Slice for data on the system disk (slice to configure the ZFS storage pool)
(*3) Root class name
(*4) Volume name of a volume for data created in step 5-1)
6-3) Attach a volume to the ZFS storage pool for dump.
# zpool attach dumppool c0t0d1s0 /dev/sfdsk/System/dsk/dumppool (*1) (*2) (*3) (*4)
(*1) Pool name of the ZFS storage pool for dump (checked by the zpool status command)
(*2) Slice of disk for dump
(*3) Root class name
(*4) Volume name of the volume for dump created in step 5-2)
7) Check the status of the ZFS root pool and the ZFS storage pools.
The following explains how to check the status of the ZFS root pool.
Confirm the status of the ZFS storage pools with the same method as well.
# zpool status rpool (*1)
(*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:
        NAME                             STATE     READ WRITE CKSUM
        rpool                            ONLINE       0     0     0
          mirror                         ONLINE       0     0     0
            c0t0d0s0                     ONLINE       0     0     0  (*2)
            /dev/sfdsk/System/dsk/rpool  ONLINE       0     0     0  (*3)
Confirm that the following are displayed:
ONLINE is displayed in the state field.
*During the resynchronization process, "DEGRADED" may be displayed in the state field. If "ONLINE" is displayed in the state field after completing the resynchronization process, there is no problem.
"resilvered" or "resilver completed" is displayed in the scrub or scan field.
*During the resynchronization process, "resilver in progress" is displayed in the scrub or scan field.
*When the system is rebooted during the resynchronization process, the process is stopped and "none requested" is displayed in the scrub or scan field. In this case, re-execute the resynchronization process by using the zpool scrub command (see the example after this list).
The slice (*2) where the OS has been installed and the volume (*3) attached in step 6) are displayed in the config field.
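For reference, the following is an example of re-executing the resynchronization process with the zpool scrub command when it has been stopped by a reboot (rpool is the ZFS root pool name used in this section).
# zpool scrub rpool (*1)
(*1) ZFS root pool name (checked by the zpool status command)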
8) Install the boot block to a volume on the mirror disk.
For the following environments, do not perform this procedure:
For the environment where Solaris 10 is used and the kernel patch 144500-19 or later is applied
For the environment where Solaris 11 11/11 or later is used
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
/dev/sfdsk/System/rdsk/rpool (*1) (*2)
(*1) Root class name
(*2) Volume name of the volume attached to the ZFS root pool in step 6-1)
9) Restart the system.
The procedure to reboot from a boot disk is different for the following two cases:
The disk connected to a 12Gbps SAS card in SAN Boot environment
The other cases
[The disk connected to a 12Gbps SAS card in SAN Boot environment]
9a-1) Check the partition of the slice to configure a volume for the ZFS root pool.
# ls -l /dev/dsk | grep c0t600000E00D0000000001060300040000d0s0 (*1)
lrwxrwxrwx 1 root root 64 Feb 2 13:57 c0t600000E00D0000000001060300040000d0s0 -> ../../devices/scsi_vhci/disk@g600000e00d28000000280cc600000000:a (*2)
(*1) Slice (created in step 2) to configure a volume for the ZFS root pool.
(*2) Partition of the slice
9a-2) Check the boot disk information.
# prtconf -v /dev/rdsk/c0t600000E00D0000000001060300040000d0s0
disk, instance #0
    Driver properties:
        ...
    Hardware properties:
        ...
    Paths from multipath bus adapters:
        Path 33: /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/iport@f0/disk@w500000e0d0460306,0 lsc#3(online) (*1) (*2)
    ...
(*1) Disk node name (disk@w500000e0d0460306,0 in this example)
(*2) SAS Address (500000e0d0460306 in this example)
9a-3) Enter the OpenBoot environment.
# shutdown -y -g0 -i0
9a-4) Check the device path of SAS HBA that is connected to the disk.
ok probe-scsi-all
/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0 (*1)
FCode Version 1.00.65, MPT Version 2.05, Firmware Version 4.00.00.00
Target a
  Unit 0   Disk   TOSHIBA MBF2300RC   3706   585937500 Blocks, 300 GB
  SASDeviceName 50000393d82956d4  SASAddress 500000e0d0460306  PhyNum 0 (*2)
Target b
  Unit 0   Disk   TOSHIBA MBF2300RC   3706   585937500 Blocks, 300 GB
  SASDeviceName 50000393d828bbfc  SASAddress 500000e0da0cc620  PhyNum b
...
ok
(*1) Device path
(*2) SAS Address checked in (*2) in step 9a-2)
9a-5) Start the system in the OpenBoot environment.
ok boot /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460306,0:a (*4)
(*4) Concatenated value of the device path (*1) checked in step 9a-4), the disk node name (*1) checked in step 9a-2), and the partition of the slice (*2) checked in step 9a-1)
[The other cases]
9b-1) Restart the system.
# shutdown -y -g0 -i6
10) Detach the original system disk and the original disk for dump from the ZFS root pool and the ZFS storage pools.
10-1) Detach the slice from the ZFS root pool.
# zpool detach rpool c0t0d0s0 (*1) (*2)
(*1) ZFS root pool name (checked by the zpool status command)
(*2) Slice where the OS has been installed
10-2) Detach the slice from the ZFS storage pool for data.
# zpool detach datapool c0t0d0s1 (*1) (*2)
(*1) Pool name of the ZFS storage pool for data (checked by the zpool status command)
(*2) Slice for data on the original system disk.
10-3) Detach the slice from the ZFS storage pool for dump.
# zpool detach dumppool c0t0d1s0 (*1) (*2)
(*1) Pool name of the ZFS storage pool for dump (checked by the zpool status command)
(*2) Slice of the original disk for dump
11) Register the original system disk and the original disk for dump to the root class.
11-1) Register the original system disk to the root class.
# sdxdisk -M -c System -d c0t0d0=Root1 (*1) (*2) (*3)
(*1) Root class name
(*2) Original system disk
(*3) SDX disk name of the original system disk
11-2) Register the original disk for dump to the root class.
# sdxdisk -M -c System -d c0t0d1=Root3 (*1) (*2) (*3)
(*1) Root class name
(*2) Original disk for dump
(*3) SDX disk name of original disk for dump
12) Connect the original system disk and the original disk for dump to the groups created in step 5).
12-1) Connect the original system disk to the group.
# sdxdisk -C -c System -g Group1 -d Root1 (*1) (*2) (*3)
(*1) Root class name
(*2) Group name of the group of the system disk created in step 5-1)
(*3) SDX disk name of original system disk
12-2) Connect the original disk for dump to the group.
# sdxdisk -C -c System -g Group2 -d Root3 (*1) (*2) (*3)
(*1) Root class name
(*2) Group name of the group of the disk for dump created in step 5-2)
(*3) SDX disk name of the original disk for dump
13) Confirm that the disks have been mirrored normally.
The following explains how to confirm the status of the ZFS root pool.
Confirm the status of the ZFS storage pools with the same method as well.
# zpool status rpool (*1)
(*1) ZFS root pool name
  pool: rpool
 state: ONLINE
  scan: resilvered ...
config:
        NAME                            STATE     READ WRITE CKSUM
        rpool                           ONLINE       0     0     0
          /dev/sfdsk/System/dsk/rpool   ONLINE       0     0     0  (*2)
Confirm that the following are displayed:
ONLINE is displayed in the state field.
Only the volume (*2) attached in step 6) is displayed in the config field.
# sdxinfo -S -c System (*3)
(*3) Root class name
OBJ    CLASS   GROUP   DISK    VOLUME    STATUS
------ ------- ------- ------- --------- --------
slice  System  Group1  Root1   rpool     ACTIVE
slice  System  Group1  Root2   rpool     ACTIVE
slice  System  Group1  Root1   datapool  ACTIVE
slice  System  Group1  Root2   datapool  ACTIVE
slice  System  Group2  Root3   dumppool  ACTIVE
slice  System  Group2  Root4   dumppool  ACTIVE
Confirm that the following are displayed:
The information of the original system disk (Root1 in this example) and its mirror disk (Root2 in this example) is displayed.
The information of the original disk for dump (Root3 in this example) and its mirror disk (Root4 in this example) is displayed.
ACTIVE is displayed in STATUS.
*After step 12) is performed, the synchronization copying is in process. During the copying, COPY is displayed in the STATUS field for the slices on the original system disk (Root1 in this example) and on the original disk for dump (Root3 in this example).
14) Set the mirror disk in the OBP environment variables (boot-device and multipath-boot?).
Apply this setting only when the system disk is connected to a 12Gbps SAS card in SAN Boot environment.
14-1) Check the boot-device property of OBP of the mirror source disk.
# eeprom boot-device
boot-device=/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460300,0:a (*1)
(*1) Boot device of the mirror source disk
14-2) Set the boot device of the mirror destination disk in the boot-device property of OBP.
# eeprom boot-device="/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460300,0:a
(*1)
/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460306,0:a"
(*2)
(*1) Boot device checked in step 14-1)
(*2) Boot device checked in step 9a)
14-3) Check the multipath-boot? property of OBP.
# eeprom multipath-boot?
multipath-boot?=false
14-4) Set "true" for the multipath-boot? property of OBP.
# eeprom multipath-boot?=true
14-5) Check that "true" is set for the multipath-boot? property of OBP.
# eeprom
...
boot-device=/pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460300,0:a /pci@8100/pci@4/pci@0/pci@0/LSI,sas@0/disk@w500000e0d0460306,0:a
...
multipath-boot?=true