This sub-section describes how to back up data from, and restore data to, mirror volumes in the primary domain through a backup server in another domain.
GDS: Global Disk Services
GDS Snapshot: Global Disk Services Snapshot
Information
In the primary domain, SynfinityDisk and SynfinityCluster are also available instead of GDS and PRIMECLUSTER.
Note
Physical Device Name
Different physical device names (such as c1t1d1) may be assigned to the identical physical disk in the primary domain and the backup server.
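As a rough cross-check that a device name on the backup server refers to the intended physical disk, the partition table reported on each host can be compared. This is only a sketch; the s2 whole-disk slice and the use of prtvtoc(1M) are assumptions and do not replace verifying the physical connections.
On Node1 or Node2:
# prtvtoc /dev/rdsk/c1t1d1s2 > /tmp/vtoc.primary
On Node3 (substituting the device name seen there):
# prtvtoc /dev/rdsk/c1t1d1s2 > /tmp/vtoc.backup
Transfer one of the files to the other host and compare them with diff(1). Matching geometry and partitioning is supporting evidence, not proof, that the two names refer to the same physical disk.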
Data in a slice temporarily detached from a volume can be backed up to tape during the service operation.
To secure consistency of data in a detached slice, the services must be stopped temporarily when detaching the slice.
Information
Consistency of Snapshot Data
When detaching a slice while the services are operating, data consistency must be secured through a method specific to the software that manages the volume data, such as the file system or database system. For details, see "A.2.29 Ensuring Consistency of Snapshot Data."
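As one illustration only: if Volume1 holds a ufs file system and a brief write lock is acceptable instead of a full stop of the services, the file system could be flushed and write-locked around the slice detach performed in step 2-2). The mount point /mnt/app below is hypothetical, and databases or other software require their own quiesce method.
# lockfs -w /mnt/app
# sdxslice -M -c Class1 -d Disk1 -v Volume1 -a jrm=off,mode=ro
# lockfs -u /mnt/app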
If volume data is damaged, it can be restored from tape.
Data can be restored while service is stopped and the application volume is not in use.
Information
In this configuration access cannot be gained from backup server Node3 to disk c2t1d1. Therefore, after data is restored from tape back to c1t1d1 while c2t1d1 is detached temporarily, resynchronization copying from c1t1d1 to c2t1d1 must be performed by reattaching c2t1d1. When access can be gained from Node3 to both c1t1d1 and c2t1d1, it is not required that c2t1d1 be detached temporarily since data can be restored from tape back to both c1t1d1 and c2t1d1. For details on this restore method, see "6.8.1 Backing Up and Restoring a Logical Volume with No Replication."
Note
Automatic Resource Registration
If the backup server resides in a cluster domain (called a backup domain), disks that are registered as resources in the primary domain, or that are to be registered with a shadow class in the backup domain, must not be included in the automatic resource registration in the backup domain. Before executing the automatic resource registration in the backup domain, describe those disks in the Excluded Device List. For details on the automatic resource registration, see "PRIMECLUSTER Installation and Administration Guide."
1) Creating an application volume
Create a mirror volume used for the services on disks c1t1d1 and c2t1d1. The following settings are necessary on Node1 or Node2 in the primary domain.
1-1) Registering disks
Register disks c1t1d1 and c2t1d1 with shared class Class1 that is shared on Node1 and Node2, and name them Disk1 and Disk2 respectively.
# sdxdisk -M -c Class1 -a type=shared,scope=Node1:Node2 -d c1t1d1=Disk1,c2t1d1=Disk2
1-2) Creating a mirror group
Connect Disk1 and Disk2 to mirror group Group1.
# sdxdisk -C -c Class1 -g Group1 -d Disk1,Disk2
1-3) Creating a mirror volume
Create mirror volume Volume1 in mirror group Group1.
# sdxvolume -M -c Class1 -g Group1 -v Volume1 -s 1048576
2) Detaching the backup target slice
Temporarily detach the slice on Disk1 that is the backup target, among slices in application volume Volume1. The following procedure must be performed on Node1 or Node2 in the primary domain.
Information
The following example secures data consistency by stopping the services when the slice is detached. Steps 2-1) and 2-3) are not required if the software that manages the volume data, such as the file system or database system, provides functionality for ensuring or repairing the consistency of a detached slice. In that case, secure data consistency with the method specific to that software instead. For details, see "A.2.29 Ensuring Consistency of Snapshot Data."
2-1) Stopping the services
To secure consistency of data in a detached slice, exit all applications accessing application volume Volume1 on Node1 and Node2.
When Volume1 is used as a file system, it should be unmounted.
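For example, assuming Volume1 is mounted on the hypothetical mount point /mnt/app, any remaining users of the file system can be checked for and the file system unmounted on each node as follows.
# fuser -c /mnt/app
# umount /mnt/app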
2-2) Detaching the slice
Temporarily detach the slice on disk Disk1 from Volume1. To write-lock the detached slice, set the access mode of the slice to ro (read only).
# sdxslice -M -c Class1 -d Disk1 -v Volume1 -a jrm=off,mode=ro
Note
Just Resynchronization Mode for Slice
On backup server Node3, data may be written from Node3 into Disk1 when data in Disk1 is backed up to tape. GDS in the primary domain cannot recognize the write occurrence from Node3. Consequently, if the JRM mode of the detached slice is "on", the portions updated from Node3 may not be involved in resynchronization copying performed when the slice is reattached. If this happens, synchronization of Volume1 is no longer ensured. For this reason, the JRM mode of a detached slice must be set to off in advance.
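To double-check the detached slice afterwards, the long-format slice listing can be displayed on Node1 or Node2. This is only a verification sketch; it assumes the -e long option of sdxinfo behaves as in recent GDS versions, in which case the JRM field of slice Volume1.Disk1 should show off and the MODE field ro.
# sdxinfo -S -e long -c Class1 -o Volume1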
2-3) Resuming the services
When the file system was unmounted in step 2-1), mount it again.
Resume the application stopped in step 2-1).
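For example, if Volume1 is used as a ufs file system mounted on the hypothetical mount point /mnt/app, it would be remounted before restarting the applications as follows.
# mount -F ufs /dev/sfdsk/Class1/dsk/Volume1 /mnt/app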
3) Viewing the configuration of the application volume
On Node1 or Node2 in the primary domain, view the configuration of application volume Volume1 that is the backup target.
Check the underlined parts.
# sdxinfo -c Class1
OBJ    NAME    TYPE     SCOPE       SPARE
------ ------- -------- ----------- -----
class  Class1  shared   Node1:Node2     0

OBJ    NAME    TYPE   CLASS   GROUP   DEVNAM  DEVBLKS  DEVCONNECT       STATUS
------ ------- ------ ------- ------- ------- -------- ---------------- -------
disk   Disk1   mirror Class1  Group1  c1t1d1   8380800 Node1:Node2      ENABLE
disk   Disk2   mirror Class1  Group1  c2t1d1   8380800 Node1:Node2      ENABLE

OBJ    NAME    CLASS   DISKS               BLKS     FREEBLKS SPARE
------ ------- ------- ------------------- -------- -------- -----
group  Group1  Class1  Disk1:Disk2          8290304  7176192     0

OBJ    NAME    CLASS   GROUP   SKIP JRM 1STBLK  LASTBLK  BLOCKS   STATUS
------ ------- ------- ------- ---- --- ------- -------- -------- --------
volume *       Class1  Group1  *    *         0    65535    65536 PRIVATE
volume Volume1 Class1  Group1  off  on    65536  1114111  1048576 ACTIVE
volume *       Class1  Group1  *    *   1114112  8290303  7176192 FREE

OBJ    CLASS   GROUP   DISK    VOLUME  STATUS
------ ------- ------- ------- ------- --------
slice  Class1  Group1  Disk1   Volume1 TEMP
slice  Class1  Group1  Disk2   Volume1 ACTIVE
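The two values needed in step 4), the first block number (1STBLK) and size in blocks (BLOCKS) of Volume1, can also be picked out of this output with a one-liner such as the following; the awk field positions are an assumption based on the volume table layout shown above.
# sdxinfo -c Class1 | awk '$1 == "volume" && $2 == "Volume1" { print "1STBLK=" $7, "BLOCKS=" $9 }'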
4) Creating a shadow volume for backup
Create a volume for backup (shadow volume) on disk c1t1d1 on backup server Node3. The following settings are necessary on Node3.
Note
Application volume data may be damaged if data is written into a shadow volume in incorrect configuration. Be sure to confirm that the shadow volume configuration is correct in step 4-4).
4-1) Registering a shadow disk
Register disk c1t1d1 with shadow class Class2, and name it Disk1.
# sdxshadowdisk -M -c Class2 -d c1t1d1=Disk1
Point
The disk name must correspond to the disk name assigned to disk c1t1d1 in step 1-1). The disk names assigned in 1-1) can be viewed in the NAME field for disk information displayed with the sdxinfo command in step 3).
The class can be assigned any name.
4-2) Creating a shadow group
Connect shadow disk Disk1 to mirror type shadow group Group1.
# sdxshadowdisk -C -c Class2 -g Group1 -d Disk1
4-3) Creating a shadow volume
Create shadow volume Volume1 in shadow group Group1.
# sdxshadowvolume -M -c Class2 -g Group1 -v Volume1 -s 1048576
Point
The volume must be created in the size corresponding to the volume size in step 1-3). The size of a volume created in step 1-3) can be viewed in the BLOCKS field for volume information displayed with the sdxinfo command in step 3).
If there are multiple volumes, the corresponding shadow volumes must be created in the order of ascending values (first block numbers) in the 1STBLK field for volume information displayed with the sdxinfo command in step 3) (see the sketch after this note).
The volume can be assigned any name.
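When the group contains multiple volumes, a listing such as the following can help create the shadow volumes in ascending 1STBLK order. This is only a sketch; the awk and sort field positions assume the volume table layout shown in step 3).
# sdxinfo -c Class1 | awk '$1 == "volume" && $2 != "*"' | sort -n -k 7,7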
4-4) Viewing the configuration of the shadow volume
Using the sdxinfo command, confirm that the group configuration and the volume configuration are correct, based on the DISKS field of the group information, the 1STBLK and BLOCKS fields of the volume information, and so on.
Check the underlined parts.
# sdxinfo -c Class2
OBJ    NAME    TYPE     SCOPE       SPARE
------ ------- -------- ----------- -----
class  Class2  local    Node3           0

OBJ    NAME    TYPE   CLASS   GROUP   DEVNAM  DEVBLKS  DEVCONNECT       STATUS
------ ------- ------ ------- ------- ------- -------- ---------------- -------
disk   Disk1   mirror Class2  Group1  c1t1d1   8380800 Node3            ENABLE

OBJ    NAME    CLASS   DISKS               BLKS     FREEBLKS SPARE
------ ------- ------- ------------------- -------- -------- -----
group  Group1  Class2  Disk1                8290304  7176192     0

OBJ    NAME    CLASS   GROUP   SKIP JRM 1STBLK  LASTBLK  BLOCKS   STATUS
------ ------- ------- ------- ---- --- ------- -------- -------- --------
volume *       Class2  Group1  *    *         0    65535    65536 PRIVATE
volume Volume1 Class2  Group1  off  off   65536  1114111  1048576 ACTIVE
volume *       Class2  Group1  *    *   1114112  8290303  7176192 FREE

OBJ    CLASS   GROUP   DISK    VOLUME  STATUS
------ ------- ------- ------- ------- --------
slice  Class2  Group1  Disk1   Volume1 ACTIVE
5) Backing up to tape
On backup server Node3, back up data in the shadow volume to tape. The following shows examples of backing up data in shadow volume Volume1 to a tape medium of tape device /dev/rmt/0.
See
For details on the backup method, see the manuals of the file system being backed up and of the commands used.
Information
In a GFS Shared File System
Back up data using the method described in step 5a).
5a) When backing up data held in a raw device with the dd(1M) command
# dd if=/dev/sfdsk/Class2/rdsk/Volume1 of=/dev/rmt/0 bs=32768
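After the dd backup in step 5a), the backup can be roughly verified by comparing checksums of the shadow volume and the tape. This is only a sketch; it assumes the rewinding device /dev/rmt/0 was used so that the read starts at the beginning of the tape and that the tape holds exactly this one dd image. The two sum(1) outputs should match.
# dd if=/dev/sfdsk/Class2/rdsk/Volume1 bs=32768 | sum
# dd if=/dev/rmt/0 bs=32768 | sum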
5b) When backing up a ufs file system with the tar(1) command
5b-1) Activate shadow volume Volume1 in the read and write access mode (rw).
# sdxshadowvolume -F -c Class2 -v Volume1
# sdxshadowvolume -N -c Class2 -v Volume1 -e mode=rw
5b-2) Check and repair consistency of the ufs file system on shadow volume Volume1.
If the file system was unmounted when the slice was detached in step 2), this step can be skipped.
# fsck -F ufs -y /dev/sfdsk/Class2/rdsk/Volume1
5b-3) Mount the ufs file system on shadow volume Volume1 on /mnt1, a temporary mount point, in the read only mode.
# mkdir /mnt1
# mount -F ufs -o ro /dev/sfdsk/Class2/dsk/Volume1 /mnt1
5b-4) Back up data held in the file system to tape.
# cd /mnt1
# tar cvf /dev/rmt/0 .
5b-5) Unmount the file system mounted in step 5b-3).
# cd /
# umount /mnt1
5c) When backing up a ufs file system with the ufsdump(1M) command
5c-1) Activate shadow volume Volume1 in the read and write access mode (rw).
# sdxshadowvolume -F -c Class2 -v Volume1
# sdxshadowvolume -N -c Class2 -v Volume1 -e mode=rw
5c-2) Check and repair consistency of the ufs file system on shadow volume Volume1.
If the file system was unmounted when the slice was detached in step 2), this step can be skipped.
# fsck -F ufs -y /dev/sfdsk/Class2/rdsk/Volume1
5c-3) Back up data held in the file system to tape.
# ufsdump 0ucf /dev/rmt/0 /dev/sfdsk/Class2/rdsk/Volume1
6) Removing the shadow volume
After the backup process is complete, remove the shadow volume to prevent improper access to it. The following procedure must be performed on backup server Node3.
6-1) Stopping the shadow volume
Stop shadow volume Volume1.
# sdxshadowvolume -F -c Class2 -v Volume1
6-2) Removing the shadow volume
Remove shadow volume Volume1.
# sdxshadowvolume -R -c Class2 -v Volume1
6-3) Removing the shadow group
Remove shadow group Group1.
# sdxshadowgroup -R -c Class2 -g Group1
6-4) Removing the shadow disk
Remove shadow disk Disk1.
# sdxshadowdisk -R -c Class2 -d Disk1
7) Reattaching the backup target slice
Reattach the slice temporarily detached from the application volume back to it. The following procedure must be performed on Node1 or Node2 in the primary domain.
7-1) Reattaching the backup target slice
Reattach slice Volume1.Disk1 temporarily detached from application volume Volume1 in step 2-2).
# sdxslice -R -c Class1 -d Disk1 -v Volume1
After returning from the command, synchronization copying from the slice on Disk1 of volume Volume1 to the slice on Disk2 is executed.
7-2) Viewing the copy status
The status of synchronization copying can be viewed using the sdxinfo -S command. The copy destination slice is in the COPY status if copying is in progress and it will be in the ACTIVE status after the copy process ends normally (note, however, that it will be in the STOP status when Volume1 is in the STOP status).
# sdxinfo -S -c Class1 -o Volume1
OBJ CLASS GROUP DISK VOLUME STATUS
------ ------- ------- ------- ------- --------
slice Class1 Group1 Disk1 Volume1 ACTIVE
slice Class1 Group1 Disk2 Volume1 COPY
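When you want to wait until the resynchronization copying is complete before proceeding, a simple polling loop such as the following can be used; it returns once no slice of Volume1 reports the COPY status, after which the final status should still be checked as described above.
# while sdxinfo -S -c Class1 -o Volume1 | grep COPY > /dev/null; do sleep 60; done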
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), data can be restored from tape back to both c1t1d1 and c2t1d1 on Node3. Under these circumstances, the temporary slice detach described in step 10) should not be performed.
8) Stopping the services
Exit all applications using application volume Volume1 on Node1 and Node2 in the primary domain.
When Volume1 is used as a file system, it should be unmounted.
9) Stopping the application volume
To write-lock volume Volume1, inactivate Volume1 on Node1 and Node2 in the primary domain. Execute the following command on Node1 or Node2.
# sdxvolume -F -c Class1 -v Volume1 -e allnodes
10) Detaching any nonrelevant slice from the application volume
Among the slices in application volume Volume1, temporarily detach the slice on any disk other than restore target Disk1 (here, Disk2). Execute the following command on Node1 or Node2 in the primary domain.
# sdxslice -M -c Class1 -d Disk2 -v Volume1 -a jrm=off
Note
Just Resynchronization Mode for Slice
On backup server Node3, after data is restored from tape back to Disk1, the slice on Disk2 is supposed to be reattached to application volume Volume1 in the primary domain. At this point the entire volume data must be copied to the attached slice. For this reason, the JRM mode of a detached slice must be set to off in advance.
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), this procedure (detaching a slice) should not be performed.
11) Viewing the configuration and status of the application volume
On Node1 or Node2 in the primary domain, view the configuration and status of application volume Volume1 that is the restore target. Confirm that Volume1 is in STOP status, that restore target slice Volume1.Disk1 is in STOP status, and that the other slices constituting the volume are in TEMP or TEMP-STOP status. If the volume or slice status is invalid, repair it referring to "F.1.3 Volume Status Abnormality" and "F.1.1 Slice Status Abnormality."
Check the underlined parts.
# sdxinfo -c Class1
OBJ    NAME    TYPE     SCOPE       SPARE
------ ------- -------- ----------- -----
class  Class1  shared   Node1:Node2     0

OBJ    NAME    TYPE   CLASS   GROUP   DEVNAM  DEVBLKS  DEVCONNECT       STATUS
------ ------- ------ ------- ------- ------- -------- ---------------- -------
disk   Disk1   mirror Class1  Group1  c1t1d1   8380800 Node1:Node2      ENABLE
disk   Disk2   mirror Class1  Group1  c2t1d1   8380800 Node1:Node2      ENABLE

OBJ    NAME    CLASS   DISKS               BLKS     FREEBLKS SPARE
------ ------- ------- ------------------- -------- -------- -----
group  Group1  Class1  Disk1:Disk2          8290304  7176192     0

OBJ    NAME    CLASS   GROUP   SKIP JRM 1STBLK  LASTBLK  BLOCKS   STATUS
------ ------- ------- ------- ---- --- ------- -------- -------- --------
volume *       Class1  Group1  *    *         0    65535    65536 PRIVATE
volume Volume1 Class1  Group1  off  on    65536  1114111  1048576 STOP
volume *       Class1  Group1  *    *   1114112  8290303  7176192 FREE

OBJ    CLASS   GROUP   DISK    VOLUME  STATUS
------ ------- ------- ------- ------- --------
slice  Class1  Group1  Disk1   Volume1 STOP
slice  Class1  Group1  Disk2   Volume1 TEMP
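The per-slice statuses to be confirmed here can also be extracted with a one-liner like the following, which in this example should print STOP for Disk1 and TEMP for Disk2; the awk field positions assume the slice table layout shown above.
# sdxinfo -c Class1 | awk '$1 == "slice" && $5 == "Volume1" { print $4, $6 }'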
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), you must confirm that all of the slices of Volume1 are in STOP status.
12) Creating a shadow volume for restoration
On backup server Node3, create a volume for restoration (shadow volume) on disk c1t1d1. The following settings are necessary on Node3. The shadow volume for restoration and the shadow volume for backup are common; if one already exists, this procedure is not required.
Note
Application volume data may be damaged if data is written into a shadow volume in incorrect configuration. Be sure to confirm that the shadow volume configuration is correct in step 12-5).
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), a shadow volume for restoration must be created in the same configuration as Volume1. Under these circumstances, the shadow volume for restoration is not the same as the shadow volume for backup.
12-1) Registering a shadow disk
Register disk c1t1d1 with shadow class Class2, and name it Disk1.
# sdxshadowdisk -M -c Class2 -d c1t1d1=Disk1
Point
The disk name must correspond to the disk name assigned to c1t1d1 in step 1-1). The disk names assigned in 1-1) can be viewed in the NAME field for disk information displayed with the sdxinfo command in step 11).
The class can be assigned any name.
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), all of those disks (c1t1d1 and c2t1d1) must be registered with a shadow class.
12-2) Creating a shadow group
Connect shadow disk Disk1 to mirror type shadow group Group1.
# sdxshadowdisk -C -c Class2 -g Group1 -d Disk1
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), all of those disks (c1t1d1 and c2t1d1) must be connected to a shadow group.
12-3) Creating a shadow volume
Create shadow volume Volume1 in shadow group Group1.
# sdxshadowvolume -M -c Class2 -g Group1 -v Volume1 -s 1048576
Point
The volume must be created in the size corresponding to the volume size in step 1-3). The size of a volume created in step 1-3) can be viewed in the BLOCKS field for volume information displayed with the sdxinfo command in step 11).
If there are multiple volumes, the corresponding shadow volumes must be created in the order of ascending values (first block numbers) in the 1STBLK field for volume information displayed with the sdxinfo command in step 11).
The volume can be assigned any name.
12-4) Setting the access mode of the shadow volume
Activate shadow volume Volume1 in the read and write access mode (rw).
# sdxshadowvolume -F -c Class2 -v Volume1
# sdxshadowvolume -N -c Class2 -v Volume1 -e mode=rw
12-5) Viewing the configuration of the shadow volume
Using the sdxinfo command, confirm that the group configuration and the volume configuration are correct, based on the DISKS field of the group information, the 1STBLK and BLOCKS fields of the volume information, and so on.
Check the underlined parts.
# sdxinfo -c Class2
OBJ    NAME    TYPE   CLASS   GROUP   DEVNAM  DEVBLKS  DEVCONNECT       STATUS
------ ------- ------ ------- ------- ------- -------- ---------------- -------
disk   Disk1   mirror Class2  Group1  c1t1d1   8380800 Node3            ENABLE

OBJ    NAME    CLASS   DISKS               BLKS     FREEBLKS SPARE
------ ------- ------- ------------------- -------- -------- -----
group  Group1  Class2  Disk1                8290304  7176192     0

OBJ    NAME    CLASS   GROUP   SKIP JRM 1STBLK  LASTBLK  BLOCKS   STATUS
------ ------- ------- ------- ---- --- ------- -------- -------- --------
volume *       Class2  Group1  *    *         0    65535    65536 PRIVATE
volume Volume1 Class2  Group1  off  off   65536  1114111  1048576 ACTIVE
volume *       Class2  Group1  *    *   1114112  8290303  7176192 FREE

OBJ    CLASS   GROUP   DISK    VOLUME  STATUS
------ ------- ------- ------- ------- --------
slice  Class2  Group1  Disk1   Volume1 ACTIVE
13) Restoring from tape
On backup server Node3, restore the shadow volume data that was backed up to tape in step 5). The following shows examples of restoring data to shadow volume Volume1 from a tape medium of tape device /dev/rmt/0.
See
For details on the restore method, see the manuals of the file system being restored and of the commands used.
Information
In a GFS Shared File System
Restore data using the method described in step 13a).
13a) When restoring data held in a raw device with the dd(1M) command
# dd if=/dev/rmt/0 of=/dev/sfdsk/Class2/rdsk/Volume1 bs=32768
13b) When restoring a ufs file system with the tar(1) command
13b-1) Create a ufs file system on shadow volume Volume1.
# newfs /dev/sfdsk/Class2/rdsk/Volume1
13b-2) Mount the ufs file system on shadow volume Volume1 on /mnt1, a temporary mount point.
# mkdir /mnt1
# mount -F ufs /dev/sfdsk/Class2/dsk/Volume1 /mnt1
13b-3) Restore data held in the file system from tape.
# cd /mnt1
# tar xvf /dev/rmt/0
13b-4) Unmount the file system mounted in step 13b-3).
# cd /
# umount /mnt1
13c) When restoring a ufs file system with the ufsrestore(1M) command
13c-1) Create a ufs file system on shadow volume Volume1.
# newfs /dev/sfdsk/Class2/rdsk/Volume1
13c-2) Mount the ufs file system on shadow volume Volume1 on /mnt1, a temporary mount point.
# mkdir /mnt1
# mount -F ufs /dev/sfdsk/Class2/dsk/Volume1 /mnt1
13c-3) Restore data held in the file system from tape.
# cd /mnt1
# ufsrestore rvf /dev/rmt/0
13c-4) Delete the temporary file created by the ufsrestore(1M) command.
# rm /mnt1/restoresymtable
13c-5) Unmount the file system mounted in step 13c-2).
# cd /
# umount /mnt1
14) Removing the shadow volume
After the restore process is complete, remove the shadow volume to prevent improper access to it. The following procedure must be performed on backup server Node3.
14-1) Stopping the shadow volume
Stop shadow volume Volume1.
# sdxshadowvolume -F -c Class2 -v Volume1
14-2) Removing the shadow volume
Remove shadow volume Volume1.
# sdxshadowvolume -R -c Class2 -v Volume1
14-3) Removing the shadow group
Remove shadow group Group1.
# sdxshadowgroup -R -c Class2 -g Group1
14-4) Removing the shadow disk
Remove shadow disk Disk1.
# sdxshadowdisk -R -c Class2 -d Disk1
Information
If access can be gained from backup server Node3 to all of the disks constituting Volume1 (c1t1d1 and c2t1d1), all of the disks registered with shadow class Class2 in step 12) (c1t1d1 and c2t1d1) must be removed.
15) Resuming the services and reattaching the slice to the application volume
Resume the services in the primary domain. The following procedure should be performed on the node that runs the services.
Information
In the following example, the services are resumed before the application volume is resynchronized, so that resynchronization completes while the services are running. If resynchronizing the volume must precede resuming the services, perform the steps in the order of 15-1), 15-3), 15-4) (confirming that the synchronization copying is complete), and then 15-2).
15-1) Activating the application volume
Activate application volume Volume1.
# sdxvolume -N -c Class1 -v Volume1
15-2) Resuming the services
When the file system on application volume Volume1 was unmounted in step 8), mount it again.
Start the applications using Volume1.
15-3) Reattaching the slice to the application volume
Reattach slice Volume1.Disk2 that was temporarily detached from application volume Volume1 in step 10) back to Volume1.
# sdxslice -R -c Class1 -d Disk2 -v Volume1
After returning from the command, synchronization copying from the slice on Disk1 of volume Volume1 to the slice on Disk2 is executed.
15-4) Viewing the copy status
The status of synchronization copying can be viewed using the sdxinfo -S command. The copy destination slice is in COPY status if copying is in progress and it will be in ACTIVE status after the copy process ends normally (note, however, that it will be in STOP status when Volume1 is in STOP status).
# sdxinfo -S -c Class1 -o Volume1