Use the swsrpmake command to perform snapshot replication.
Refer to "6.1.1 Snapshot Replication Processing" for an explanation of snapshot replication.
The operation status of a physical copy can be checked by executing the swsrpstat command.
QuickOPC Type Replication
Execute QuickOPC replication by specifying the -T option in the swsrpmake command.
If no OPC session exists when the swsrpmake command is executed, the command starts snapshot processing (OPC physical copying) and tracking processing from the source volume to the destination volume.
To check the execution status of physical copying, use the swsrpstat command in the same way as for an ordinary snapshot replication.
After snapshot processing (OPC physical copy) is complete, only tracking processing is active.
To check the tracking status, execute the swsrpstat command with the -L option.
Entering the swsrpmake command with the -T option specified during tracking processing performs the physical copying of only the data that has been generated since the previous snapshot processing. This means that physical copying can be accomplished in a shorter period of time.
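As a sketch of how the tracking state can be read programmatically, the snippet below embeds sample swsrpstat -L output as text (the volume names and the "on" value in the Trk column are illustrative assumptions; in practice the text would come from running /opt/FJSVswsrp/bin/swsrpstat -L) and extracts the Trk column with awk.

```shell
# Sample swsrpstat -L output; an illustrative stand-in for the real command:
#   /opt/FJSVswsrp/bin/swsrpstat -L <original volume name>
output='Server Original-Volume Replica-Volume Direction Status Execute Trk Update Rcv Split Xfer Snap-Gen
SV1 /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 regular ---- ---- on 6% ---- ---- ---- ----'

# The Trk column is the 7th field of the data row; "on" means tracking is active
trk=$(printf '%s\n' "$output" | awk 'NR==2 {print $7}')
echo "tracking: $trk"
```

The same one-liner can drive a monitoring script that decides whether a differential (-T) backup is possible.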
To perform a restoration while tracking processing is being executed, you must restore by OPC (that is, execute the swsrpmake command without the -T option). QuickOPC cannot be executed in the reverse direction while tracking processing is being executed. Replication using QuickOPC is performed as follows:
[backup]
swsrpmake -T <original volume name> <replica volume name>

[restore]
swsrpmake <replica volume name> <original volume name>
Although the restoration is executed with OPC, only the data that has been updated since the previous replication (which can be checked in the Update column of the swsrpstat command) is copied.
Therefore, in replication using QuickOPC, both the physical backup and the restoration are completed in a short period of time.
The restore execution status can be checked by executing the swsrpstat command with the -E option specified.
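The Execute column of the swsrpstat -E output carries the restore progress. A minimal parsing sketch, with sample output embedded as text (in a real run it would come from /opt/FJSVswsrp/bin/swsrpstat -E):

```shell
# Sample swsrpstat -E output during an OPC restoration
output='Server Original-Volume Replica-Volume Direction Status Execute
SV1 /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 reverse snap 80%'

# Field 6 of the data row is the Execute column: physical copy progress
progress=$(printf '%s\n' "$output" | awk 'NR==2 {print $6}')
echo "restore progress: $progress"
```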
SnapOPC Type Replication
Execute SnapOPC type replications with the -C option specified in the swsrpmake command.
When the swsrpmake command is executed, a SnapOPC session is set up between the copy source volume and the copy destination volume.
# /opt/FJSVswsrp/bin/swsrpmake -C /dev/dsk/c1t0d0s1 /dev/dsk/c1t0d11s1
FROM=/dev/dsk/c1t0d0s1@SV1,TO=/dev/dsk/c1t0d11s1@SV1 swsrpmake completed
#
Unlike normal OPCs and QuickOPCs, SnapOPCs do not copy all of the data from the source volume, but instead copy only the data that has been updated on the source or destination since SnapOPC started. This kind of copy processing is referred to as "Copy-on-Write".
Note: Because the copy units used for host I/O and for storage device copies differ, data copying also occurs when the copy destination is updated.
The status of SnapOPC sessions can be checked using the swsrpstat command.
The following example shows the execution of the swsrpstat command immediately after a SnapOPC snapshot has started. While SnapOPC is being performed, "copy-on-write" is displayed in the Status column, and the amount of data updated since the last copy was created is displayed in the Update column as a percentage.
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t0d0s1
Server Original-Volume       Replica-Volume         Direction Status        Execute Trk Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 regular   copy-on-write ----    off 0%     ---- ----  ---- ----
#
If the swsrpmake command is executed again during SnapOPC processing, the SnapOPC session that has already been set up is cancelled, and a new session is set up.
Note
If the physical capacity of the copy destination volume is insufficient, the SnapOPC execution status changes to error suspend status ("failed"), and the replication volume cannot be used.
The SnapOPC execution status can be checked in the Status column of the swsrpstat command output.
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t0d0s1
Server Original-Volume       Replica-Volume         Direction Status Execute Trk Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 regular   failed ----    off ----   ---- ----  ---- ----
#
When the SnapOPC execution status is error suspend status ("failed"), refer to "8.4.2.3 Troubleshooting When Lack of Free Physical Space Has Occurred in Copy Destination Volume" and take appropriate action.
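A monitoring script can detect the error suspend status by scanning the Status column for "failed". A sketch, using the sample output above as embedded text (a real script would capture the output of /opt/FJSVswsrp/bin/swsrpstat -L instead):

```shell
# Sample swsrpstat -L output containing an error suspend session
output='Server Original-Volume Replica-Volume Direction Status Execute Trk Update Rcv Split Xfer Snap-Gen
SV1 /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 regular failed ---- off ---- ---- ---- ---- ----'

# Count data rows whose Status column (field 5) is "failed"
failed=$(printf '%s\n' "$output" | awk 'NR>1 && $5=="failed" {n++} END {print n+0}')
if [ "$failed" -gt 0 ]; then
  echo "$failed session(s) in error suspend status"
fi
```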
Perform restorations from the copy destination volume by running an OPC using the swsrpmake command.
# /opt/FJSVswsrp/bin/swsrpmake /dev/dsk/c1t0d11s1 /dev/dsk/c1t0d0s1
FROM=/dev/dsk/c1t0d11s1@SV1,TO=/dev/dsk/c1t0d0s1@SV1 swsrpmake completed
#
When restorations are executed, the SnapOPC session from the source volume to the destination volume is maintained as is, and a normal OPC from the replication destination volume to the replication source volume is started. At this point, the time taken to restore the physical copy is reduced, because only data that has been updated since the last copy is restored.
The execution status of restorations can be checked by specifying the -E option with the swsrpstat command.
# /opt/FJSVswsrp/bin/swsrpstat -E /dev/dsk/c1t0d0s1
Server Original-Volume       Replica-Volume         Direction Status Execute
SV1    /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 reverse   snap   80%
#
Note
If a SnapOPC is being performed between the source volume and the destination volume, restorations to volumes other than the source volume cannot be executed. To restore to a volume other than the source volume, operating system copy functions (such as the cp command or the copy command) must be used.
Additionally, if SnapOPCs are being performed to multiple copy destination volumes, restoration cannot be performed.
In this case, restoration using an OPC can be performed by cancelling the other SnapOPCs. However, the backup data on the copy destination volumes whose SnapOPC sessions were cancelled is lost.
To perform a restoration while still maintaining all SnapOPC sessions, operating system copy functions (such as the cp command or the copy command) must be used for the restoration.
However, if restoration is performed using operating system functions, the amount of updated data on the source volume increases, and there is a risk that the capacity of the SnapOPC volume will become insufficient.
SnapOPC+ Type Replication
Execute the swsrpmake command using the -P option to perform SnapOPC+ replication. This sets a SnapOPC+ session between the copy source volume and the copy destination volume. After the session is set, copy-on-write is performed between the copy source volume and the copy destination volume.
An execution example of the swsrpmake command with the -P option is shown below.
# /opt/FJSVswsrp/bin/swsrpmake -P /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1
FROM=/dev/dsk/c1t1d0s3@SV1,TO=/dev/dsk/c1t1d1s3@SV1 swsrpmake completed
#
At this time, the (logically copied) copy destination volume is saved as snap generation number 1.
The next time this command is executed with a different copy destination volume for the same copy source volume, the copy-on-write processing being executed between the copy source volume and the previous generation of the copy destination volume is stopped. Then, a SnapOPC+ session is set between the copy source volume and the newly specified copy destination volume, and copy-on-write is performed.
An execution example of the swsrpmake command with the -P option for the newly specified copy destination volume is shown below.
# /opt/FJSVswsrp/bin/swsrpmake -P /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1
FROM=/dev/dsk/c1t1d0s3@SV1,TO=/dev/dsk/c1t1d2s3@SV1 swsrpmake completed
#
This time, the (logically copied) copy destination volume is saved as snap generation number 2.
Similarly, each time there is a new copy destination volume, a snap generation number is assigned.
Note
If an earlier snap generation (other than the oldest snap generation) is specified as the copy destination volume when the swsrpmake command is executed, the command terminates with an error. If the oldest snap generation is specified as the copy destination volume, that snap generation is automatically discarded and a replica is created as the newest snap generation. In this case, subsequent snap generations (second, third) are assigned a snap generation number that is one generation prior (second generation => first generation, and third generation => second generation).
The operation status of SnapOPC+ replication can be checked by executing the swsrpstat command with the -L option.
For the most recent snap generation, "copy-on-write(active)" is displayed in the Status column; for past snap generations, "copy-on-write(inactive)" is displayed. The Update column displays the amount of data updated since the replica was created, as a percentage. The Snap-Gen column displays the snap generation number.
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status                  Execute Trk Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular   copy-on-write(inactive) ----    off 0%     ---- ----  ---- 1
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular   copy-on-write(active)   ----    off 5%     ---- ----  ---- 2
#
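The Snap-Gen number of the most recent generation can be picked out of this output mechanically. A minimal sketch, embedding the sample rows as text (in practice the output would come from running /opt/FJSVswsrp/bin/swsrpstat -L; the parsing logic is an illustration, not part of the product):

```shell
# Sample swsrpstat -L output for SnapOPC+ (two generations)
output='Server Original-Volume Replica-Volume Direction Status Execute Trk Update Rcv Split Xfer Snap-Gen
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular copy-on-write(inactive) ---- off 0% ---- ---- ---- 1
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular copy-on-write(active) ---- off 5% ---- ---- ---- 2'

# The most recent generation is the row whose Status is copy-on-write(active);
# its snap generation number is the last field (Snap-Gen)
latest=$(printf '%s\n' "$output" | awk '$5=="copy-on-write(active)" {print $NF}')
echo "most recent Snap-Gen: $latest"
```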
Note
If the physical capacity of the copy destination volume is insufficient, the SnapOPC+ execution status changes to error suspend status ("failed"), and the execution status of any SnapOPC+ sessions started earlier also changes to error suspend status ("failed"). Replication volumes in error suspend status ("failed") cannot be used.
The SnapOPC+ execution status can be checked in the Status column of the swsrpstat command output.
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status Execute Trk Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular   failed ----    off ----   ---- ----  ---- ----
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular   failed ----    off ----   ---- ----  ---- ----
#
When the SnapOPC+ execution status is error suspend status ("failed"), refer to "8.4.2.3 Troubleshooting When Lack of Free Physical Space Has Occurred in Copy Destination Volume" and take appropriate action.
To restore from the copy destination volume, execute the swsrpmake command to start OPC.
# /opt/FJSVswsrp/bin/swsrpmake /dev/dsk/c1t1d2s3@SV1 /dev/dsk/c1t1d0s3@SV1
FROM=/dev/dsk/c1t1d2s3@SV1,TO=/dev/dsk/c1t1d0s3@SV1 swsrpmake completed
#
The SnapOPC+ session from the replication source volume to the replication destination volume is maintained even when the swsrpmake command is executed for restoration.
Execution of restoration while maintaining the SnapOPC+ session reduces the physical copying time, because physical copying is performed only for data updated after the replica creation.
To check the restoration execution status, execute the swsrpstat command with the -E option.
# /opt/FJSVswsrp/bin/swsrpstat -E /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status Execute
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 ----      ----   ----
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 reverse   snap   80%
#
Note
Restoration may cause a capacity shortage of the copy destination volume, because updates are written to the most recent snap generation, whose copy-on-write status is active. Make sure that there is enough free space in the copy destination volume usage area before performing the restoration.
When data is written to the replication source volume by the restoration, the previously existing data is saved to the most recent snap generation. The amount of data saved to the most recent snap generation by the restoration is the total of the Copy usage amounts of the restoration target snap generation and all subsequent snap generations, excluding the most recent snap generation.
Use the procedure below to check that there is enough free space in the copy destination volume usage area. The example shows the check performed when restoring from snap generation (Snap-Gen) 2.
1. Execute the swsrpstat command to check the device names of the restoration target and subsequent snap generations, excluding the most recent snap generation (Snap-Gen 4 data in the example below).
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status                  Execute Trk Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular   copy-on-write(inactive) ----    off 8%     ---- ----  ---- 1
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular   copy-on-write(inactive) ----    off 12%    ---- ----  ---- 2
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d3s3@SV1 regular   copy-on-write(inactive) ----    off 0%     ---- ----  ---- 3
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d4s3@SV1 regular   copy-on-write(active)   ----    off 3%     ---- ----  ---- 4
In this example, /dev/dsk/c1t1d2s3 and /dev/dsk/c1t1d3s3 are targeted.
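This selection of target devices can be scripted. The following sketch embeds the sample output above as text and selects devices whose Snap-Gen is at or after the restoration target but before the most recent generation (the parsing logic is an illustrative assumption, not part of the product; a real script would capture the output of /opt/FJSVswsrp/bin/swsrpstat -L):

```shell
# Sample swsrpstat -L output (four generations), matching the example above
output='Server Original-Volume Replica-Volume Direction Status Execute Trk Update Rcv Split Xfer Snap-Gen
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular copy-on-write(inactive) ---- off 8% ---- ---- ---- 1
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular copy-on-write(inactive) ---- off 12% ---- ---- ---- 2
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d3s3@SV1 regular copy-on-write(inactive) ---- off 0% ---- ---- ---- 3
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d4s3@SV1 regular copy-on-write(active) ---- off 3% ---- ---- ---- 4'

restore_gen=2   # restoring from Snap-Gen 2
# Target devices: Snap-Gen >= restore target, excluding the most recent
# generation. The Replica-Volume field is $3; strip @server to get the device.
targets=$(printf '%s\n' "$output" | awk -v g="$restore_gen" '
  NR>1 { gen[NR]=$NF; vol[NR]=$3; if ($NF+0 > max) max=$NF+0 }
  END { for (i=2; i<=NR; i++) if (gen[i]+0 >= g && gen[i]+0 < max) {
          sub(/@.*/, "", vol[i]); print vol[i] } }')
echo "$targets"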
2. Calculate the total usage capacity of the devices that were checked in step 1.
If the copy destination volume is TPV
Use Storage Cruiser or ETERNUS Web GUI to calculate the total usage capacity of the devices in step 1.
If you use Storage Cruiser, the used capacity can be checked in the Used Capacity column in the Volume tab of the Thin Provisioning Details screen on Web Console. Refer to "Display Thin Provisioning Pool" in the Web Console Guide for information on how to check the used capacity.
If the copy destination volume is FTV
Use Storage Cruiser or ETERNUS Web GUI to calculate the total usage capacity of the devices in step 1.
If you use Storage Cruiser, the used capacity can be checked in the Used Capacity column of the FTV screen on Web Console. Refer to "Display FTV" in the Web Console Guide for information on how to check the used capacity.
If the copy destination volume is SDV
Execute the swstsdv command with the "stat" subcommand to calculate the total used capacity of the devices in step 1.
If SDP is used, also add the SDP usage capacity.
[/dev/dsk/c1t1d2s3 disk usage]
# /opt/FJSVswsts/bin/swstsdv stat /dev/dsk/c1t1d2s3
BoxID = 00E4000M3#####E450S20A####KD4030639004##
LUN = 110 (0x6E)
Rate Logical(sector) Physical(sector) Used(sector) Copy(sector) Host(sector) Pool(sector)
100% 8388608         1048576          1048576      1048384      192          640
[/dev/dsk/c1t1d3s3 disk usage]
# /opt/FJSVswsts/bin/swstsdv stat /dev/dsk/c1t1d3s3
BoxID = 00E4000M3#####E450S20A####KD4030639004##
LUN = 111 (0x6F)
Rate Logical(sector) Physical(sector) Used(sector) Copy(sector) Host(sector) Pool(sector)
4%   8388608         1048576          46928        16           46912        0
In this example, the quantity updated by the restoration is 1049040 (1048384+640+16) sectors.
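The update amount is the sum of the Copy(sector) and Pool(sector) values for each targeted device. A minimal sketch of this arithmetic, using the values from the swstsdv stat output above:

```shell
# Copy(sector) and Pool(sector) values taken from the swstsdv stat output above
c1t1d2s3_copy=1048384
c1t1d2s3_pool=640
c1t1d3s3_copy=16
c1t1d3s3_pool=0

# Total update amount generated by the restoration, in sectors
update=$(( c1t1d2s3_copy + c1t1d2s3_pool + c1t1d3s3_copy + c1t1d3s3_pool ))
echo "update amount: $update sectors"
```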
3. Check the free capacity of the copy destination volume.
If the copy destination volume is TPV
Use Storage Cruiser or ETERNUS Web GUI to check the free capacity of the Thin Provisioning Pool.
If you use Storage Cruiser, the free capacity can be checked in the Total Capacity column and the Used Capacity column of the Thin Provisioning Overview screen on Web Console. Refer to "Display Thin Provisioning Pool" in the Web Console Guide for information on how to check the capacity.
If the copy destination volume is FTV
Use Storage Cruiser or ETERNUS Web GUI to check the free capacity of the Tier pool.
If you use Storage Cruiser, the free capacity can be checked in the Total Capacity column and the Used Capacity column of the Tier pool detail screen that is displayed by selecting the target Tier pool in the Tier pool Overview screen on Web Console. Refer to "Display Tier Pool" in the Web Console Guide for information on how to check the capacity.
If the copy destination volume is SDV
Execute the swstsdv command with the "poolstat" subcommand to check the total capacity and the used capacity of SDP.
If SDV is not encrypted, check the total capacity and the used capacity where [Pool-Type] is "Normal".
If SDV is encrypted, check the total capacity and the used capacity where [Pool-Type] is "Encrypted".
# /opt/FJSVswsts/bin/swstsdv poolstat -G /dev/dsk/c1t1d0s3
BoxID = 00E4000M3#####E450S20A####KD4030639004##
Pool-Type Rate Total(sector) Used(sector) Copy(sector) Host(sector) Free(sector)
Normal 10% 20971520 2097152 0 2097152 18874368
Encrypted 0% 20971520 0 0 0 20971520
The disk usage in this example is approximately 15% (= (2097152 + 1049040) / 20971520 x 100).
If the update amount due to the restoration is less than the free capacity of the copy destination volume, the restoration is possible. However, in order to perform the restoration safely, it is recommended that you increase the free capacity with a disk expansion if the disk usage after the restoration is predicted to exceed 70%.
Since the required physical capacity is increased depending on the restoration, the capacity of the copy destination volume may be insufficient. Therefore, in order to prevent the physical capacity of the copy destination volume from becoming insufficient, refer to "6.2.3.3 Monitoring Usage of Copy Destination Volume" to review how to monitor the used capacity of the copy destination volume. In addition, consider increasing the number of disks as required.
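The prediction above can be automated with shell arithmetic. The sketch below uses the poolstat values from the example (the 70% threshold is the guideline stated above; the variable names are illustrative):

```shell
# Values from the swstsdv poolstat output above (Normal pool)
total=20971520    # Total(sector)
used=2097152      # Used(sector)
update=1049040    # update amount calculated in step 2

# Predicted disk usage after the restoration, as an integer percentage
usage=$(( (used + update) * 100 / total ))
echo "predicted usage: ${usage}%"

# Apply the 70% guideline from the recommendation above
if [ "$usage" -gt 70 ]; then
  echo "consider a disk expansion before restoring"
fi
```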
Note
If SnapOPC+ is being performed between the replication source volume and the replication destination volume, restoration cannot be performed to a volume other than the replication source volume.
Point
As a precaution against hardware malfunctions with SnapOPC+, it is recommended to operate it in conjunction with making full copies using OPC/QuickOPC/EC(REC).
An example of performing QuickOPC on Sundays and SnapOPC+ on Mondays to Saturdays is displayed below.
Concurrent OPC Functions
The Concurrent OPC function is an ETERNUS Disk Storage system function that creates snapshots of multiple logical volumes simultaneously. This function allows replication of a database consisting of multiple volumes to be performed in a consistent state. It can be used with any of the following replication types: OPC, QuickOPC, SnapOPC, or SnapOPC+.
The following diagram shows the operation in ETERNUS Disk Storage system.
Note
If you force-quit the swsrpmake command with Task Manager or Ctrl-C, the Concurrent OPC session may remain defined. In this case, execute the command again, or stop the session with the swsrpcancel command.
The maximum number of pairs that can simultaneously implement Concurrent OPC depends on the specifications of the ETERNUS Disk Storage system.