ETERNUS SF AdvancedCopy Manager V15.1 Operation Guide

6.5.1 Executing snapshot replication

Use swsrpmake (Replication creation command) to perform snapshot replication.
Refer to "6.1.1 Snapshot replication processing" for an explanation of snapshot replication.

The operation status of a physical copy can be checked by executing swsrpstat (Operation status display command).


QuickOPC type replication

Execute QuickOPC replication by specifying the -T option in swsrpmake (Replication creation command).
If no OPC session exists when swsrpmake (Replication creation command) is executed, the command starts snapshot processing (OPC physical copying) and tracking processing from the source volume to the destination volume.

Figure 6.16 When replication creation command is executed (first time)

To check the execution status of physical copying, use swsrpstat (Operation status display command) in the same way as for an ordinary snapshot replication.
After snapshot processing (OPC physical copy) is complete, only tracking processing is active.
To check the tracking status, use swsrpstat (Operation status display command) with the -L option.

Figure 6.17 When snapshot processing is completed

Entering swsrpmake (Replication creation command) with the -T option specified during tracking processing performs the physical copying of only the data that has been generated since the previous snapshot processing. This means that physical copying can be accomplished in a shorter period of time.

Figure 6.18 When replication creation command is executed (second time)

To perform a restoration while tracking processing is being executed, you must restore by OPC (that is, execute swsrpmake (Replication creation command) without the -T option). QuickOPC cannot be executed in the reverse direction while tracking processing is being executed. Replication using QuickOPC is performed as follows:

[backup]
swsrpmake -T <original volume name> <replica volume name>

[restore]
swsrpmake <replica volume name> <original volume name>

Although the restoration is executed with OPC, only the data that has been updated since the previous replication (shown in the Update field of swsrpstat) is copied.
Therefore, in replication using QuickOPC, not only a physical backup but also restoration is completed in a short period of time.
The restore execution status can be checked by executing swsrpstat (Operation status display command) with the -E option specified.
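The backup and restore commands above can be put together as a small script. The following is a minimal sketch, assuming a POSIX shell; SWSRP defaults to "echo" so the sequence can be dry-run without the actual AdvancedCopy Manager binaries installed.

```shell
#!/bin/sh
# Sketch of the QuickOPC backup/restore cycle described above. Volume
# names are the ones used in this section; SWSRP defaults to "echo"
# (dry-run) when the binaries are not installed.
SWSRP="${SWSRP:-echo /opt/FJSVswsrp/bin/swsrpmake}"
SRC=/dev/dsk/c1t0d0s1       # original volume
DST=/dev/dsk/c1t0d11s1      # replica volume

# Backup: -T starts (or refreshes) QuickOPC tracking, so only data
# updated since the previous snapshot is physically copied.
$SWSRP -T "$SRC" "$DST"

# Restore: OPC in the reverse direction, WITHOUT -T (QuickOPC cannot be
# executed in the reverse direction while tracking processing is active).
$SWSRP "$DST" "$SRC"
```

Setting SWSRP to the real command path runs the same sequence for real.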


SnapOPC type replication

Execute SnapOPC type replications with the -C option specified in swsrpmake (Replication creation command).
When swsrpmake (Replication creation command) is executed, a SnapOPC session will be set up between the copy source volume and the copy destination volume.

Example
# /opt/FJSVswsrp/bin/swsrpmake -C /dev/dsk/c1t0d0s1 /dev/dsk/c1t0d11s1
FROM=/dev/dsk/c1t0d0s1@SV1,TO=/dev/dsk/c1t0d11s1@SV1 swsrpmake completed
#

Figure 6.19 When the replication creation command is executed

Unlike normal OPCs and QuickOPCs, SnapOPCs do not copy all of the data from the source volume, but instead copy only the data that has been updated on the source or destination since SnapOPC started. This kind of copy processing is referred to as "Copy-on-Write".

Figure 6.20 When the copy source volume is updated

Figure 6.21 When the copy destination volume is updated

Note: The units for host I/O and storage device copies are different (512 bytes for host I/O and 8 kilobytes for storage device copies), and therefore data copies also occur when the copy destination is updated.

The status of SnapOPC sessions can be checked using swsrpstat (Operation status display command).
The following example shows the execution of swsrpstat (Operation status display command) immediately after a SnapOPC snapshot has started. While SnapOPC is being performed, "copy-on-write" is displayed in the Status field, and the amount of data updated since the last copy was created is displayed in the Update field as a percentage.

Example
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t0d0s1
Server Original-Volume       Replica-Volume         Direction Status        Execute Trk  Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 regular   copy-on-write ----    off  0%     ---- ----  ---- ----
#

If swsrpmake (Replication creation command) is executed again during SnapOPC processing, the SnapOPC session that has already been set up will be cancelled, and a new session will be set up.

Note

When there is insufficient Snap Data Volume or Snap Data Pool capacity, the SnapOPC execution status changes to error suspend status ("failed"), and the replication volume cannot be used.
The SnapOPC execution status can be checked in the Status column of the swsrpstat (Operation status display command) output.

Example
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t0d0s1
Server Original-Volume       Replica-Volume         Direction Status Execute Trk  Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 regular   failed ----    off  ----   ---- ----  ---- ----
#

When the SnapOPC execution status is error suspend status ("failed"), refer to "7.4.2.3 Troubleshooting when a lack of free space has occurred in the Snap Data Volume or Snap Data Pool".

Perform restorations from a Snap Data Volume by running an OPC using swsrpmake (Replication creation command).

# /opt/FJSVswsrp/bin/swsrpmake /dev/dsk/c1t0d11s1 /dev/dsk/c1t0d0s1
FROM=/dev/dsk/c1t0d11s1@SV1,TO=/dev/dsk/c1t0d0s1@SV1 swsrpmake completed
#

When restorations are executed, the SnapOPC session from the source volume to the destination volume is maintained as is, and a normal OPC from the replication destination volume to the replication source volume is started. At this point, the time taken to restore the physical copy is reduced, because only data that has been updated since the last copy is restored.

Figure 6.22 When restoration is executed

The execution status of restorations can be checked by specifying the -E option with swsrpstat (Operation status display command).

# /opt/FJSVswsrp/bin/swsrpstat -E /dev/dsk/c1t0d0s1
Server Original-Volume       Replica-Volume         Direction Status Execute
SV1    /dev/dsk/c1t0d0s1@SV1 /dev/dsk/c1t0d11s1@SV1 reverse   snap   80%
#

Note

If a SnapOPC is being performed between the source volume and the destination volume, restorations to volumes other than the source volume cannot be executed. To restore to a volume other than the source volume, operating system copy functions (such as the cp command or the copy command) must be used.

Figure 6.23 When restoring to a volume other than the copy source volume
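The OS-level copy mentioned above can be done with a raw block copy such as dd. The following is a hedged sketch: restore_copy is a hypothetical helper (not part of the product), the device names are illustrative, and both volumes must be unmounted before a raw copy.

```shell
#!/bin/sh
# restore_copy is a hypothetical helper: it restores the contents of the
# SnapOPC replica to a target OTHER than the copy source using an
# OS-level block copy (dd), so the SnapOPC session itself is untouched.
restore_copy() {
  # $1: replica device (or file)
  # $2: restore target (must NOT be the SnapOPC copy source volume)
  dd if="$1" of="$2" bs=1024k 2>/dev/null
}

# Example (hypothetical device names; unmount both volumes first):
# restore_copy /dev/dsk/c1t0d11s1 /dev/dsk/c1t0d20s1
```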

Additionally, if SnapOPCs are being performed to multiple copy destination volumes, restoration cannot be performed.

Figure 6.24 When SnapOPC is performed for multiple copy destination volumes

In this case, restoration using an OPC can be performed by cancelling the other SnapOPCs. However, the backup data on the copy destination volumes whose SnapOPC sessions were cancelled will be lost.

Figure 6.25 When SnapOPC session cancelled to perform restoration

To perform a restoration while still maintaining all SnapOPC sessions, operating system copy functions (such as the cp command or the copy command) must be used for the restoration.

However, if restoration is performed using operating system functions, the amount of updated data on the source volume will increase, and there is a risk that the Snap Data Volume capacity will be insufficient.

Figure 6.26 When performing restoration without cancelling SnapOPC session


SnapOPC+ type replication

Execute swsrpmake (Replication creation command) using the -P option to perform SnapOPC+ replication. This sets a SnapOPC+ session between the copy source volume and the copy destination volume. After the session is set, copy-on-write is performed between the copy source volume and the copy destination volume.

An example of executing swsrpmake (Replication creation command) using the -P option is shown below.

Execution example
# /opt/FJSVswsrp/bin/swsrpmake -P /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1
FROM=/dev/dsk/c1t1d0s3@SV1,TO=/dev/dsk/c1t1d1s3@SV1 swsrpmake completed
#

At this time, the (logically copied) copy destination volume is saved as snap generation number 1.
The next time this command is executed with a different copy destination volume for the same copy source volume, the copy-on-write processing being executed between the copy source volume and the previous generation of the copy destination volume is stopped. Then, a SnapOPC+ session is set between the copy source volume and the newly specified copy destination volume, and copy-on-write is performed.
An example of executing swsrpmake (Replication creation command) using the -P option for the newly specified copy destination volume is shown below.

Execution example
# /opt/FJSVswsrp/bin/swsrpmake -P /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1
FROM=/dev/dsk/c1t1d0s3@SV1,TO=/dev/dsk/c1t1d2s3@SV1 swsrpmake completed
#

This time, the (logically copied) copy destination volume is saved as snap generation number 2.
Similarly, each time there is a new copy destination volume, a snap generation number is assigned.

Note

If an earlier snap generation (other than the oldest snap generation) is specified as the copy destination volume when swsrpmake (Replication creation command) is executed, the command terminates with an error. If the oldest snap generation is specified as the copy destination volume, that snap generation is automatically discarded and a replica is created as the newest snap generation. In this case, subsequent snap generations (second, third) are assigned a snap generation number that is one generation prior (second generation => first generation, and third generation => second generation).

Figure 6.27 When the oldest snap generation number is specified as the replication volume

The operation status of SnapOPC+ replication can be checked by executing swsrpstat (Operation status display command) with the -L option.
For the most recent snap generation, "copy-on-write(active)" is displayed in the Status field; for past snap generations, "copy-on-write(inactive)" is displayed. The Update field shows the amount of data updated since the replica was created, as a percentage. The Snap-Gen field shows the snap generation number.

Execution example
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status                  Execute Trk  Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular   copy-on-write(inactive) ----    off  0%     ---- ----  ---- 1
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular   copy-on-write(active)   ----    off  5%     ---- ----  ---- 2
#
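The active generation can be picked out of the swsrpstat -L output mechanically. The following is a minimal sketch with the sample lines from the execution example above hard-coded; in practice, pipe the actual command output instead.

```shell
#!/bin/sh
# Extract the Snap-Gen of the active SnapOPC+ session from swsrpstat -L
# output. The data lines from the execution example above are hard-coded
# here; in practice pipe "swsrpstat -L <device>" output instead.
swsrpstat_lines='SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular copy-on-write(inactive) ---- off 0% ---- ---- ---- 1
SV1 /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular copy-on-write(active) ---- off 5% ---- ---- ---- 2'

# Status is the 5th whitespace-separated field; Snap-Gen is the last.
ACTIVE_GEN=$(printf '%s\n' "$swsrpstat_lines" |
  awk '$5 == "copy-on-write(active)" { print $NF }')
echo "Active snap generation: $ACTIVE_GEN"
```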

Note

When there is insufficient Snap Data Volume or Snap Data Pool capacity, the SnapOPC+ execution status changes to error suspend status ("failed"), and the execution status of any SnapOPC+ sessions started earlier also changes to error suspend status ("failed"). Replication volumes in error suspend status ("failed") cannot be used.

The SnapOPC+ execution status can be checked in the Status field of the swsrpstat (Operation status display command) output.

Execution example
# /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status Execute Trk Update Rcv  Split Xfer Snap-Gen
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular   failed ----    off ----   ---- ----  ---- ----
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular   failed ----    off ----   ---- ----  ---- ----
#

When the SnapOPC+ execution status is error suspend status ("failed"), refer to "7.4.2.3 Troubleshooting when a lack of free space has occurred in the Snap Data Volume or Snap Data Pool".

To restore from a Snap Data Volume, use swsrpmake (Replication creation command) to start OPC.

# /opt/FJSVswsrp/bin/swsrpmake /dev/dsk/c1t1d2s3@SV1 /dev/dsk/c1t1d0s3@SV1
FROM=/dev/dsk/c1t1d2s3@SV1,TO=/dev/dsk/c1t1d0s3@SV1 swsrpmake completed
#

The SnapOPC+ session from the replication source volume to the replication destination volume is maintained even if the replication creation command is executed.

Execution of restoration while maintaining the SnapOPC+ session reduces the physical copying time, because physical copying is performed only for data updated after the replica creation.

Figure 6.28 Restoration with SnapOPC+ session maintained

To check the restoration execution status, execute swsrpstat (Operation status display command) with the -E option.

# /opt/FJSVswsrp/bin/swsrpstat -E /dev/dsk/c1t1d0s3
Server Original-Volume       Replica-Volume        Direction Status Execute
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 ----      ----   ----
SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 reverse   snap   80%
#

Note

Restoration may cause a Snap Data Pool to run low on free disk space, due to updates issued to the most recent snap data generation where the copy-on-write status is active. Make sure that there is enough free space in the Snap Data Pool usage area before performing restoration.

When restoration is executed, the data that existed on the replication source volume before being overwritten is saved to the most recent snap generation. The amount of data added to the most recent snap generation by the restoration is the total of the Copy usage amounts for the restoration target snap generation and all subsequent snap generations, excluding the most recent snap generation.


An example of how to calculate the update amount when restoring from snap generation (Snap-Gen) 2 is displayed below.

Use the procedure below to check the update amount for restoration:

  1. Use swsrpstat (Operation status display command) to check the device name of the restoration target and subsequent snap generations, except for the most recent snap generation (Snap-Gen 4 data in the example below).

    # /opt/FJSVswsrp/bin/swsrpstat -L /dev/dsk/c1t1d0s3
    Server Original-Volume       Replica-Volume        Direction Status                  Execute Trk  Update Rcv  Split Xfer Snap-Gen
    SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d1s3@SV1 regular   copy-on-write(inactive) ----    off  8%     ---- ----  ---- 1
    SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d2s3@SV1 regular   copy-on-write(inactive) ----    off  12%    ---- ----  ---- 2
    SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d3s3@SV1 regular   copy-on-write(inactive) ----    off  0%     ---- ----  ---- 3
    SV1    /dev/dsk/c1t1d0s3@SV1 /dev/dsk/c1t1d4s3@SV1 regular   copy-on-write(active)   ----    off  3%     ---- ----  ---- 4

    In this example, /dev/dsk/c1t1d2s3 and /dev/dsk/c1t1d3s3 are targeted.

  2. Use swstsdv (Snap Data Volume operation/reference command) with the "stat" subcommand to find the total copy usage amount for the devices identified in the previous step.

    If a Snap Data Pool is used, also add the Snap Data Pool usage capacity.

    [/dev/dsk/c1t1d2s3 disk usage]

    # /opt/FJSVswsts/bin/swstsdv stat /dev/dsk/c1t1d2s3
    BoxID = 00E4000M3#####E450S20A####KD4030639004##
    LUN = 110 (0x6E)
    Rate Logical(sector) Physical(sector) Used(sector) Copy(sector) Host(sector) Pool(sector)
    100% 8388608         1048576          1048576      1048384      192          640

    [/dev/dsk/c1t1d3s3 disk usage]

    # /opt/FJSVswsts/bin/swstsdv stat /dev/dsk/c1t1d3s3
    BoxID = 00E4000M3#####E450S20A####KD4030639004##
    LUN = 111 (0x6F)
    Rate Logical(sector) Physical(sector) Used(sector) Copy(sector) Host(sector) Pool(sector)
    4%   8388608         1048576          46928        16           46912        0

    In this example, the quantity updated by the restoration is 1049040 (1048384+640+16) sectors.
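The arithmetic in this step can be expressed as a small script. The following is a minimal sketch with the sector counts from the example output hard-coded.

```shell
#!/bin/sh
# Restoration update amount = sum of Copy(sector) (plus Pool(sector)
# when a Snap Data Pool is used) over the restoration target and
# subsequent generations, excluding the most recent one. Values are
# taken from the example "swstsdv stat" output above.
COPY_GEN2=1048384   # Copy(sector) for /dev/dsk/c1t1d2s3 (Snap-Gen 2)
POOL_GEN2=640       # Pool(sector) for /dev/dsk/c1t1d2s3
COPY_GEN3=16        # Copy(sector) for /dev/dsk/c1t1d3s3 (Snap-Gen 3)

UPDATE=$((COPY_GEN2 + POOL_GEN2 + COPY_GEN3))
echo "Restoration update amount: ${UPDATE} sectors"
```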


To check the Snap Data Pool total capacity and the usage area capacity, use swstsdv (Snap Data Volume operation/reference command) with the "poolstat" subcommand.

If the Snap Data Volume is not encrypted, check the usage area capacity and the total capacity where Pool-Type is Normal. If it is encrypted, check the usage area capacity and the total capacity where Pool-Type is Encrypted.

# /opt/FJSVswsts/bin/swstsdv poolstat -G /dev/dsk/c1t1d0s3
BoxID = 00E4000M3#####E450S20A####KD4030639004##
Pool-Type Rate Total(sector) Used(sector) Copy(sector) Host(sector) Free(sector)
Normal    10%  20971520      2097152      0            2097152      18874368
Encrypted 0%   20971520      0            0            0            20971520

In this example, the disk usage after restoration is approximately 15% ((2097152 + 1049040) / 20971520 x 100).
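The usage figure follows from the poolstat output and the restoration update amount calculated in the example (1,049,040 sectors). A minimal sketch of the calculation:

```shell
#!/bin/sh
# Predicted Snap Data Pool usage after restoration:
#   (Used + restoration update amount) / Total x 100
USED=2097152        # Used(sector) of the Normal pool ("swstsdv poolstat")
UPDATE=1049040      # restoration update amount from the example above
TOTAL=20971520      # Total(sector) of the Normal pool

RATE=$(( (USED + UPDATE) * 100 / TOTAL ))
echo "Predicted usage after restoration: ${RATE}%"
```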


If the value obtained by adding the size of the Snap Data Pool usage area to the restoration update amount is less than the total capacity, then restoration is possible. However, in order to safely perform restoration, it is recommended to extend the Snap Data Pool if the disk usage after restoration is predicted to exceed 70%.

In addition, if the disk usage is expected to exceed 50%, then consider extending the Snap Data Pool after restoration and increasing the monitoring frequency of the Snap Data Pool.
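The guidance in the preceding paragraphs can be summarized as a simple decision helper. This is a hedged sketch: check_restore and its message strings are illustrative only, not part of the product.

```shell
#!/bin/sh
# check_restore: apply the recommended 70% / 50% thresholds to a
# predicted post-restoration usage rate (%). Helper name and messages
# are illustrative, not product output.
check_restore() {
  rate=$1
  if [ "$rate" -ge 100 ]; then
    echo "not possible: predicted usage exceeds total capacity"
  elif [ "$rate" -ge 70 ]; then
    echo "extend the Snap Data Pool before restoring"
  elif [ "$rate" -ge 50 ]; then
    echo "restore, then extend the pool and monitor it more frequently"
  else
    echo "restoration is possible"
  fi
}

check_restore 15
```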

For details on Snap Data Pool monitoring, refer to "6.2.3.4 Snap Data Volume/Snap Data Pool monitoring".

Note

If SnapOPC+ is being performed between the replication source volume and the replication destination volume, restoration cannot be performed to a volume other than the replication source volume.

Point

As a precaution against hardware malfunctions with SnapOPC+, it is recommended to operate it in conjunction with making full copies using OPC/QuickOPC/EC(REC).
An example of performing QuickOPC on Sundays and SnapOPC+ on Mondays to Saturdays is displayed below.

Figure 6.29 Example of operation using SnapOPC+ and QuickOPC
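The weekly pattern in Figure 6.29 can be scripted. The following is a minimal sketch: the QuickOPC replica volume name and the per-day SnapOPC+ destinations are hypothetical, and SWSRP defaults to "echo" so the script can be dry-run.

```shell
#!/bin/sh
# Weekly schedule from the example: QuickOPC full backup on Sunday,
# SnapOPC+ on the other days. SWSRP defaults to "echo" (dry-run);
# replica volume names are hypothetical.
SWSRP="${SWSRP:-echo /opt/FJSVswsrp/bin/swsrpmake}"
SRC=/dev/dsk/c1t1d0s3@SV1
FULL_DST=/dev/dsk/c1t1d9s3@SV1   # QuickOPC replica (hypothetical)

backup_for_day() {
  # $1: day of week as printed by "date +%u" (1=Monday ... 7=Sunday)
  if [ "$1" -eq 7 ]; then
    $SWSRP -T "$SRC" "$FULL_DST"    # Sunday: full backup by QuickOPC
  else
    # Monday-Saturday: SnapOPC+ to a per-day destination volume, so six
    # snap generations rotate through the week (names hypothetical)
    $SWSRP -P "$SRC" "/dev/dsk/c1t1d${1}s3@SV1"
  fi
}

backup_for_day "$(date +%u)"
```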


6.5.1.1 Concurrent OPC functions

The Concurrent OPC function is an ETERNUS Disk Storage system function that creates snapshots of multiple logical volumes simultaneously. It makes it possible to replicate a database that consists of multiple volumes in a consistent state. This function can be used with OPC, QuickOPC, SnapOPC, and SnapOPC+ replications.

The following diagram shows the operation in ETERNUS Disk Storage system.

Figure 6.30 Operations in ETERNUS Disk Storage system

Note