This section describes the procedure for backing up Systemwalker Operation Manager resources in a cluster system.
Backing up resources while daemons are running
Execute the mpbko backup command with the -SN option specified to back up the Systemwalker Operation Manager resources on a cluster system while daemons are running.
# /opt/systemwalker/bin/mpbko -b backup directory -SN
In the following example, resources are backed up to the "/var/tmp/OMGRBack" directory.
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack -SN
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Note
On dual node standby configuration, information on the active subsystems is backed up but information on the standby subsystems is not backed up. You need to make a backup of information on the nodes of each active subsystem as required.
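The per-node invocation can be sketched as a small dry-run script. The backup directory is a hypothetical choice, and the command line is only composed and echoed so the sequence can be reviewed before running it on each node that hosts an active subsystem:

```shell
#!/bin/sh
# Dry-run sketch: online backup (-SN) for one active node.
# BACKUP_DIR is a hypothetical value; change it for your site.
BACKUP_DIR=/var/tmp/OMGRBack
MPBKO=/opt/systemwalker/bin/mpbko

# Compose the command rather than executing it, so the exact line
# can be inspected first (repeat on every active-subsystem node).
backup_cmd="$MPBKO -b $BACKUP_DIR -SN"
echo "$backup_cmd"
```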
Backing up resources while daemons are not running
Perform the following steps to back up the Systemwalker Operation Manager resources on a cluster system while daemons are not running.
Example of how to back up resources in PRIMECLUSTER (Solaris/Linux) and Sun Cluster systems
Stop the daemons on the active nodes.
Stop all the Systemwalker Operation Manager daemons running on the active nodes. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to stop daemons.
Mount the shared disk on the active nodes:
When the daemons are stopped with the cluster system management function in step 1, the shared disk is unmounted, so mount it on the active nodes manually. An example is shown below:
[Solaris version]
# mount /disk1 (*1)
*1: The mount point for the shared disk is "/disk1".
[Linux version]
# /bin/mount /dev/sdb1 /disk1 (*2)
*2: The device is "/dev/sdb1" and the mount point for the shared disk is "/disk1".
For N:1 active/standby configuration, the symbolic links to the shared disk are deleted, so recreate them manually.
In the following example, the shared disk has been mounted to "/disk1".
# ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
# ln -s /disk1/FJSVjmcal/post /var/opt/FJSVjmcal/post
# ln -s /disk1/FJSVJOBSC /var/opt/FJSVJOBSC
# ln -s /disk1/FJSVMJS/var/spool/mjes /var/spool/mjes
# ln -s /disk1/FJSVMJS/etc/mjes /etc/mjes
# ln -s /disk1/FJSVstem /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
For dual node mutual standby configuration, delete the symbolic links to Job Execution Control for the subsystems on the standby node.
In the following example, subsystem 1 is the active node, and subsystem 2 is the standby node.
# rm /etc/mjes/mjes2
# rm /var/spool/mjes/mjes2
Back up resources on the active nodes.
Execute the mpbko backup command on the active nodes.
# /opt/systemwalker/bin/mpbko -b backup directory
In the following example, resources are backed up to the "/var/tmp/OMGRBack" directory.
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Steps 4 and 5 below apply only to the 1:1 active/standby configuration (with subsystems and partial cluster operation). They back up the resources of subsystems on the standby node that are not used in cluster operation, and can be skipped if those resources do not need to be backed up.
Stop Systemwalker Operation Manager daemons on the standby node.
Stop all the Systemwalker Operation Manager daemons running on the standby node. Execute the following command to stop only those subsystems not registered with a cluster system:
# /opt/systemwalker/bin/poperationmgr
Perform backup operation on the standby node.
Back up subsystems not registered with a cluster system at the standby node.
Temporarily delete any Job Execution Control symbolic links of subsystems registered with a cluster system.
The following example shows how deletion is performed when Subsystem 1 is registered with a cluster system:
# rm /etc/mjes/mjes1
# rm /var/spool/mjes/mjes1
Execute the backup command mpbko on the standby node.
# /opt/systemwalker/bin/mpbko -b backup destination directory
In the following example, "/var/tmp/OMGRBack" is designated as the backup destination directory:
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Recreate the Job Execution Control symbolic links that were deleted.
The following example shows how creation is performed when Subsystem 1 is registered with a cluster system and "/disk1" is designated as the shared disk:
# ln -s /disk1/FJSVMJS/spool/mjes1 /var/spool/mjes/mjes1
# ln -s /disk1/FJSVMJS/etc/mjes1 /etc/mjes/mjes1
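Taken together, steps 4 and 5 on the standby node could be scripted as the following dry-run sketch. The subsystem number, shared-disk path, and backup directory are assumptions for illustration, and each command is echoed rather than executed:

```shell
#!/bin/sh
# Dry-run sketch of the standby-node backup (steps 4 and 5).
# SUBSYS, SHARED and BACKUP_DIR are hypothetical example values.
SUBSYS=1                      # subsystem registered with the cluster
SHARED=/disk1                 # shared disk mount point
BACKUP_DIR=/var/tmp/OMGRBack  # backup destination

run() { echo "$@"; }          # swap 'echo "$@"' for '"$@"' to execute

run /opt/systemwalker/bin/poperationmgr            # stop standby daemons
run rm "/etc/mjes/mjes$SUBSYS"                     # temporarily remove the
run rm "/var/spool/mjes/mjes$SUBSYS"               #   cluster subsystem links
run /opt/systemwalker/bin/mpbko -b "$BACKUP_DIR"   # back up
run ln -s "$SHARED/FJSVMJS/spool/mjes$SUBSYS" "/var/spool/mjes/mjes$SUBSYS"
run ln -s "$SHARED/FJSVMJS/etc/mjes$SUBSYS" "/etc/mjes/mjes$SUBSYS"
```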
Unmount the shared disk.
Unmount the shared disk that was mounted manually in step 2. The following example shows how to unmount it.
[Solaris version]
# umount /disk1 (*1)
*1: The mount point for the shared disk is "/disk1".
[Linux version]
# /bin/umount /disk1 (*2)
*2: The mount point for the shared disk is "/disk1".
For N:1 active/standby configuration, delete the symbolic links to the shared disk.
# rm /var/opt/FJSVfwseo/JM
# rm /var/opt/FJSVjmcal/post
# rm /var/opt/FJSVJOBSC
# rm /var/spool/mjes
# rm /etc/mjes
# rm /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
For dual node mutual standby configuration, recreate the symbolic links to Job Execution Control for the subsystems on the standby node.
In the following example, subsystem 1 is the active node, and subsystem 2 is the standby node.
# ln -s /disk2/FJSVMJS/spool/mjes2 /var/spool/mjes/mjes2
# ln -s /disk2/FJSVMJS/etc/mjes2 /etc/mjes/mjes2
Start the daemons on the active nodes.
On the active nodes, start all the Systemwalker Operation Manager daemons registered with the cluster system. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to start daemons.
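The active-node portion of the procedure (Linux version) can be outlined as a dry-run script. The device name, mount point, and backup directory follow the examples above; each command is echoed for review rather than executed, and stopping/restarting the daemons is left to the cluster management function:

```shell
#!/bin/sh
# Dry-run outline of the active-node steps (Linux version).
# /dev/sdb1 and /disk1 are the example values from the text.
run() { echo "$@"; }     # swap 'echo "$@"' for '"$@"' to execute

# Step 2: after stopping the daemons via the cluster management
# function, mount the shared disk manually.
run /bin/mount /dev/sdb1 /disk1
# Step 3: back up resources on the active node.
run /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
# Step 6: unmount the shared disk before restarting the daemons.
run /bin/umount /disk1
```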
Example of how to back up resources in MC/ServiceGuard systems
In the following example, the shared disk device is set to "/dev/vg01/lvol1" and the mount point for the shared disk is set to "/disk1".
Stop the daemons on the active nodes.
Stop all the Systemwalker Operation Manager daemons running on the active nodes. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to stop daemons.
Mount the shared disk on the active nodes:
When the daemons are stopped with the cluster system in step 1, the shared disk is unmounted, so mount the shared disk on the active nodes manually.
# vgchange -a e /dev/vg01
# mount /dev/vg01/lvol1 /disk1
For N:1 active/standby configuration, the symbolic links to the shared disk are deleted, so recreate them manually.
# ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
# ln -s /disk1/FHPjmcal/post /opt/FHPjmcal/post
# ln -s /disk1/FHPJOBSCH /opt/FHPJOBSCH/db
# ln -s /disk1/FHPMJS/var/spool/mjes /var/spool/mjes
# ln -s /disk1/FHPMJS/etc/mjes /etc/mjes
# ln -s /disk1/FJSVstem /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
For dual node mutual standby configuration, delete the symbolic links to Job Execution Control for the subsystems on the standby node.
In the following example, subsystem 1 is the active node, and subsystem 2 is the standby node.
# rm /etc/mjes/mjes2
# rm /var/spool/mjes/mjes2
Back up resources on the active nodes.
Execute the mpbko backup command on the active nodes.
# /opt/systemwalker/bin/mpbko -b backup directory
In the following example, resources are backed up to the "/var/tmp/OMGRBack" directory.
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Steps 4 and 5 below apply only to the 1:1 active/standby configuration (with subsystems and partial cluster operation). They back up the resources of subsystems on the standby node that are not used in cluster operation, and can be skipped if those resources do not need to be backed up.
Stop the daemon on the standby node.
Stop each Systemwalker Operation Manager daemon that is running on the standby node. To stop a subsystem that is not registered in a cluster system, execute the following command:
# /opt/systemwalker/bin/poperationmgr
Perform the backup on the standby node.
Perform the backup of the subsystem that is not registered in the cluster system on the standby node.
Delete the Job Execution Control symbolic links of the subsystem that is registered in the cluster system.
In the following example, the symbolic links for subsystem 1, which is registered in the cluster system, are deleted.
# rm /etc/mjes/mjes1
# rm /var/spool/mjes/mjes1
Execute the mpbko backup command on the standby node.
# /opt/systemwalker/bin/mpbko -b backup destination directory name
In the following example, the backup destination directory is "/var/tmp/OMGRBack".
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Recreate the Job Execution Control symbolic links that were deleted.
In the following example, the Job Execution Control symbolic links will be recreated when subsystem 1 is registered in the cluster system and the shared disk is "/disk1".
# ln -s /disk1/FHPMJS/spool/mjes1 /var/spool/mjes/mjes1
# ln -s /disk1/FHPMJS/etc/mjes1 /etc/mjes/mjes1
Unmount the shared disk.
Unmount the shared disk that was mounted manually in step 2.
# umount /disk1
# vgchange -a n /dev/vg01
For N:1 active/standby configuration, delete the symbolic links to the shared disk.
# rm /var/opt/FJSVfwseo/JM
# rm /opt/FHPjmcal/post
# rm /opt/FHPJOBSCH/db
# rm /var/spool/mjes
# rm /etc/mjes
# rm /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
For dual node mutual standby configuration, recreate the symbolic links to Job Execution Control for the subsystems on the standby node.
In the following example, subsystem 1 is the active node, and subsystem 2 is the standby node.
# ln -s /disk2/FHPMJS/var/spool/mjes2 /var/spool/mjes/mjes2
# ln -s /disk2/FHPMJS/etc/mjes2 /etc/mjes/mjes2
Start the daemons on the active nodes.
On the active nodes, start all the Systemwalker Operation Manager daemons registered with the cluster system. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to start daemons.
Example of how to back up resources in HACMP systems
In the following example, the volume name of the shared disk is set to "datavg1" and the mount point to the shared disk is set to "/disk1".
Stop the daemons on the active nodes.
Stop all the Systemwalker Operation Manager daemons running on the active nodes. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to stop daemons.
Mount the shared disk on the active nodes:
When the daemons are stopped with the cluster system in step 1, the shared disk is unmounted, so mount the shared disk on the active nodes manually.
# varyonvg datavg1
# mount /disk1
For dual node mutual standby configuration, temporarily delete the symbolic links to Job Execution Control for the subsystems on the standby node.
In the following example, subsystem 1 is the active node, and subsystem 2 is the standby node.
# rm /etc/mjes/mjes2
# rm /var/spool/mjes/mjes2
Back up resources on the active nodes.
Execute the mpbko backup command on the active nodes.
# /opt/systemwalker/bin/mpbko -b backup directory
In the following example, resources are backed up to the "/var/tmp/OMGRBack" directory.
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Steps 4 and 5 below apply only to the 1:1 active/standby configuration (with subsystems and partial cluster operation). They back up the resources of subsystems on the standby node that are not used in cluster operation, and can be skipped if those resources do not need to be backed up.
Stop the daemon on the standby node.
Stop each Systemwalker Operation Manager daemon that is running on the standby node. To stop a subsystem that is not registered in a cluster system, execute the following command:
# /opt/systemwalker/bin/poperationmgr
Perform the backup on the standby node.
Perform the backup of the subsystem that is not registered in the cluster system on the standby node.
Delete the Job Execution Control symbolic links of the subsystem that is registered in the cluster system.
In the following example, the symbolic links for subsystem 1, which is registered in the cluster system, are deleted.
# rm /etc/mjes/mjes1
# rm /var/spool/mjes/mjes1
Execute the mpbko backup command on the standby node.
# /opt/systemwalker/bin/mpbko -b backup destination directory name
In the following example, the backup destination directory is "/var/tmp/OMGRBack".
# /opt/systemwalker/bin/mpbko -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
Recreate the Job Execution Control symbolic links that were deleted.
In the following example, the Job Execution Control symbolic links will be recreated when subsystem 1 is registered in the cluster system and the shared disk is "/disk1".
# ln -s /disk1/FAIXMJS/spool/mjes1 /var/spool/mjes/mjes1
# ln -s /disk1/FAIXMJS/etc/mjes1 /etc/mjes/mjes1
Unmount the shared disk.
Unmount the shared disk that was mounted manually in step 2.
# umount /disk1
# varyoffvg datavg1
For dual node mutual standby configuration, recreate the symbolic links to Job Execution Control for the subsystems on the standby node.
In the following example, subsystem 1 is the active node, and subsystem 2 is the standby node.
# ln -s /disk2/FAIXMJS/var/spool/mjes2 /var/spool/mjes/mjes2
# ln -s /disk2/FAIXMJS/etc/mjes2 /etc/mjes/mjes2
Start the daemons on the active nodes.
On the active nodes, start all the Systemwalker Operation Manager daemons registered with the cluster system. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to start daemons.
There is no need to back up information for the standby node, because the contents of the local disks are the same for the active and standby nodes, as a result of the operation performed in "2.9 Standardizing the Information Managed by Each Node".
For N:1 active/standby configuration and dual node mutual standby configuration, perform backup at each active node.