The procedure for restoring the Systemwalker Operation Manager resources in a cluster system is described below. This procedure is the same as the backup procedure, except for the following: executing the restore command, reflecting security information, and restoring calendar information and service/application startup information. Refer to "3.4.1 Procedure for backing up resources in cluster systems" for details.
Stop the daemons on the active nodes.
Stop all the Systemwalker Operation Manager daemons running on the active nodes. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to stop daemons.
Mount the shared disk on the active nodes:
Stopping the daemons with the cluster system management function in step 1 also unmounts the shared disk, so mount the shared disk on the active nodes manually.
For N:1 active/standby configuration, the symbolic links to the shared disk are deleted, so recreate them manually.
For dual node mutual standby configuration, delete the symbolic links to Job Execution Control for the subsystems on the standby node.
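As an illustrative sketch only, mounting the shared disk manually takes the following general form; the device name and mount point are placeholders for the values defined when the cluster was set up, and any symbolic links must use the link names created during cluster setup:
# mount device name of the shared disk mount point of the shared disk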
Restore resources on the active nodes.
Execute the mprso restore command on the active nodes.
# /opt/systemwalker/bin/mprso -b name of the directory where information has been backed up
In the following example, information has been backed up to the "/var/tmp/OMGRBack" directory.
# /opt/systemwalker/bin/mprso -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mprso command.
Steps 4 and 5 below are performed only for the 1:1 active/standby configuration (with subsystems and partial cluster operation). They restore the resources of the subsystems on the standby node that are not used in cluster operation, and can be skipped if those resources do not need to be restored.
Stop Systemwalker Operation Manager daemons on the standby node.
Stop the Systemwalker Operation Manager daemons running on the standby node. Execute the following command; it stops only those subsystems that are not registered with the cluster system:
# /opt/systemwalker/bin/poperationmgr
Perform restore operation on the standby node.
Restore subsystems not registered with a cluster system at the standby node.
At the standby node, temporarily delete any Job Execution Control symbolic links of subsystems registered with a cluster system.
Execute the restore command mprso on the standby node.
# /opt/systemwalker/bin/mprso -b name of the directory where information has been backed up
In the following example, "/var/tmp/OMGRBack" is designated as the directory where information has been backed up:
# /opt/systemwalker/bin/mprso -b /var/tmp/OMGRBack
Refer to the Systemwalker Operation Manager Reference Guide for details on the mprso command.
Recreate the Job Execution Control symbolic links that were deleted from the standby node.
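The delete and recreate operations for a Job Execution Control symbolic link take the following general form; the link name and link target shown here are placeholders, and the actual values must match the links created when the cluster was set up:
# rm name of the symbolic link for the cluster-registered subsystem
# ln -s directory on the shared disk name of the symbolic link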
Reflect security information on the shared disk.
For the N:1 active/standby configuration, stopping the daemons with the cluster system management function cancels the settings for automatic reflection of security information, so set the cluster information by executing the following command:
mpaclcls
Reflect security information on the shared disk for the active nodes by executing the following command.
1:1 active/standby and N:1 active/standby configurations:
mpcssave
Dual node mutual standby configuration:
mpcssave -s subsystem number for the active node
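For example, if subsystem number 1 is running on the active node in a dual node mutual standby configuration, the command would be as follows (the subsystem number is an assumption; use the number assigned to your active node):
mpcssave -s 1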
Unmount the shared disk.
Unmount the shared disk mounted manually in step 2.
For N:1 active/standby configuration, delete the symbolic links to the shared disk.
For dual node mutual standby configuration, recreate the symbolic links to Job Execution Control for the subsystems on the standby node.
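As a sketch only, the manual unmount takes the following form; the mount point is a placeholder for the one used in step 2:
# umount mount point of the shared disk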
Start the daemons on the active nodes.
On the active nodes, start all the Systemwalker Operation Manager daemons. Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to start daemons.
Copy information to the standby node.
Copy the following files from the active node to the standby node.
Monitoring host definition information
Password management list information/host information definitions
Job folder information
Web Console definition information
Refer to "2.9.3 Standardizing Systemwalker Operation Manager environment definitions," for details on copying this information.
Restoring calendar information and service/application startup information
After the resources have been restored on the active node, the following types of data created on the standby node after the backup was taken will be automatically reflected on the active node.
Calendar information
Service/application startup information
Use the procedure below for the appropriate situation:
To use data that existed when the backup was taken (and not use data created after the backup was taken)
After stopping the Systemwalker Operation Manager daemons on all nodes, delete the following files from each node (a command example is shown after the list). Then restore the resources; all nodes will then hold the definitions that existed when the backup was taken.
For PRIMECLUSTER systems (for the Solaris/Linux versions)/Sun Cluster system
/var/opt/FJSVjmcal/caldb/*.*
Only delete the files. Do not delete the directory.
/var/opt/FJSVjmcal/srvapp/f3crhsvb.ini
For MC/ServiceGuard systems
/opt/FHPjmcal/caldb/*.*
Only delete the files. Do not delete the directory.
/opt/FHPjmcal/srvapp/f3crhsvb.ini
For HACMP systems
/opt/FAIXjmcal/caldb/*.*
Only delete the files. Do not delete the directory.
/opt/FAIXjmcal/srvapp/f3crhsvb.ini
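For example, on a PRIMECLUSTER or Sun Cluster system the files can be deleted as follows (delete only the files and leave the caldb directory itself in place):
# rm /var/opt/FJSVjmcal/caldb/*.*
# rm /var/opt/FJSVjmcal/srvapp/f3crhsvb.ini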
Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to stop the Systemwalker Operation Manager daemons.
If either SYSTEM_CALENDAR or service/application startup information is not subject to automatic reflection
If the following information has been defined separately for each node, then this information will have to be backed up and restored on each node.
SYSTEM_CALENDAR schedule information, power schedule information, termination monitoring information
Service/application startup information
The procedures for backing up and restoring this information are as follows:
[Backup procedure]
Back up information on the active nodes.
Refer to "3.4.1 Procedure for backing up resources in cluster systems" for details on the back up procedure.
Back up the following files on the standby node separately using the copy command (see the example after the file list).
For PRIMECLUSTER systems (for the Solaris/Linux versions)/Sun Cluster system
/var/opt/FJSVjmcal/caldb/*.*
/var/opt/FJSVjmcal/srvapp/f3crhsvb.ini
For MC/ServiceGuard systems
/opt/FHPjmcal/caldb/*.*
/opt/FHPjmcal/srvapp/f3crhsvb.ini
For HACMP systems
/opt/FAIXjmcal/caldb/*.*
/opt/FAIXjmcal/srvapp/f3crhsvb.ini
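As an example for a PRIMECLUSTER or Sun Cluster system, the files could be saved with the copy command as follows; the backup destination /var/tmp/calbackup is an assumption, and any directory outside the product paths can be used:
# mkdir -p /var/tmp/calbackup/caldb /var/tmp/calbackup/srvapp
# cp -p /var/opt/FJSVjmcal/caldb/*.* /var/tmp/calbackup/caldb/
# cp -p /var/opt/FJSVjmcal/srvapp/f3crhsvb.ini /var/tmp/calbackup/srvapp/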
Do not save calendars or service/application startup information until the backup operations are completed on both the active and standby nodes.
[Restoration procedure]
Stop the Systemwalker Operation Manager daemons on both the active and standby nodes.
Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to stop the Systemwalker Operation Manager daemons.
On the active nodes, restore the data that was backed up on the active nodes.
Refer to "3.4.2 Procedure for restoring resources in cluster systems" for details on the restoration procedure.
Restore data on the standby node.
Copy the files that were backed up in step 2 of the backup procedure back to the standby node, placing them in the same directory paths from which they were backed up.
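Continuing the example above, if the files on the standby node were saved to /var/tmp/calbackup (an assumed location), they can be put back in place on a PRIMECLUSTER or Sun Cluster system as follows:
# cp -p /var/tmp/calbackup/caldb/*.* /var/opt/FJSVjmcal/caldb/
# cp -p /var/tmp/calbackup/srvapp/f3crhsvb.ini /var/opt/FJSVjmcal/srvapp/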
Start the Systemwalker Operation Manager daemons on both the active and standby nodes.
Refer to "3.2 Starting and Stopping Daemons in Cluster Systems" for details on how to start the Systemwalker Operation Manager daemons.
Information
Use the calsetcluster command to check the automatic reflection settings for each node.
For PRIMECLUSTER systems (for the Solaris/Linux versions)/Sun Cluster systems
# /opt/FJSVjmcal/bin/calsetcluster -v
For MC/ServiceGuard systems
# /opt/FHPjmcal/bin/calsetcluster -v
For HACMP systems
# /opt/FAIXjmcal/bin/calsetcluster -v
Items with "Disable" displayed in the standard output when this command is executed are not subject to automatic reflection.