For 1:1 active/standby and N:1 active/standby configurations, use the following procedure to uninstall Systemwalker Operation Manager from cluster systems.
In this example, the package name for Systemwalker Operation Manager is "omgr", the shared disk device is "/dev/vg01/lvol1", and the mount point for the shared disk is "/disk1".
Stop the Systemwalker Operation Manager package registered with MC/ServiceGuard
Stop the Systemwalker Operation Manager package registered with MC/ServiceGuard by executing the MC/ServiceGuard cmhaltpkg command on each node where Systemwalker Operation Manager is running.
# cmhaltpkg -v omgr
Stop Systemwalker Operation Manager
Stop Systemwalker Operation Manager on each node in the cluster system by executing the poperationmgr command.
# /opt/systemwalker/bin/poperationmgr
Refer to the Systemwalker Operation Manager Reference Guide for details on the poperationmgr command.
Delete the service application registered with MC/ServiceGuard
Delete the Systemwalker Operation Manager service application registered with MC/ServiceGuard by executing the MC/ServiceGuard cmdeleteconf command on any node.
# cmdeleteconf -p omgr
Move the resources to the active node
Move the resources that had been moved to the shared disk back to the active node.
Mount the shared disk by executing the HP-UX vgchange and mount commands on the active node.
# vgchange -c n /dev/vg01
# vgchange -a y /dev/vg01
# mount /dev/vg01/lvol1 /disk1
Delete symbolic links
On the active node, delete the symbolic links that were created to the resources that were moved to the shared disk. This operation is not required for N:1 active/standby configurations.
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems)]
# rm /var/opt/FJSVfwseo/JM
# rm /opt/FHPjmcal/post
# rm /opt/FHPJOBSCH/db
# rm /var/spool/mjes
# rm /etc/mjes
# rm /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# rm /var/opt/FJSVfwseo/JM1
# rm /opt/FHPjmcal/post/sys1
# rm /opt/FHPJOBSCH/db/JOBDB1
# rm /var/spool/mjes/mjes1
# rm /etc/mjes/mjes1
# rm /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Move resources from the shared disk to the local disk
Move the resources that were moved to the shared disk when the cluster system was created back to the local disk on the active node, and restore the symbolic links.
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems), N:1 active/standby configuration]
# mv /disk1/FHPjmcal/post /opt/FHPjmcal/post
# mv /disk1/FHPJOBSCH /opt/FHPJOBSCH/db
# mv /disk1/FHPMJS/var/spool/mjes /opt/FHPMJS/var/spool/mjes
# ln -s /opt/FHPMJS/var/spool/mjes /var/spool/mjes
# mv /disk1/FHPMJS/etc/mjes /opt/FHPMJS/etc/mjes
# ln -s /opt/FHPMJS/etc/mjes /etc/mjes
# mv /disk1/FJSVstem /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# mv /disk1/FHPjmcal/post/sys1 /opt/FHPjmcal/post/sys1
# mv /disk1/FHPJOBSCH/JOBDB1 /opt/FHPJOBSCH/db/JOBDB1
# mv /disk1/FHPMJS/var/spool/mjes1 /opt/FHPMJS/var/spool/mjes/mjes1
# mv /disk1/FHPMJS/etc/mjes1 /opt/FHPMJS/etc/mjes/mjes1
# mv /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Unmount the shared disk and deactivate the volume group by executing the HP-UX umount and vgchange commands on the active node.
# umount /disk1
# vgchange -a n /dev/vg01
For N:1 active/standby configurations, repeat steps 1), 3) and 4) on each of the "N" active nodes.
Copy resources to the standby node
Copy the resources that have been moved back to the local disk on the active node over to the standby node.
Delete symbolic links
On the standby node, delete the symbolic links that were created to the resources that were moved to the shared disk. This operation is not required for N:1 active/standby configurations.
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems)]
# rm /var/opt/FJSVfwseo/JM
# rm /opt/FHPjmcal/post
# rm /opt/FHPJOBSCH/db
# rm /var/spool/mjes
# rm /etc/mjes
# rm /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# rm /var/opt/FJSVfwseo/JM1
# rm /opt/FHPjmcal/post/sys1
# rm /opt/FHPJOBSCH/db/JOBDB1
# rm /var/spool/mjes/mjes1
# rm /etc/mjes/mjes1
# rm /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Copy resources
Copy the resources that have been returned to the local disk on the active node to the standby node, and restore the symbolic links. For N:1 active/standby configurations, copy the resources from any of the "N" active nodes over to the standby node.
Temporarily archive the spool directory for Job Execution Control with tar before copying it. In the following example, this directory is archived as "mjes.tar".
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems), N:1 active/standby configuration]
# cd /opt/FHPMJS/var/spool
# tar -cvf ./mjes.tar mjes
# rcp /opt/FHPMJS/var/spool/mjes.tar hp02:/opt/FHPMJS/var/spool/mjes.tar
# rm /opt/FHPMJS/var/spool/mjes.tar
Copy the resources. In the following example, resources are copied from the "hp01" active node to the standby node. (This operation is performed on the standby node.)
# rcp -r hp01:/opt/FHPJOBSCH/db /opt/FHPJOBSCH/db
# cd /opt/FHPMJS/var/spool
# tar -xvf mjes.tar
# rm /opt/FHPMJS/var/spool/mjes.tar
# ln -s /opt/FHPMJS/var/spool/mjes /var/spool/mjes
# rcp -r hp01:/opt/FHPMJS/etc/mjes /opt/FHPMJS/etc/mjes
# ln -s /opt/FHPMJS/etc/mjes /etc/mjes
# rcp -r hp01:/opt/FHPjmcal/post /opt/FHPjmcal/post
# rcp -r hp01:/var/opt/FJSVstem /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# cd /opt/FHPMJS/var/spool/mjes
# tar -cvf ./mjes1.tar mjes1
# rcp /opt/FHPMJS/var/spool/mjes/mjes1.tar hp02:/opt/FHPMJS/var/spool/mjes/mjes1.tar
# rm /opt/FHPMJS/var/spool/mjes/mjes1.tar
Copy the resources. In the following example, resources are copied from the "hp01" active node to the standby node. (This operation is performed on the standby node.)
# rcp -r hp01:/opt/FHPJOBSCH/db/JOBDB1 /opt/FHPJOBSCH/db/JOBDB1
# cd /opt/FHPMJS/var/spool/mjes
# tar -xvf mjes1.tar
# rm mjes1.tar
# rcp -r hp01:/opt/FHPMJS/etc/mjes/mjes1 /opt/FHPMJS/etc/mjes/mjes1
# rcp -r hp01:/opt/FHPjmcal/post/sys1 /opt/FHPjmcal/post/sys1
# rcp -r hp01:/var/opt/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Restore the settings to start and stop daemons automatically
On each node, restore the automatic start/stop settings that were canceled in "2.2.3 Canceling the settings for starting and stopping daemons automatically" to their original state. Similarly, undo the changes made to the process monitoring target.
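How the settings are restored depends on what was changed during setup. As a minimal sketch, if the rc start scripts were renamed to disable automatic startup, they could be renamed back as shown below. The script name used here is a placeholder, not the actual Systemwalker Operation Manager script name; use the names recorded when the settings were canceled.
# mv /sbin/rc3.d/_S99example /sbin/rc3.d/S99example (*1)
*1: "S99example" is a placeholder for illustration only; adjust the path and name to match your environment.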
Cancel the settings for automatic reflection
Use the following command to cancel the security information settings:
mpaclcls -u
Use the following command to cancel the settings for automatically reflecting calendar and service/application startup information:
/opt/FHPjmcal/bin/calsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command and calsetcluster command.
Delete cluster information
Execute the mpsetcluster command with the "-d" option on each node in the cluster system to delete the cluster information that was registered after Systemwalker Operation Manager was installed.
# /opt/systemwalker/bin/mpsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpsetcluster command.
Delete unnecessary directories from the shared disk.
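The following is a sketch of this step, assuming the same volume group, mount point and directory layout used earlier in this section; check the contents of the shared disk and adjust the directory names before deleting. Mount the shared disk on one node, remove the directories that are no longer needed, and then unmount it.
# vgchange -a y /dev/vg01
# mount /dev/vg01/lvol1 /disk1
# rm -r /disk1/FHPjmcal /disk1/FHPJOBSCH /disk1/FHPMJS /disk1/FJSVstem (*1)
# umount /disk1
# vgchange -a n /dev/vg01
*1: The FJSVstem directory exists only if the Master Schedule Management function is enabled. Delete whichever of these directories remain in your environment.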
At this stage, Systemwalker Operation Manager will have been released from its application to the cluster system.
Uninstallation
Uninstall Systemwalker Operation Manager.
If Systemwalker Operation Manager is to be reinstalled, first uninstall it using the procedure above, and then perform a fresh installation onto the cluster system (on each node).
To perform an upgrade installation, release Systemwalker Operation Manager from its application to the cluster system by following steps 1 through 9 above, perform the upgrade installation, and then reapply Systemwalker Operation Manager to the cluster system, starting from "2.2.3 Canceling the settings for starting and stopping daemons automatically."
Note
When uninstalling Systemwalker Operation Manager using the procedure above, be sure to cancel the calendar automatic reflection settings using the "/opt/FHPjmcal/bin/calsetcluster -d" command. Otherwise, the following symbolic link will remain and will need to be deleted manually:
/sbin/rc3.d/S28JMCAL
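If this link has been left behind, it can be removed manually with the rm command, for example:
# rm /sbin/rc3.d/S28JMCAL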