For dual node mutual standby configurations, use the following procedure to uninstall Systemwalker Operation Manager from cluster systems.
In this example, the shared disk devices are "/dev/vg01/lvol1" and "/dev/vg02/lvol1", the mount points for the shared disks are "/disk1" and "/disk2", and the Systemwalker Operation Manager package names are "omgr1" and "omgr2".
1. Stop the Systemwalker Operation Manager packages registered with MC/ServiceGuard
Stop the Systemwalker Operation Manager packages registered with MC/ServiceGuard by executing the cmhaltpkg MC/ServiceGuard command on both nodes where Systemwalker Operation Manager is running.
# cmhaltpkg -v omgr1
# cmhaltpkg -v omgr2
2. Stop Systemwalker Operation Manager
Stop Systemwalker Operation Manager on both nodes by using the poperationmgr command.
# /opt/systemwalker/bin/poperationmgr
Refer to the Systemwalker Operation Manager Reference Guide for details on the poperationmgr command.
3. Delete the service application registered with MC/ServiceGuard
Delete the Systemwalker Operation Manager service application registered with MC/ServiceGuard by executing the cmdeleteconf MC/ServiceGuard command on either node.
# cmdeleteconf -p omgr1
# cmdeleteconf -p omgr2
4. Move resources to each node
Move resources that had been moved to the shared disk back to each node.
On the active node, activate the volume groups by executing the vgchange HP-UX command, and then mount the shared disks.
# vgchange -c n /dev/vg01
# vgchange -a y /dev/vg01
# mount /dev/vg01/lvol1 /disk1
# vgchange -c n /dev/vg02
# vgchange -a y /dev/vg02
# mount /dev/vg02/lvol1 /disk2
Delete symbolic links
From the node where the shared disk is mounted, delete the symbolic links that were created to the resources that were moved to the shared disk.
# rm /var/opt/FJSVfwseo/JM1
# rm /var/opt/FJSVfwseo/JM2
# rm /opt/FHPjmcal/post/sys1
# rm /opt/FHPjmcal/post/sys2
# rm /opt/FHPJOBSCH/db/JOBDB1
# rm /opt/FHPJOBSCH/db/JOBDB2
# rm /var/spool/mjes/mjes1
# rm /var/spool/mjes/mjes2
# rm /etc/mjes/mjes1
# rm /etc/mjes/mjes2
# rm /var/opt/FJSVstem/stemDB1 (*1)
# rm /var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Copy resources back from the shared disk to the local disks
Move the resources that were moved to the shared disk when the cluster system was created back to the local disks.
# mv /disk1/FHPjmcal/post/sys1 /opt/FHPjmcal/post/sys1
# mv /disk2/FHPjmcal/post/sys2 /opt/FHPjmcal/post/sys2
# mv /disk1/FHPJOBSCH/JOBDB1 /opt/FHPJOBSCH/db/JOBDB1
# mv /disk2/FHPJOBSCH/JOBDB2 /opt/FHPJOBSCH/db/JOBDB2
# mv /disk1/FHPMJS/var/spool/mjes1 /opt/FHPMJS/var/spool/mjes/mjes1
# mv /disk2/FHPMJS/var/spool/mjes2 /opt/FHPMJS/var/spool/mjes/mjes2
# mv /disk1/FHPMJS/etc/mjes1 /opt/FHPMJS/etc/mjes/mjes1
# mv /disk2/FHPMJS/etc/mjes2 /opt/FHPMJS/etc/mjes/mjes2
# mv /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
# mv /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Unmount the shared disk
Unmount the shared disks, and then deactivate the volume groups by executing the vgchange HP-UX command.
# umount /disk1
# vgchange -a n /dev/vg01
# umount /disk2
# vgchange -a n /dev/vg02
Copy resources to the standby node
Copy the resources that were moved back to the local disk on the active node over to the standby node.
Delete symbolic links
On the standby node, delete the symbolic links that were created to the resources that were moved to the shared disk.
# rm /var/opt/FJSVfwseo/JM1
# rm /var/opt/FJSVfwseo/JM2
# rm /opt/FHPjmcal/post/sys1
# rm /opt/FHPjmcal/post/sys2
# rm /opt/FHPJOBSCH/db/JOBDB1
# rm /opt/FHPJOBSCH/db/JOBDB2
# rm /var/spool/mjes/mjes1
# rm /var/spool/mjes/mjes2
# rm /etc/mjes/mjes1
# rm /etc/mjes/mjes2
# rm /var/opt/FJSVstem/stemDB1 (*1)
# rm /var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Copy resources
Copy the resources that were moved back to the local disk on the active node to the standby node. In the following example, the commands are entered from the active node, and the name of the standby node is "hp02".
Temporarily compress the spool directories for Job Execution Control before copying them. In the following example, these directories are compressed as "mjes1.tar" and "mjes2.tar".
# rcp -r /opt/FHPJOBSCH/db/JOBDB1 hp02:/opt/FHPJOBSCH/db/JOBDB1
# rcp -r /opt/FHPJOBSCH/db/JOBDB2 hp02:/opt/FHPJOBSCH/db/JOBDB2
# cd /opt/FHPMJS/var/spool/mjes
# tar -cvf ./mjes1.tar mjes1
# tar -cvf ./mjes2.tar mjes2
# rcp /opt/FHPMJS/var/spool/mjes/mjes1.tar hp02:/opt/FHPMJS/var/spool/mjes/mjes1.tar
# rcp /opt/FHPMJS/var/spool/mjes/mjes2.tar hp02:/opt/FHPMJS/var/spool/mjes/mjes2.tar
# rcp -r /opt/FHPMJS/etc/mjes/mjes1 hp02:/opt/FHPMJS/etc/mjes/mjes1
# rcp -r /opt/FHPMJS/etc/mjes/mjes2 hp02:/opt/FHPMJS/etc/mjes/mjes2
# rm /opt/FHPMJS/var/spool/mjes/mjes1.tar
# rm /opt/FHPMJS/var/spool/mjes/mjes2.tar
# rcp -r /opt/FHPjmcal/post/sys1 hp02:/opt/FHPjmcal/post/sys1
# rcp -r /opt/FHPjmcal/post/sys2 hp02:/opt/FHPjmcal/post/sys2
# rcp -r /var/opt/FJSVstem/stemDB1 hp02:/var/opt/FJSVstem/stemDB1 (*1)
# rcp -r /var/opt/FJSVstem/stemDB2 hp02:/var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Decompress the spool directory for Job Execution Control on the standby node.
# cd /opt/FHPMJS/var/spool/mjes
# tar -xvf mjes1.tar
# tar -xvf mjes2.tar
# rm mjes1.tar
# rm mjes2.tar
5. Restore the settings to start and stop daemons automatically
On both nodes, restore the automatic start/stop settings that were canceled in "2.2.3 Canceling the settings for starting and stopping daemons automatically" to their original state. Similarly, undo any changes that were made to the process monitoring targets.
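As an illustration only, if an rc start link was disabled by renaming it when the cluster was set up, it could be restored by renaming it back (the "_S28JMCAL" name below is hypothetical; use whatever method was actually applied in your environment):
# mv /sbin/rc3.d/_S28JMCAL /sbin/rc3.d/S28JMCAL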
6. Cancel the settings for automatic reflection
Use the following command to cancel the security information settings:
# mpaclcls -u
Use the following command to cancel the settings for automatically reflecting calendar and service/application startup information:
# /opt/FHPjmcal/bin/calsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command and calsetcluster command.
7. Delete cluster information
Execute the mpsetcluster command with the "-d" option on both nodes to delete cluster information registered after Systemwalker Operation Manager was installed.
# /opt/systemwalker/bin/mpsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpsetcluster command.
8. Delete unnecessary directories from the shared disk
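The following is an example only, based on the directory names used earlier in this procedure; the directories that actually remain depend on your configuration. Activate and mount each shared disk, remove the leftover directories, and then unmount the disk again (repeat for "/dev/vg02" and "/disk2"):
# vgchange -a y /dev/vg01
# mount /dev/vg01/lvol1 /disk1
# rm -r /disk1/FHPjmcal /disk1/FHPJOBSCH /disk1/FHPMJS /disk1/FJSVstem
# umount /disk1
# vgchange -a n /dev/vg01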
At this stage, Systemwalker Operation Manager will have been released from its application to the cluster system.
9. Uninstallation
Uninstall Systemwalker Operation Manager on both nodes.
If Systemwalker Operation Manager is to be reinstalled, first uninstall it using the procedure above, and then perform a fresh installation onto the cluster system (on each node).
To perform an upgrade installation, release Systemwalker Operation Manager from its application to the cluster system by following steps 1 through 9 above, perform the upgrade installation, and then reapply Systemwalker Operation Manager to the cluster system, starting from "2.2.3 Canceling the settings for starting and stopping daemons automatically".
Note
When uninstalling Systemwalker Operation Manager using the procedure above, be sure to cancel the calendar automatic reflection settings using the "/opt/FHPjmcal/bin/calsetcluster -d" command. Otherwise, the following symbolic link will remain and will need to be deleted manually:
/sbin/rc3.d/S28JMCAL
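If this link has been left behind, delete it manually, for example:
# rm /sbin/rc3.d/S28JMCAL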