This section describes how to uninstall Systemwalker Operation Manager from a cluster system with dual node mutual standby configuration.
Use the following procedure to uninstall Systemwalker Operation Manager from the HACMP system.
1. Stopping the Systemwalker Operation Manager daemons registered with the HACMP system
2. Stopping Systemwalker Operation Manager
3. Deleting resource groups registered with the HACMP system
4. Deleting the application monitor
5. Deleting the application server
6. Synchronizing cluster definitions
7. Moving the resources to the active node
8. Copying the resources to the standby node
9. Restoring the settings to start and stop daemons automatically
10. Canceling the settings for automatic reflection
11. Deleting cluster information
12. Deleting unnecessary directories from the shared disk
13. Uninstalling Systemwalker Operation Manager
Deletion procedure
In the following example of the deletion procedure, the Systemwalker Operation Manager resource group name and application server name registered with HACMP are "omgr_rg" and "omgr_ap" respectively.
Stopping the Systemwalker Operation Manager daemons registered with the HACMP system
Stop the Systemwalker Operation Manager daemons running in the cluster system. Refer to "8.3 Starting and Stopping Daemons in the HACMP System" for how to stop daemons.
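For reference, daemons registered with HACMP are normally stopped by bringing the corresponding resource group offline. The following is only a minimal sketch, assuming a typical HACMP 5.x environment, the resource group name "omgr_rg" used in this example, and an active node named "node1"; follow "8.3 Starting and Stopping Daemons in the HACMP System" for the authoritative procedure.
# /usr/es/sbin/cluster/utils/clRGmove -g omgr_rg -n node1 -d
Here -g specifies the resource group, -n specifies the node on which the resource group is currently online, and -d brings it offline. The same operation is also available from SMIT ("smitty cl_admin").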
Stopping Systemwalker Operation Manager
On both nodes, stop Systemwalker Operation Manager by executing the poperationmgr command.
# /opt/systemwalker/bin/poperationmgr
Refer to the Systemwalker Operation Manager Reference Guide for details on the poperationmgr command.
Deleting the resource groups registered with the HACMP system
If only the application server of Systemwalker Operation Manager has been registered with the resource group, delete the resource group itself.
If other application servers have been registered with the resource group, delete the application server of Systemwalker Operation Manager from the resource group.
Depending on the operating environment, perform either of the following procedures:
Deleting the resource group
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resource Group Configuration >> Remove a Resource Group.
In the window that displays the list of resource groups, select the resource group name of Systemwalker Operation Manager "omgr_rg" to delete it.
Deleting the application server from the resource group
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resource Group Configuration >> Change/Show Resources and Attributes for a Resource Group.
In the window that displays the list of resource groups, select the resource group name "omgr_rg."
From the Application Server Name field, delete "omgr_ap."
In addition, delete the service IP label/address, volume group, and filesystem that have been registered as resources of Systemwalker Operation Manager.
Deleting the application monitor
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resources Configuration >> Configure HACMP Applications >> Configure HACMP Application Monitoring >> Define Custom Application Monitor >> Remove a Custom Application Monitor.
In the window displayed, select the application monitor name of Systemwalker Operation Manager to remove it.
Deleting the application server
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resources Configuration >> Configure HACMP Applications >> Configure HACMP Application Servers >> Remove an Application Server.
In the window that displays the list of application servers, select the application server name of Systemwalker Operation Manager "omgr_ap" to delete it.
Synchronizing cluster definitions
Synchronize the definitions deleted on the active node with the standby node.
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Verification and Synchronization.
Perform synchronization.
Refer to the HACMP manual for details on synchronization.
Repeat steps 3 through 6 with the other resource group.
Moving the resources to the active node
Move the resources that had been moved to the shared disk back to the active node.
On the active node, activate the volume group by executing the AIX varyonvg command, and then mount the shared disk.
In the following example, the volume group names of the shared disk are "datavg1" and "datavg2", and the mount points of the shared disk are "/disk1" and "/disk2".
# varyonvg datavg1
# mount /disk1
# varyonvg datavg2
# mount /disk2
Deleting symbolic links
On the active node, delete the symbolic links that were created for the resources that were moved to the shared disk.
# rm /var/opt/FJSVfwseo/JM1
# rm /var/opt/FJSVfwseo/JM2
# rm /opt/FAIXjmcal/post/sys1
# rm /opt/FAIXjmcal/post/sys2
# rm /opt/FAIXJOBSC/db/JOBDB1
# rm /opt/FAIXJOBSC/db/JOBDB2
# rm /var/spool/mjes/mjes1
# rm /var/spool/mjes/mjes2
# rm /etc/mjes/mjes1
# rm /etc/mjes/mjes2
# rm /var/opt/FJSVstem/stemDB1 (*1)
# rm /var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Moving the resources from the shared disk to the local disk
Move the resources that were moved to the shared disk when the cluster system was configured back to the local disk, and restore the symbolic links.
# mv /disk1/FAIXjmcal/post/sys1 /opt/FAIXjmcal/post/sys1
# mv /disk2/FAIXjmcal/post/sys2 /opt/FAIXjmcal/post/sys2
# mv /disk1/FAIXJOBSC/JOBDB1 /opt/FAIXJOBSC/db/JOBDB1
# mv /disk2/FAIXJOBSC/JOBDB2 /opt/FAIXJOBSC/db/JOBDB2
# mv /disk1/FAIXMJS/var/spool/mjes1 /opt/FAIXMJS/var/spool/mjes/mjes1
# mv /disk2/FAIXMJS/var/spool/mjes2 /opt/FAIXMJS/var/spool/mjes/mjes2
# mv /disk1/FAIXMJS/etc/mjes1 /opt/FAIXMJS/etc/mjes/mjes1
# mv /disk2/FAIXMJS/etc/mjes2 /opt/FAIXMJS/etc/mjes/mjes2
# mv /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
# mv /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
On the active node, unmount the shared disk, and then deactivate the volume group by executing the AIX varyoffvg command.
# umount /disk1
# varyoffvg datavg1
# umount /disk2
# varyoffvg datavg2
Copying the resources to the standby node
Copy the resources that have been moved back to the local disk on the active node over to the standby node.
Deleting symbolic links
On the standby node, delete the symbolic links that were created for the resources that were moved to the shared disk.
# rm /var/opt/FJSVfwseo/JM1
# rm /var/opt/FJSVfwseo/JM2
# rm /opt/FAIXjmcal/post/sys1
# rm /opt/FAIXjmcal/post/sys2
# rm /opt/FAIXJOBSC/db/JOBDB1
# rm /opt/FAIXJOBSC/db/JOBDB2
# rm /var/spool/mjes/mjes1
# rm /var/spool/mjes/mjes2
# rm /etc/mjes/mjes1
# rm /etc/mjes/mjes2
# rm /var/opt/FJSVstem/stemDB1 (*1)
# rm /var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Copying the resources
Copy the resources that have been moved back to the local disk on the active node to the standby node, and then restore the symbolic links. In the following example, the standby node is "node2" and the commands are entered on the active node.
Temporarily archive the spool directories of Job Execution Control with tar before copying them. In the following example, they are archived as "mjes1.tar" and "mjes2.tar".
# rcp -r /opt/FAIXJOBSC/db/JOBDB1 node2:/opt/FAIXJOBSC/db/JOBDB1
# rcp -r /opt/FAIXJOBSC/db/JOBDB2 node2:/opt/FAIXJOBSC/db/JOBDB2
# cd /opt/FAIXMJS/var/spool/mjes
# tar -cvf ./mjes1.tar mjes1
# tar -cvf ./mjes2.tar mjes2
# rcp /opt/FAIXMJS/var/spool/mjes/mjes1.tar node2:/opt/FAIXMJS/var/spool/mjes/mjes1.tar
# rcp /opt/FAIXMJS/var/spool/mjes/mjes2.tar node2:/opt/FAIXMJS/var/spool/mjes/mjes2.tar
# rcp -r /opt/FAIXMJS/etc/mjes/mjes1 node2:/opt/FAIXMJS/etc/mjes/mjes1
# rcp -r /opt/FAIXMJS/etc/mjes/mjes2 node2:/opt/FAIXMJS/etc/mjes/mjes2
# rm /opt/FAIXMJS/var/spool/mjes/mjes1.tar
# rm /opt/FAIXMJS/var/spool/mjes/mjes2.tar
# rcp -r /opt/FAIXjmcal/post/sys1 node2:/opt/FAIXjmcal/post/sys1
# rcp -r /opt/FAIXjmcal/post/sys2 node2:/opt/FAIXjmcal/post/sys2
# rcp -r /var/opt/FJSVstem/stemDB1 node2:/var/opt/FJSVstem/stemDB1 (*1)
# rcp -r /var/opt/FJSVstem/stemDB2 node2:/var/opt/FJSVstem/stemDB2 (*1)
*1: Required only if the Master Schedule Management function is enabled.
On the standby node, extract the Job Execution Control spool directories that were copied in archive format.
# cd /opt/FAIXMJS/var/spool/mjes
# tar -xvf mjes1.tar
# tar -xvf mjes2.tar
# rm mjes1.tar
# rm mjes2.tar
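As an optional check, you can verify on the standby node that the extracted directories retain the expected ownership and permissions, for example:
# ls -ld /opt/FAIXMJS/var/spool/mjes/mjes1 /opt/FAIXMJS/var/spool/mjes/mjes2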
Restoring the settings to start and stop daemons automatically
On both nodes, return the automatic start/stop settings that were canceled in "2.2.3 Canceling the settings for starting and stopping daemons automatically" to the original state.
Canceling the settings for automatic reflection
Cancel the security information settings by executing the following command:
# mpaclcls -u
Cancel the settings for automatically reflecting calendar and service application startup information by executing the following command:
# /opt/FAIXjmcal/bin/calsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls and calsetcluster commands.
Deleting cluster information
On both nodes, execute the mpsetcluster command with the -d option specified to delete the cluster information registered after Systemwalker Operation Manager was installed.
# /opt/systemwalker/bin/mpsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpsetcluster command.
Deleting unnecessary directories from the shared disk
Delete unnecessary directories from the shared disk.
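A minimal sketch of this cleanup, assuming the volume group names, mount points, and shared-disk directories used in the examples above (check what actually exists on your shared disk and adjust the names accordingly; the /diskN/FJSVstem directories exist only if the Master Schedule Management function was enabled):
# varyonvg datavg1
# mount /disk1
# rm -r /disk1/FJSVfwseo /disk1/FAIXjmcal /disk1/FAIXJOBSC /disk1/FAIXMJS /disk1/FJSVstem
# umount /disk1
# varyoffvg datavg1
# varyonvg datavg2
# mount /disk2
# rm -r /disk2/FJSVfwseo /disk2/FAIXjmcal /disk2/FAIXJOBSC /disk2/FAIXMJS /disk2/FJSVstem
# umount /disk2
# varyoffvg datavg2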
At this point, Systemwalker Operation Manager is no longer applied to the cluster system.
Uninstalling Systemwalker Operation Manager
Uninstall Systemwalker Operation Manager from both nodes.
If Systemwalker Operation Manager is to be reinstalled, first uninstall it by using the procedure above, and then perform a fresh installation on the cluster system (on each node).
To perform an upgrade installation, release Systemwalker Operation Manager from the cluster system by following steps 1 through 12 above, perform the upgrade installation, and then reapply Systemwalker Operation Manager to the cluster system, starting from "2.2.3 Canceling the settings for starting and stopping daemons automatically."