This section describes how to uninstall Systemwalker Operation Manager from a cluster system with 1:1 active/standby configuration.
Use the following procedure to uninstall Systemwalker Operation Manager from the HACMP system.
1. Stopping the Systemwalker Operation Manager daemons registered with the HACMP system
2. Stopping Systemwalker Operation Manager
3. Deleting the resource group registered with the HACMP system
4. Deleting the application monitor
5. Deleting the application server
6. Synchronizing cluster definitions
7. Moving resources to the active node
8. Copying resources to the standby node
9. Restoring the settings to start and stop daemons automatically
10. Canceling the settings for automatic reflection
11. Deleting cluster information
12. Deleting unnecessary directories from the shared disk
13. Uninstalling Systemwalker Operation Manager
Deletion procedure
In the following example procedure, the Systemwalker Operation Manager resource group name, application server name, and application monitor name registered with the HACMP system are "omgr_rg", "omgr_ap", and "omgr_mon", respectively.
Stopping the Systemwalker Operation Manager daemons registered with the HACMP system
Stop the Systemwalker Operation Manager daemons running on the cluster system. Refer to "8.3 Starting and Stopping Daemons in the HACMP System" for how to stop daemons.
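As a reference, one common way to stop the daemons under HACMP control is to take the Systemwalker Operation Manager resource group offline. The following is only a sketch: the clRGmove utility, its installation path and its options depend on the HACMP version, and the node name "node1" is an example, so follow "8.3 Starting and Stopping Daemons in the HACMP System" if your environment differs.
# /usr/es/sbin/cluster/utilities/clRGmove -g omgr_rg -n node1 -d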
Stopping Systemwalker Operation Manager
On all nodes that make up the cluster system, stop Systemwalker Operation Manager by executing the poperationmgr command.
# /opt/systemwalker/bin/poperationmgr
Refer to the Systemwalker Operation Manager Reference Guide for details on the poperationmgr command.
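The poperationmgr command must be run on every node. If remote command execution is available between the nodes (the copy steps later in this procedure assume that rcp works between the example host names "node1" and "node2"), the standby node can, as a sketch, also be stopped from the active node:
# /opt/systemwalker/bin/poperationmgr
# rsh node2 /opt/systemwalker/bin/poperationmgr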
Deleting the resource group registered with the HACMP system
If only the application server of Systemwalker Operation Manager has been registered with the resource group, delete the resource group itself.
If other application servers have been registered with the resource group, delete the application server of Systemwalker Operation Manager from the resource group.
Depending on the operating environment, perform either of the following procedures:
Deleting the resource group
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> Extended Resource Group Configuration >> Remove a Resource Group.
In the window that displays the list of resource groups, select the resource group name of Systemwalker Operation Manager "omgr_rg" to delete it.
Deleting the application server from the resource group
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resource Group Configuration >> Change/Show Resources and Attributes for a Resource Group.
In the window that displays the list of resource groups, select the resource group name "omgr_rg".
From the Application Server Name field, delete "omgr_ap".
In addition, delete the service IP label/address, volume group and filesystem that have been registered as the resource of Systemwalker Operation Manager.
Deleting the application monitor
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resources Configuration >> Configure HACMP Applications >> Configure HACMP Application Monitoring >> Define Custom Application Monitor >> Remove a Custom Application Monitor.
In the Remove a Custom Application Monitor window, select the application monitor name of Systemwalker Operation Manager "omgr_mon" to remove it.
Deleting the application server
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Resource Configuration >> HACMP Extended Resources Configuration >> Configure HACMP Applications >> Configure HACMP Application Servers >> Remove an Application Server.
In the window that displays the list of application servers, select the application server name of Systemwalker Operation Manager "omgr_ap" to remove it.
Synchronizing cluster definitions
Synchronize the definition changes made on the active node to the standby node.
On the active node, enter "smitty hacmp" to open a SMIT session.
# smitty hacmp
In SMIT, select Extended Configuration >> Extended Verification and Synchronization.
Perform the synchronization.
Refer to the HACMP manual for details on synchronization.
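As an optional check after synchronization, the resource groups known to the cluster can be listed; the clRGinfo utility and the path shown below are assumptions that depend on the HACMP version. If the resource group itself was deleted, "omgr_rg" should no longer appear in the output.
# /usr/es/sbin/cluster/utilities/clRGinfo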
Moving resources to the active node
Move resources that had been moved to the shared disk back to the active node.
In the following example, the volume name of the shared disk is "datavg1" and the mount point to the shared disk is "/disk1".
On the active node, activate the volume group by executing the AIX varyonvg command, and then mount the shared disk.
# varyonvg datavg1
# mount /disk1
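If necessary, you can confirm with standard AIX commands that the volume group is active and the shared disk is mounted before moving any files: lsvg -o lists the active volume groups ("datavg1" should appear), and df shows whether "/disk1" is mounted.
# lsvg -o
# df /disk1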
Deleting symbolic links
On the active node, delete the symbolic links that were created for the resources that were moved to the shared disk.
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems)]
# rm /var/opt/FJSVfwseo/JM
# rm /opt/FAIXjmcal/post
# rm /opt/FAIXJOBSC/db
# rm /var/spool/mjes
# rm /etc/mjes
# rm /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# rm /var/opt/FJSVfwseo/JM1
# rm /opt/FAIXjmcal/post/sys1
# rm /opt/FAIXJOBSC/db/JOBDB1
# rm /var/spool/mjes/mjes1
# rm /etc/mjes/mjes1
# rm /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
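Before running the rm commands above, it is worth confirming that each path is still a symbolic link, so that only the link, and not data remaining on the shared disk, is removed. For example, for the configuration without subsystems:
# ls -l /var/opt/FJSVfwseo/JM /opt/FAIXjmcal/post /opt/FAIXJOBSC/db /var/spool/mjes /etc/mjes
Each entry should be shown with the "l" file type and an arrow pointing to a location on the shared disk.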
Moving resources from the shared disk to the local disk
Move the resources that were moved to the shared disk when the cluster system was configured back to the local disk, and restore the symbolic links.
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems)]
# mv /disk1/FAIXjmcal/post /opt/FAIXjmcal/post
# mv /disk1/FAIXJOBSC /opt/FAIXJOBSC/db
# mv /disk1/FAIXMJS/var/spool/mjes /opt/FAIXMJS/var/spool/mjes
# ln -s /opt/FAIXMJS/var/spool/mjes /var/spool/mjes
# mv /disk1/FAIXMJS/etc/mjes /opt/FAIXMJS/etc/mjes
# ln -s /opt/FAIXMJS/etc/mjes /etc/mjes
# mv /disk1/FJSVstem /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# mv /disk1/FAIXjmcal/post/sys1 /opt/FAIXjmcal/post/sys1
# mv /disk1/FAIXJOBSC/JOBDB1 /opt/FAIXJOBSC/db/JOBDB1
# mv /disk1/FAIXMJS/var/spool/mjes1 /opt/FAIXMJS/var/spool/mjes/mjes1
# mv /disk1/FAIXMJS/etc/mjes1 /opt/FAIXMJS/etc/mjes/mjes1
# mv /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
On the active node, unmount the shared disk, and then deactivate the volume group by executing the AIX varyoffvg command.
# umount /disk1
# varyoffvg datavg1
Copying resources to the standby node
Copy the resources that have been moved back to the local disk on the active node over to the standby node.
Deleting symbolic links
On the standby node, delete the symbolic links that were created for the resources that were moved to the shared disk.
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems)]
# rm /var/opt/FJSVfwseo/JM
# rm /opt/FAIXjmcal/post
# rm /opt/FAIXJOBSC/db
# rm /var/spool/mjes
# rm /etc/mjes
# rm /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# rm /var/opt/FJSVfwseo/JM1
# rm /opt/FAIXjmcal/post/sys1
# rm /opt/FAIXJOBSC/db/JOBDB1
# rm /var/spool/mjes/mjes1
# rm /etc/mjes/mjes1
# rm /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
Copying resources
Copy the resources that have been returned to the local disk on the active node to the standby node, and restore the symbolic links.
On the active node, temporarily archive the spool directory for Job Execution Control before copying it. In the following example, the directory is archived as "mjes.tar".
[1:1 active/standby configuration (without subsystems), 1:1 active/standby configuration (with subsystems)]
# cd /opt/FAIXMJS/var/spool
# tar -cvf ./mjes.tar mjes
# rcp /opt/FAIXMJS/var/spool/mjes.tar node2:/opt/FAIXMJS/var/spool/mjes.tar
# rm /opt/FAIXMJS/var/spool/mjes.tar
Copy the resources. In the following example, the resources are copied from the active node "node1" by executing the commands on the standby node.
# rcp -r node1:/opt/FAIXJOBSC/db /opt/FAIXJOBSC/db
# cd /opt/FAIXMJS/var/spool
# tar -xvf mjes.tar
# rm /opt/FAIXMJS/var/spool/mjes.tar
# ln -s /opt/FAIXMJS/var/spool/mjes /var/spool/mjes
# rcp -r node1:/opt/FAIXMJS/etc/mjes /opt/FAIXMJS/etc/mjes
# ln -s /opt/FAIXMJS/etc/mjes /etc/mjes
# rcp -r node1:/opt/FAIXjmcal/post /opt/FAIXjmcal/post
# rcp -r node1:/var/opt/FJSVstem /var/opt/FJSVstem (*1)
*1: Required only if the Master Schedule Management function is enabled.
[1:1 active/standby configuration (with subsystems and partial cluster operation)]
# cd /opt/FAIXMJS/var/spool/mjes
# tar -cvf ./mjes1.tar mjes1
# rcp /opt/FAIXMJS/var/spool/mjes/mjes1.tar node2:/opt/FAIXMJS/var/spool/mjes/mjes1.tar
# rm /opt/FAIXMJS/var/spool/mjes/mjes1.tar
Copy the resources. In the following example, the resources are copied from the active node "node1" by executing the commands on the standby node.
# rcp -r node1:/opt/FAIXJOBSC/db/JOBDB1 /opt/FAIXJOBSC/db/JOBDB1
# cd /opt/FAIXMJS/var/spool/mjes
# tar -xvf mjes1.tar
# rm mjes1.tar
# rcp -r node1:/opt/FAIXMJS/etc/mjes/mjes1 /opt/FAIXMJS/etc/mjes/mjes1
# rcp -r node1:/opt/FAIXjmcal/post/sys1 /opt/FAIXjmcal/post/sys1
# rcp -r node1:/var/opt/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
*1: Required only if the Master Schedule Management function is enabled.
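As a simple check after copying, you can verify on the standby node that the copied directories exist and that the recreated symbolic links point to the local disk. This sketch uses the paths of the configuration without subsystems; the two links should point to /opt/FAIXMJS/var/spool/mjes and /opt/FAIXMJS/etc/mjes respectively.
# ls -ld /opt/FAIXJOBSC/db /opt/FAIXjmcal/post
# ls -l /var/spool/mjes /etc/mjes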
Restoring the settings to start and stop daemons automatically
On each node, return the automatic start/stop settings that were canceled in "2.2.3 Canceling the settings for starting and stopping daemons automatically" to the original state. Similarly, undo the changes to the process monitoring target.
Canceling the settings for automatic reflection
Cancel the security information settings by executing the following command:
mpaclcls -u
Cancel the settings for automatically reflecting calendar and service application startup information by executing the following command:
/opt/FAIXjmcal/bin/calsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls and calsetcluster commands.
Deleting cluster information
On all nodes that make up the cluster system, execute the mpsetcluster command with the -d option to delete the cluster information that was registered after Systemwalker Operation Manager was installed.
# /opt/systemwalker/bin/mpsetcluster -d
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpsetcluster command.
Deleting unnecessary directories from the shared disk
Delete unnecessary directories from the shared disk.
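For example, assuming the shared disk used earlier in this procedure ("datavg1" mounted at "/disk1"), directories left over after the move steps, such as "/disk1/FAIXjmcal" and "/disk1/FAIXMJS", can be removed on the active node as follows. The directory names are only examples; check which Systemwalker Operation Manager directories actually remain in your environment before deleting anything.
# varyonvg datavg1
# mount /disk1
# rm -r /disk1/FAIXjmcal /disk1/FAIXMJS
# umount /disk1
# varyoffvg datavg1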
At this stage, Systemwalker Operation Manager will have been released from its application to the cluster system.
Uninstalling Systemwalker Operation Manager
Uninstall Systemwalker Operation Manager from the HACMP system.
If Systemwalker Operation Manager is to be reinstalled, first uninstall it using the procedure above, and then perform a fresh installation on the cluster system (on each node).
To perform an upgrade installation, release Systemwalker Operation Manager from its application to the cluster system by following steps 1 through 12 above, perform the upgrade installation, and then reapply Systemwalker Operation Manager to the cluster system, starting from "2.2.3 Canceling the settings for starting and stopping daemons automatically."