
4.2 Uninstalling Systemwalker Operation Manager from the PRIMECLUSTER System

This section describes how to uninstall Systemwalker Operation Manager from the PRIMECLUSTER system.

The examples in this section assume that the Systemwalker Operation Manager application name is "omgr" and that the mount point of the shared disk is "/disk1".

  1. Stop applications registered with the cluster system.

    1. Select Global Cluster Services from the top menu of the PRIMECLUSTER Web-Based Admin View, and then select Cluster Admin from the displayed window.

      A pop-up menu will be displayed for selecting the node to connect to.

    2. Select the node from the pop-up menu.

      The RMS main window will be displayed.

    3. In the RMS tree of the RMS main window, right-click the Systemwalker Operation Manager cluster application to be stopped on the active node, and then select Offline from the pop-up menu. For N:1 active/standby and dual node mutual standby configurations, right-click each active cluster application and select Offline from the pop-up menu.

      The Systemwalker Operation Manager cluster application (in other words, the Systemwalker Operation Manager daemon managed by the cluster system) will stop.
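
      As an alternative to the GUI operation, a cluster application can normally also be taken offline with the RMS hvutil command. The following is only a sketch; replace the placeholder with the actual userApplication name and refer to the PRIMECLUSTER manual for the exact options:

      # hvutil -f <userApplication name>    # take the userApplication offline (placeholder name)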

  2. Delete user applications and their resources.

    Delete the Systemwalker Operation Manager user applications and resources that were registered in "4.1 Registering Systemwalker Operation Manager with the PRIMECLUSTER System".

    1. Select Global Cluster Services from the top menu of the PRIMECLUSTER Web-Based Admin View, and then select userApplication Configuration Wizard from the displayed window.

      A top menu of the userApplication Configuration Wizard window will be displayed.

    2. In the tree on the left-hand side of the window, right-click the Systemwalker Operation Manager user application to be deleted, and then select Remove userApplication or Resource from the displayed pop-up menu.

      A confirmation window will be displayed.

    3. Select All.

      The selected Systemwalker Operation Manager user application and all of its resources will be deleted.

    For N:1 active/standby configuration and dual node mutual standby configuration, delete all of the registered Systemwalker Operation Manager user applications and resources by repeating the procedure above.

  3. Delete resources that use the state transition procedure.

    Delete the resources that use the state transition procedure by using the cldelprocrsc command. In the example below, the resource ID is 113.

    Resource IDs that have been registered can be looked up with the clgetree command.

    Refer to the PRIMECLUSTER manual for details on the cldelprocrsc command and the clgetree command.

    # /etc/opt/FJSVcluster/bin/cldelprocrsc -r 113
  4. Delete the state transition procedure.

    Delete the state transition procedure by executing the cldelproc command on each of the active and standby nodes. In the following example, the procedure name is "omgr".

    # /etc/opt/FJSVcluster/bin/cldelproc -c SystemState3 omgr
  5. Stop daemons.

    On each node in the cluster system, use the poperationmgr command to stop each Systemwalker Operation Manager daemon. Refer to the Systemwalker Operation Manager Reference Guide for details on the poperationmgr command.
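
    For example (the path shown assumes the default installation directory):

    # /opt/systemwalker/bin/poperationmgr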

  6. Delete symbolic links.

    Delete the symbolic links for the resources that were moved to the shared disk when the cluster system was created. Perform this operation on each node in the cluster system.
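
    For example, if a symbolic link pointing to the shared disk was created for the job execution control spool directory during setup, it could be removed as follows (the link name below is a hypothetical illustration; delete the links that were actually created in your environment):

    # rm /var/opt/FJSVMJS/var/spool/mjes    # hypothetical link pointing to the shared disk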

  7. Copy resources back from the shared disk to the local disks.

    Copy the resources that were moved to the shared disk when the cluster system was created back to the local disks. Perform this operation on each active node.
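
    For example, if the job execution control spool directory was moved to the shared disk during setup, it could be copied back as follows (the shared disk path below is a hypothetical illustration; use the directories that were actually moved in "2.6 Moving Resources to the Shared Disk"):

    # cp -Rp /disk1/FJSVMJS/var/spool/mjes /var/opt/FJSVMJS/var/spool/    # hypothetical source path on the shared disk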

  8. Copy resources to the standby node.

    Copy the resources moved back to the local disk on the active node over to the standby node. For N:1 active/standby configuration, copy these resources to the standby node from any one of the N active nodes. For cascading configuration, copy these resources to all standby nodes.
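
    Any method that preserves ownership and permissions can be used. The following sketch transfers the job execution control spool directory with tar over ssh, assuming ssh access to a standby node named "node2" (both the host name and the directory are hypothetical illustrations):

    # cd /var/opt/FJSVMJS/var/spool
    # tar cf - mjes | ssh node2 'cd /var/opt/FJSVMJS/var/spool && tar xf -'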

  9. Create symbolic links.

    For 1:1 active/standby (without subsystems), 1:1 active/standby (with subsystems), N:1 active/standby and cascading configurations, recreate the symbolic links that were deleted when resources were moved to the shared disk.

    • Create "/var/spool/mjes" as a symbolic link to "var/opt/FJSVMJS/var/spool/mjes".

    • Create "/etc/mjes" as a symbolic link to "etc/opt/FJSVMJS/etc/mjes".

    Perform this operation on all active and standby nodes. This operation is not required for 1:1 active/standby (with subsystems and partial cluster operation) configurations or dual node mutual standby configurations.

  10. Restore the settings for starting and stopping daemons automatically.

    Restore the monitored process settings and the settings for automatically starting and stopping daemons, which were canceled in "2.2.3 Canceling the settings for starting and stopping daemons automatically", to their original state.

  11. Cancel the settings for automatic reflection.

    1. Use the following command to cancel the security information settings:

      # mpaclcls -u
    2. Use the following command to cancel the settings for automatically reflecting calendar and service/application startup information:

      # /opt/FJSVjmcal/bin/calsetcluster -d

    Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command and the calsetcluster command.

  12. Delete cluster information.

    Execute the mpsetcluster command with the "-d" option specified on each node in the cluster system to delete cluster information that was registered after Systemwalker Operation Manager was installed.

    # /opt/systemwalker/bin/mpsetcluster -d

    Refer to the Systemwalker Operation Manager Reference Guide for details on the mpsetcluster command.

  13. Delete unnecessary directories from the shared disk.

    Delete any unnecessary directories left over on the shared disk from the resources that were moved to the shared disk in "2.6 Moving Resources to the Shared Disk."
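
    For example (the directory name below is a hypothetical illustration; delete only the directories that were actually created for Systemwalker Operation Manager on the shared disk):

    # rm -r /disk1/FJSVMJS    # hypothetical leftover directory on the shared disk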

    Completing steps 1 through 13 releases Systemwalker Operation Manager from operation on the cluster system.

  14. Uninstall Systemwalker Operation Manager.

    Uninstall Systemwalker Operation Manager. Refer to the Systemwalker Operation Manager Installation Guide for details on how to uninstall Systemwalker Operation Manager.

To reinstall Systemwalker Operation Manager, first use the procedure above to uninstall Systemwalker Operation Manager, and then reinstall it in the cluster system.

To perform an upgrade installation, release Systemwalker Operation Manager from the cluster system by following steps 1 through 13 above, perform the upgrade installation, and then reapply Systemwalker Operation Manager to the cluster system, starting from "2.2.3 Canceling the settings for starting and stopping daemons automatically."

Note

While uninstalling Systemwalker Operation Manager using the procedure above, be sure to cancel the calendar automatic reflection settings using the "/opt/FJSVjmcal/bin/calsetcluster -d" command. Otherwise, the following symbolic link will remain and will need to be deleted manually:

  • /etc/rc3.d/S28JMCAL
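
If this file remains, it can be deleted as follows:

  # rm /etc/rc3.d/S28JMCAL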