Systemwalker Operation Manager  Cluster Setup Guide for UNIX
FUJITSU Software

6.2 Uninstalling Systemwalker Operation Manager from the Oracle Solaris Cluster System

This section describes how to uninstall Systemwalker Operation Manager from the Oracle Solaris Cluster system.

  1. Delete Systemwalker Operation Manager from the cluster system.
    Delete the following settings that control the starting and stopping of each Systemwalker Operation Manager daemon set up in Oracle Solaris Cluster.

    1. Delete the resources.

      /usr/cluster/bin/scswitch -n -j data service resource name
      /usr/cluster/bin/scrgadm -r -j data service resource name

      In the following example, the name of the data service resource is "OMGR_rs".

      /usr/cluster/bin/scswitch -n -j OMGR_rs
      /usr/cluster/bin/scrgadm -r -j OMGR_rs
    2. Delete the shared address resource.

      /usr/cluster/bin/scrgadm -r -j shared address resource name

      In the following example, the name of the shared address resource is "FJSV.host".

      /usr/cluster/bin/scrgadm -r -j FJSV.host
    3. Delete the resource group.

      /usr/cluster/bin/scrgadm -r -g resource group name

      In the following example, the name of the resource group is "OMGR_rg".

      /usr/cluster/bin/scrgadm -r -g OMGR_rg
    4. Delete the resource type.

      /usr/cluster/bin/scrgadm -r -t resource type name

      In the following example, the name of the resource type is "FJSV.OMGR".

      /usr/cluster/bin/scrgadm -r -t FJSV.OMGR
  2. Stop daemons not managed by the cluster system.
    On each node in the cluster system, use the poperationmgr command to stop all the Systemwalker Operation Manager daemons not managed by the cluster system. Refer to the Systemwalker Operation Manager Reference Guide for details on the poperationmgr command.
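    For example, assuming the default installation path (refer to the Reference Guide for the options the command accepts):

      # /opt/systemwalker/bin/poperationmgr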

  3. Delete symbolic links.
    Delete the symbolic links created for resources moved to the shared disk. Perform this operation on all nodes (active and standby).
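    For example, if the symbolic links "/var/spool/mjes" and "/etc/mjes" (listed in step 6 below) were created when the resources were moved to the shared disk, delete them as follows:

      # rm /var/spool/mjes
      # rm /etc/mjes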

  4. Copy resources back from the shared disk to the local disks.
    Move the resources that were relocated to the shared disk when the cluster system was created back to the local disks. Perform this operation on each active node.
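    For example, if the job execution control spool directory had been moved to a shared disk mounted at "/disk1" (a hypothetical mount point; the actual location is the one chosen in "2.6 Moving Resources to the Shared Disk"), it could be copied back as follows:

      # cp -rp /disk1/FJSVMJS/var/spool/mjes /var/opt/FJSVMJS/var/spool/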

  5. Copy resources to the standby node.
    Copy the information moved back to the local disk on the active node (step 4 above) over to the standby node. For an N:1 configuration, copy this information to the standby node from any one of the N active nodes.
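    For example, the spool directory could be transferred while preserving permissions with tar over ssh (the host name "standby1" is hypothetical):

      # cd /var/opt/FJSVMJS/var/spool
      # tar cf - mjes | ssh standby1 "cd /var/opt/FJSVMJS/var/spool && tar xf -"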

  6. Create symbolic links.
    For 1:1 active/standby and N:1 active/standby configurations, re-create the symbolic links that were deleted when the resources were moved to the shared disk.

    • "/var/spool/mjes" is a symbolic link to "/var/opt/FJSVMJS/var/spool/mjes".

    • "/etc/mjes" is a symbolic link to "/etc/opt/FJSVMJS/etc/mjes".

    Perform this operation on all nodes (active and standby).
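    The symbolic links above can be re-created as follows (assuming the link targets already hold the resources restored in steps 4 and 5):

      # ln -s /var/opt/FJSVMJS/var/spool/mjes /var/spool/mjes
      # ln -s /etc/opt/FJSVMJS/etc/mjes /etc/mjes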

  7. Restore the settings to start and stop daemons automatically.
    Return the automatic start/stop settings and the settings for the monitored process that were canceled in "2.2.3 Canceling the settings for starting and stopping daemons automatically" to the original state.

  8. Cancel the settings for automatic reflection.

    1. Use the following command to cancel the security information settings:

      mpaclcls -u
    2. Use the following command to cancel the settings for automatically reflecting calendar and service/application startup information:

      /opt/FJSVjmcal/bin/calsetcluster -d
  9. Delete cluster information.
    Execute the mpsetcluster command with the "-d" option on each node in the cluster system to delete the cluster information registered after Systemwalker Operation Manager was installed.

    # /opt/systemwalker/bin/mpsetcluster -d

    Refer to the Systemwalker Operation Manager Reference Guide for details on the mpsetcluster command.

  10. Delete unnecessary directories from the shared disk.
    Delete any unnecessary directories left over on the shared disk from the resources that were moved to the local disk. (Refer to "2.6 Moving Resources to the Shared Disk.")
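    For example, if the resources had been placed under a directory "OMGR" on a shared disk mounted at "/disk1" (both names are hypothetical; use the directory chosen in "2.6 Moving Resources to the Shared Disk"), it could be removed as follows:

      # rm -r /disk1/OMGR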

    At this stage, Systemwalker Operation Manager will have been released from its application to the cluster system.

  11. Uninstall Systemwalker Operation Manager.
    Uninstall Systemwalker Operation Manager. Refer to the Systemwalker Operation Manager Installation Guide for details on how to uninstall Systemwalker Operation Manager.

To reinstall Systemwalker Operation Manager, first use the procedure above to uninstall it, and then reinstall it on the cluster system.

To perform an upgrade installation, release Systemwalker Operation Manager from its application to the cluster system by following steps 1 through 10 above, perform the upgrade installation, and then reapply Systemwalker Operation Manager to the cluster system, starting from "2.2.3 Canceling the settings for starting and stopping daemons automatically."

Note

While uninstalling Systemwalker Operation Manager using the procedure above, be sure to cancel the calendar automatic reflection settings using the "/opt/FJSVjmcal/bin/calsetcluster -d" command. Otherwise, the following symbolic link will remain and will need to be deleted manually:

  • /etc/rc3.d/S28JMCAL