This section describes the procedure for installing Systemwalker Operation Manager on a cluster system.
New installations
The procedure for installing Systemwalker Operation Manager on a cluster system for the first time is as follows:
Determine the structure for the cluster system, and install the cluster system and Systemwalker Operation Manager on each node making up the cluster system.
Refer to "2.2 Installation."
For structures where a cluster configuration will be used only for the schedule server (with network jobs executed on an execution server), or structures where some Systemwalker Operation Manager server subsystems will be clustered (with network jobs submitted to non-clustered subsystems), use the jmmode command to set "continuous execution mode" on all the nodes making up the cluster system and also on the nodes that will execute network jobs (see the command example below). Also set a logical IP address for Job Execution Control.
Refer to "2.3 Settings for Submitting Network Jobs from Cluster System."
These definitions are not required if the schedule server and the execution server are both part of the same cluster system configuration.
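As a rough sketch, continuous execution mode might be set with the jmmode command as shown below. The operands and option shown here (continuous, check, and -sys for specifying a subsystem) are assumptions for illustration only; confirm the actual syntax in the Systemwalker Operation Manager Reference Guide before executing the command.

    # jmmode continuous            (set continuous execution mode; default subsystem 0)
    # jmmode continuous -sys 1     (set continuous execution mode for subsystem 1, if subsystems are used)
    # jmmode check                 (display the current operation mode)

Execute the command on every node making up the cluster system and on every node that will execute network jobs.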
Set up the Master Schedule Management function if it is to be used.
Refer to "2.4 Settings for Using the Master Schedule Management Function."
These definitions are not required if the Master Schedule Management function is not used.
Set the IP address required to operate Systemwalker Operation Manager with a cluster system.
Refer to "2.5 IP Address Settings."
Move the resources used by Systemwalker Operation Manager to the shared disk of the cluster system.
Refer to "2.6 Moving Resources to the Shared Disk."
Perform the settings required so that information is automatically reflected on both the active node and the standby node.
Refer to "2.7 Settings for Automatic Reflection."
Register Systemwalker Operation Manager with the cluster system.
Refer to "2.8 Registering Systemwalker Operation Manager with a Cluster System."
Standardize the information managed by each node across the active and standby nodes in the cluster system.
Refer to "2.9 Standardizing the Information Managed by Each Node."
Stop the Jobscheduler daemon to test a failover.
Refer to "2.10 Failover Testing."
To enable jobs to restart automatically when failover occurs (assuming that a cluster configuration is used for the entire Systemwalker Operation Manager server system), perform the settings required to restart jobs.
Refer to "2.11 Settings for Making Jobs Restart."
These definitions are not required if a cluster configuration is only used for the schedule server with the execution server running on a separate node.
Migrating to a cluster system
Refer to "New installations" to migrate Systemwalker Operation Manager from a standard system to a cluster system.
Note, however, that resources must be backed up using the mpbko command before migrating to a cluster system (see the example below).
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpbko command.
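For reference, a backup using the mpbko command might look like the following. The backup directory shown is an example only, and the -b option for specifying the backup destination directory is an assumption for illustration; check the Systemwalker Operation Manager Reference Guide for the options actually supported by your version.

    # mpbko -b /var/tmp/OMGRBack   (back up Systemwalker Operation Manager resources to /var/tmp/OMGRBack)

Take the backup on the existing server before starting the migration so that the resources can be restored if the migration has to be backed out.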
Upgrading Systemwalker Operation Manager on a cluster system
Use the following procedure to upgrade an installation of Systemwalker Operation Manager operating on a cluster system:
Remove Systemwalker Operation Manager from the cluster system.
Refer to "2.12 Uninstalling Systemwalker Operation Manager from a Cluster System."
Upgrade Systemwalker Operation Manager on all the nodes (active and standby) making up the cluster system.
Refer to the Systemwalker Operation Manager Upgrade Guide.
Apply Systemwalker Operation Manager to the cluster system again.
Perform the procedure described in "2.2.3 Canceling the settings for starting and stopping daemons automatically" and the subsequent procedures.
Information
When an upgrade installation is performed, the following settings are transferred from the previous installation and therefore do not need to be set again:
Subsystem environments
Continuous execution mode
IP address settings
The process monitoring targets are not transferred. Either perform the settings again, or transfer them by referring to the Systemwalker Operation Manager Upgrade Guide.
If node name definitions were specified in the Solaris version of Systemwalker Operation Manager V10.1 or earlier, or in the HP-UX version V10.0 or earlier, those definitions will be transferred. To operate using a logical IP address, set the logical IP address by referring to "2.3.2 Setting logical IP address for Job Execution Control."