This section provides notes on operating Systemwalker Operation Manager on a cluster system.
Jobs that cannot be submitted
The following jobs and job nets cannot be run on a cluster system:
Network jobs submitted from an active node to a standby node in a cluster configuration
It is not possible to submit network jobs from an active node to a standby node when the entire Systemwalker Operation Manager server system is in a cluster configuration, as described in "1.3.1 Using a cluster configuration for the entire Systemwalker Operation Manager server system" and "1.3.2 Using a cluster configuration for the schedule server only."
When only some Systemwalker Operation Manager server subsystems are organized into a cluster configuration, as described in "1.3.3 Using a cluster configuration for some Systemwalker Operation Manager server subsystems," network jobs can be submitted to those subsystems not in a cluster configuration, even if those subsystems exist on the standby node.
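The rule above can be summarized in a short sketch. The following Python function is purely illustrative: its name and parameters are assumptions made for this note, not part of any Systemwalker Operation Manager interface.

```python
# Illustrative only: these names are not part of the Systemwalker
# Operation Manager API.

def can_submit_network_job(target_is_standby_node: bool,
                           entire_server_clustered: bool,
                           target_subsystem_clustered: bool) -> bool:
    """Return True if a network job may target the given node."""
    if not target_is_standby_node:
        # Active or non-clustered nodes can always receive network jobs.
        return True
    if entire_server_clustered:
        # Whole-server cluster (sections 1.3.1/1.3.2): the standby node
        # cannot accept network jobs.
        return False
    # Partial cluster operation (section 1.3.3): only subsystems outside
    # the cluster configuration can accept jobs on the standby node.
    return not target_subsystem_clustered
```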
Network jobs submitted to a lower version when multiple subsystems are operating
If a 1:1 active/standby (with subsystems), a 1:1 active/standby (with subsystems and partial cluster operation), or a dual node mutual standby configuration is used, network jobs cannot be submitted to the following versions:
SystemWalker/OperationMGR V5.0L30 or earlier for Windows
SystemWalker/OperationMGR 5.0 or earlier for Solaris/HP-UX/AIX
SystemWalker/OperationMGR 5.2 for Linux
Refer to the Systemwalker Operation Manager Technical Guide for details on the range of jobs that can be submitted.
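As a hedged illustration only, the restriction can be captured as a lookup table. The platform keys and version strings come from the list above; the helper itself is hypothetical and not a Systemwalker interface.

```python
# Hypothetical lookup of destination versions that cannot receive network
# jobs when a subsystem-based or dual node mutual standby cluster
# configuration is used. Not a real Systemwalker interface.

RESTRICTED_DESTINATIONS = {
    # platform: restricted version noted in the list above
    "Windows": "V5.0L30",   # V5.0L30 or earlier
    "Solaris": "5.0",       # 5.0 or earlier
    "HP-UX": "5.0",         # 5.0 or earlier
    "AIX": "5.0",           # 5.0 or earlier
    "Linux": "5.2",
}

def destination_restricted(platform: str, version: str) -> bool:
    """True if the destination platform/version pair appears in the
    restricted list. Real version ordering is product-specific, so this
    sketch only checks for an exact match."""
    return RESTRICTED_DESTINATIONS.get(platform) == version
```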
Execution attributes for registering job nets
When registering user resources required for running batch jobs (such as shell scripts, executable files and data) on the shared disk, register the execution attribute of the job net as the Job Execution Control attribute. Do not place user resources on the shared disk if the backward version compatibility (earlier/standard) attribute is to be used.
If job nets with the backward version compatibility (earlier/standard) attribute are registered and an error occurs on the node where a job is executing, the batch job may continue to access resources on the shared disk, preventing the shared disk from being released and preventing the active node from failing over to the standby node.
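To make the constraint concrete, here is a minimal sketch assuming a hypothetical shared disk mount point and attribute labels; neither is an actual product setting.

```python
# Illustrative check only: the attribute labels and the shared-disk mount
# point are assumptions for this sketch, not actual Systemwalker settings.

SHARED_DISK_MOUNT = "/mnt/shared"  # hypothetical shared disk mount point

def placement_ok(execution_attribute: str, resource_path: str) -> bool:
    """User resources on the shared disk require the Job Execution Control
    attribute; backward version compatibility attributes must keep
    resources off the shared disk so failover can release it."""
    on_shared_disk = resource_path.startswith(SHARED_DISK_MOUNT)
    if not on_shared_disk:
        return True
    return execution_attribute == "Job Execution Control"

# Example: a shell script on the shared disk under the backward version
# compatibility attribute would block failover, so this returns False.
print(placement_ok("backward version compatibility", "/mnt/shared/jobs/run.sh"))
```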
Notes on when servers running Systemwalker Operation Manager coexist with Operation Management Servers running Systemwalker Centric Manager on cluster systems
The Event Monitoring function and the Action Management function fail over together with Systemwalker Centric Manager. To use the following functions, Systemwalker Centric Manager and Systemwalker Operation Manager must run on the same node, so register them with the same cluster application (*1).
Using the Event Monitoring function and Action Management function from Systemwalker Operation Manager clients
By linking with Systemwalker Centric Manager, monitoring abended job nets, and automatically changing a job net's status to "resolved" when the abended job net is restarted or confirmed
By linking with Systemwalker Centric Manager, displaying the Monitor Job Net window for an abended job net directly from the event list on the Systemwalker Centric Manager monitoring screen
*1: The term "cluster application" varies from cluster system to cluster system, so interpret it according to the cluster system in use, as follows:
PRIMECLUSTER: cluster application
Oracle Solaris Cluster: resource group
MC/ServiceGuard: package
HACMP: resource group
Notes on defining a cluster system as a trusted host
When defining a cluster system as a trusted host, define the logical IP address specified in the Cluster Settings tab of the Define Operating Information window.
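A minimal sketch of this check follows, using placeholder addresses; the logical IP would come from the Cluster Settings tab, and none of these values or names are real configuration interfaces.

```python
# Placeholder values only: when registering a clustered server as a
# trusted host, the definition must use the cluster's logical IP
# (from the Cluster Settings tab), never a physical node address.

LOGICAL_IP = "192.0.2.10"                          # hypothetical logical IP
PHYSICAL_NODE_IPS = {"192.0.2.11", "192.0.2.12"}   # hypothetical node IPs

def trusted_host_entry_valid(address: str) -> bool:
    """A trusted-host entry for the cluster must name the logical IP."""
    return address == LOGICAL_IP and address not in PHYSICAL_NODE_IPS
```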
Submitting network jobs from the cluster system configuration's schedule server
When submitting a network job from the cluster system configuration's schedule server, define the logical IP address of the schedule server, according to the communications environment of the execution server, as follows:
When the execution server is in an IPv6 single stack environment:
IPv6 address
When the execution server is in an IPv4 single stack environment or an IPv4/IPv6 dual stack environment:
IPv4 address
Note that you cannot operate an execution server in an IPv6 single stack environment together with an execution server in an IPv4 single stack environment or an IPv4/IPv6 dual stack environment. Refer to the following table for whether operation is possible when execution servers with different communications environments are mixed, and for the logical IP address of the schedule server to define in each case.
| Combination of execution server communications environments | Operation | Logical IP address of the schedule server |
|---|---|---|
| IPv6 single + IPv4 single | N | - |
| IPv6 single + IPv4/IPv6 dual | N | - |
| IPv6 single + IPv4 single + IPv4/IPv6 dual | N | - |
| IPv4 single + IPv4/IPv6 dual | Y | IPv4 address |

Y: Operation allowed / N: Operation not allowed
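The decision in the table can be expressed as a short Python sketch; the function name and environment labels are assumptions for illustration, not a Systemwalker interface.

```python
from __future__ import annotations

# Encodes the decision described in the table above. The labels and the
# function are illustrative; they are not a Systemwalker interface.

def schedule_server_address(envs: set[str]) -> str | None:
    """envs: subset of {"IPv6 single", "IPv4 single", "IPv4/IPv6 dual"}.
    Returns the address family to define for the schedule server's
    logical IP, or None when the combination is not allowed."""
    if not envs:
        return None                # no execution servers given
    if "IPv6 single" in envs:
        if len(envs) > 1:
            return None            # IPv6 single cannot mix with others (N)
        return "IPv6 address"      # IPv6 single stack only
    return "IPv4 address"          # IPv4 single and/or IPv4/IPv6 dual (Y)

# Example: IPv4 single stack mixed with dual stack is allowed.
print(schedule_server_address({"IPv4 single", "IPv4/IPv6 dual"}))  # IPv4 address
```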