Configure settings so that the following information is automatically reflected on the active and standby nodes:
Security information
Calendar information
  Day change time
  Calendar holiday information
  Schedule information, power schedule information, and completion monitoring information for SYSTEM_CALENDAR
Service/application startup information
  Application startup information
These settings are not required on execution servers that make up the cluster system.
Settings for automatically reflecting security information
This section describes the procedure for automatically reflecting security information.
Note
If access rights are granted to many users, cluster switching takes time.
As a guide, if more than 100 users are granted access rights, verify the time required for cluster switching in advance.
If switching takes longer than planned, consider one of the following measures:
Create a group for the users that require the access right, and grant the access right to the group rather than to individual users.
Delete unnecessary users.
[Solaris] [Linux] Review (extend) the monitoring start time in the monitoring script (60 seconds by default); see the example after the script paths below.
The following sample monitoring script is provided:
Solaris: /opt/FJSVJMCMN/etc/script/omgr_smonitor
Linux: /opt/FJSVJMCMN/etc/script/omgr_smonitor
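If the monitoring start time needs to be extended, the 60-second value is embedded in the script itself. Because the variable that holds it may differ between versions, it is safer to locate the value first rather than assume a name; for example:

# grep -n 60 /opt/FJSVJMCMN/etc/script/omgr_smonitor

Then extend the located value (for example, from 60 to 180 seconds) and verify the switching time again.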
Settings for 1:1 active/standby (without subsystems), 1:1 active/standby (with subsystems), N:1 active/standby, and cascading configurations
For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster
Use the following procedure to configure the settings on the active node. The sample command lines shown here are for the Solaris version.
Mount the shared disk. In the following example, the mount point of the shared disk is "/disk1".
# mount /disk1 (*1)
The command line for the Linux version is as follows (in this example, the device is "/dev/sdb1" and the mount point of the shared disk is "/disk1"):
# /bin/mount /dev/sdb1 /disk1
For N:1 active/standby configuration, create symbolic links to the shared disk.
# rm /var/opt/FJSVfwseo/JM 2> /dev/null
# ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
Set up cluster information by executing the following command:
# mpaclcls
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command:
# mpcssave
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk.
# umount /disk1 (*1)
The command line for the Linux version is as follows:
# /bin/umount /disk1
For N:1 active/standby configuration, perform steps 1 to 5 on each active node.
In this case, specify the shared disk of each active node.
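For reference, steps 1 to 5 can be collected into a short script that is run as root on each active node. This is a minimal sketch for the Solaris version, assuming that the Systemwalker Operation Manager commands are on the PATH and that "/disk1" can be mounted directly; replace the mount point and the symbolic link target with each active node's shared disk:

#!/bin/sh
# Reflect security information onto this active node's shared disk
# (N:1 active/standby configuration, Solaris version; run as root).
mount /disk1                                        # step 1: mount the shared disk
rm /var/opt/FJSVfwseo/JM 2> /dev/null               # step 2: point the JM path
ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM     #         at the shared disk
mpaclcls                                            # step 3: set up cluster information
mpcssave                                            # step 4: save security information to the shared disk
umount /disk1                                       # step 5: unmount the shared disk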
For MC/ServiceGuard
Use the following procedure to perform settings on the active node.
Mount the shared disk. In the following example, the device is "/dev/vg01" and the mount point of the shared disk is "/disk1".
# vgchange -a e /dev/vg01
# mount /disk1
For N:1 active/standby configuration, create symbolic links to the shared disk.
# rm /var/opt/FJSVfwseo/JM 2> /dev/null
# ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
Set up cluster information by executing the following command:
# mpaclcls
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command:
# mpcssave
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk.
# umount /disk1
# vgchange -a n /dev/vg01
For N:1 active/standby configuration, perform steps 1 to 5 on each active node.
In this case, specify the shared disk of each active node.
For HACMP
Use the following procedure to perform settings on the active node.
Mount the shared disk. In the following example, the volume name of the shared disk is "datavg1" and the mount point of the shared disk is "/disk1".
# varyonvg datavg1
# mount /disk1
Set up cluster information by executing the following command:
# mpaclcls
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command:
# mpcssave
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk.
# umount /disk1
# varyoffvg datavg1
Settings for 1:1 active/standby configuration (with subsystems and partial cluster operation)
Note
These settings should only be applied to subsystems that will be registered with a cluster system.
For PRIMECLUSTER (for Solaris or Linux version)
This section explains the setup procedure using an example where Subsystem 1 is used in cluster operation and "/disk1" is the mount point of the shared disk. The sample command lines shown here are for the Solaris version.
Configure the following settings on the active node.
Mount the shared disk.
# mount /disk1 (*1)
The command line for the Linux version is as follows (in this example, the device is "/dev/sdb1" and the mount point of the shared disk is "/disk1"):
# /bin/mount /dev/sdb1 /disk1
Execute the following command to set the cluster information:
# mpaclcls -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Execute the following command to apply the security information to files on the shared disk:
# mpcssave -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk.
# umount /disk1 (*1)
The command line for the Linux version is as follows:
# /bin/umount /disk1
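For reference, these steps can be collected into a short script. This is a minimal sketch for the Linux version, assuming the device "/dev/sdb1", the mount point "/disk1", and Subsystem 1 as the only subsystem registered with the cluster system, as in the example above:

#!/bin/sh
# Reflect security information for the clustered subsystem only
# (1:1 active/standby with partial cluster operation, Linux version; run as root).
SUBSYSTEM=1                      # subsystem number registered with the cluster system
/bin/mount /dev/sdb1 /disk1      # mount the shared disk
mpaclcls -s "$SUBSYSTEM"         # set up cluster information for the subsystem
mpcssave -s "$SUBSYSTEM"         # save its security information to the shared disk
/bin/umount /disk1               # unmount the shared disk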
For MC/ServiceGuard
In the example below, Subsystem 1 runs as a cluster system, the device is "/dev/vg01", and the mount point of the shared disk is "/disk1".
Use the following procedure to configure the settings on the active node.
Mount the shared disk.
# vgchange -a e /dev/vg01
# mount /disk1
Execute the following command to set the cluster information:
# mpaclcls -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Execute the following command to reflect the security information in the files on the shared disk:
# mpcssave -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk.
# umount /disk1
# vgchange -a n /dev/vg01
For HACMP
In the example below, Subsystem 1 runs as a cluster system, the volume name of the shared disk is "datavg1", and the mount point of the shared disk is "/disk1".
Use the following procedure to configure the settings on the active node.
Mount the shared disk.
# varyonvg datavg1
# mount /disk1
Execute the following command to set the cluster information:
# mpaclcls -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Execute the following command to reflect the security information in the files on the shared disk:
# mpcssave -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk.
# umount /disk1
# varyoffvg datavg1
Settings for dual node mutual standby configuration
The explanation provided here is based on an example dual node mutual standby configuration in which Node A operates Subsystem 1 using the shared disk mounted at "/disk1", and Node B operates Subsystem 2 using the shared disk mounted at "/disk2".
For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster
Mount the shared disk on Node A.
# mount /disk1 (*1)
The command line for the Linux version is as follows (in this example, the device is "/dev/sdb1" and the mount point of the shared disk is "/disk1"):
# /bin/mount /dev/sdb1 /disk1
Set up cluster information by executing the following command on Node A.
# mpaclcls -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command on Node A.
# mpcssave -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk on Node A.
# umount /disk1 (*1)
The command line for the Linux version is as follows:
# /bin/umount /disk1
Mount the shared disk on Node B.
# mount /disk2 (*1)
The command line for the Linux version is as follows (in this example, the device is "/dev/sdb2" and the mount point of the shared disk is "/disk2"):
# /bin/mount /dev/sdb2 /disk2
Set up cluster information by executing the following command on Node B.
# mpaclcls -s 2
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command on Node B.
# mpcssave -s 2
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk on Node B.
# umount /disk2 (*1)
The command line for the Linux version is as follows:
# /bin/umount /disk2
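For reference, the procedures for Node A and Node B differ only in the subsystem number and the shared disk, so the same sketch can be run on each node with that node's own values (a minimal sketch for the Solaris version; Node A uses Subsystem 1 and "/disk1", Node B uses Subsystem 2 and "/disk2"):

#!/bin/sh
# Run as root on each node with that node's own values
# (dual node mutual standby configuration, Solaris version).
SUBSYSTEM=1                   # 1 on Node A, 2 on Node B
DISK=/disk1                   # /disk1 on Node A, /disk2 on Node B
mount "$DISK"                 # mount this node's shared disk
mpaclcls -s "$SUBSYSTEM"      # set up cluster information
mpcssave -s "$SUBSYSTEM"      # save security information to the shared disk
umount "$DISK"                # unmount the shared disk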
For MC/ServiceGuard
Mount the shared disk on Node A.
# vgchange -a e /dev/vg01
# mount /disk1
Set up cluster information by executing the following command on Node A.
# mpaclcls -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command on Node A.
# mpcssave -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk on Node A.
# umount /disk1
# vgchange -a n /dev/vg01
Mount the shared disk on Node B.
# vgchange -a e /dev/vg02
# mount /disk2
Set up cluster information by executing the following command on Node B.
# mpaclcls -s 2
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command on Node B.
# mpcssave -s 2
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk on Node B.
# umount /disk2
# vgchange -a n /dev/vg02
For HACMP
Mount the shared disk on Node A.
# varyonvg datavg1
# mount /disk1
Set up cluster information by executing the following command on Node A.
# mpaclcls -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command on Node A.
# mpcssave -s 1
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk on Node A.
# umount /disk1
# varyoffvg datavg1
Mount the shared disk on Node B.
# varyonvg datavg2
# mount /disk2
Set up cluster information by executing the following command on Node B.
# mpaclcls -s 2
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpaclcls command.
Reflect security information in files on the shared disk by executing the following command on Node B.
# mpcssave -s 2
Refer to the Systemwalker Operation Manager Reference Guide for details on the mpcssave command.
Unmount the shared disk on Node B.
# umount /disk2
# varyoffvg datavg2
Settings for automatically reflecting calendar information and service/application startup information
The procedure for automatically reflecting calendar information and service/application startup information is described below.
1) Enable the automatic reflection function
Information
When Systemwalker Operation Manager is registered with a cluster system, the automatic reflection function uses the Systemwalker Operation Manager daemons to automatically synchronize calendar settings and service/application startup settings between the active node and the standby node.
Enable the automatic reflection function for calendar information and service/application startup information by executing the calsetcluster command on all nodes (active and standby) in the cluster system.
The following examples show how this command is executed.
For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster
[Example where all information is automatically reflected with a 1:1 active/standby configuration]
# /opt/FJSVjmcal/bin/calsetcluster -type s
[Example where all information is automatically reflected with an N:1 active/standby configuration]
# /opt/FJSVjmcal/bin/calsetcluster -type n
[Example where all information is automatically reflected with a dual node mutual standby configuration]
# /opt/FJSVjmcal/bin/calsetcluster -type e
[Example where all information is automatically reflected with a cascading configuration]
# /opt/FJSVjmcal/bin/calsetcluster -type c
For MC/ServiceGuard
[Example where all information is automatically reflected with a 1:1 active/standby configuration]
# /opt/FHPjmcal/bin/calsetcluster -type s
[Example where all information is automatically reflected with an N:1 active/standby configuration]
# /opt/FHPjmcal/bin/calsetcluster -type n
[Example where all information is automatically reflected with a dual node mutual standby configuration]
# /opt/FHPjmcal/bin/calsetcluster -type e
For HACMP
[Example where all information is automatically reflected with a 1:1 active/standby configuration]
# /opt/FAIXjmcal/bin/calsetcluster -type s
[Example where all information is automatically reflected with a dual node mutual standby configuration]
# /opt/FAIXjmcal/bin/calsetcluster -type e
Refer to the Systemwalker Operation Manager Reference Guide for details on the calsetcluster command.
2) Set up the automatic reflection hosts
Define the hosts where calendar information and service/application startup information will be reflected in the calcphost.def definition file on the active and standby nodes.
If a node uses multiple physical IP addresses and automatic reflection should be performed using an IP address (hereinafter, "specific IP address") that differs from the local host IP address (the address obtained by name resolution, for example from the hosts file), also perform steps 3 to 5 on each node that uses a "specific IP address". Steps 3 to 5 are not required when the local host IP address is used.
Open the calcphost.def definition file with a text editor such as vi.
For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster
/var/opt/FJSVjmcal/etc/calcphost.def
For MC/ServiceGuard
/opt/FHPjmcal/etc/calcphost.def
For HACMP
/opt/FAIXjmcal/etc/calcphost.def
Enter the physical IP addresses, or the host names corresponding to the physical IP addresses, of all the nodes in the cluster system. Make sure that the definition file includes the host name or physical IP address of the local node.
Also, specify the same content in the calcphost.def definition file on all nodes that make up the cluster system.
When a "specific IP address", rather than the local host IP address, is to be used as the automatic reflection destination, specify the physical IP address to be used as the "specific IP address" instead of a host name.
On the nodes that will use a "specific IP address", open the myhostip.def definition file with a text editor such as vi.
For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster
/var/opt/FJSVjmcal/etc/myhostip.def
For MC/ServiceGuard
/opt/FHPjmcal/etc/myhostip.def
For HACMP
/opt/FAIXjmcal/etc/myhostip.def
Enter the physical IP address that will be used as the "specific IP address".
XXX.XXX.XXX.XXX
Note: Use only alphanumeric characters, with "." or ":" as separators. Do not include any other characters, such as leading or trailing blank spaces or newline codes.
Set access rights in the myhostip.def definition file.
Set write permissions for administrators and read permissions for all users.
# chmod 644 /var/opt/FJSVjmcal/etc/myhostip.def
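For example, the address can be written without a trailing newline code by using printf rather than echo (the address is a documentation-only example, and the path shown is the PRIMECLUSTER one; use the path for your cluster software):

# printf '%s' 192.0.2.11 > /var/opt/FJSVjmcal/etc/myhostip.def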
To enable the contents of the definition file, start the calendar daemon.
Refer to the Systemwalker Operation Manager Reference Guide for details on the calcphost.def definition file.
Note
To apply automatic reflection to nodes that are already operating, first distribute policy to standardize the calendar information and service/application startup information on all the nodes. The information extracted by the policy is the information that will be subject to automatic reflection (either calendar information or service/application startup information or both).
Note
myhostip.def definition file
The physical IP address specified in the myhostip.def definition file must also exist in the calcphost.def definition file, which defines the calendar reflection destination hosts. If the IP address does not exist there, or the specification format is invalid, automatic reflection of calendar information will be disabled.
The myhostip.def definition file is not subject to backup and restore by Systemwalker Operation Manager. If necessary, manually perform backup or restore.
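For example, a manual backup can be taken with a simple copy (the path shown is the PRIMECLUSTER one, and the destination is an arbitrary example):

# cp -p /var/opt/FJSVjmcal/etc/myhostip.def /var/tmp/myhostip.def.bak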