Create the following state transition procedures to register Systemwalker Operation Manager with the PRIMECLUSTER system:
State transition procedures that control daemon behavior (required)
Monitoring scripts (required if there are state transition procedures that monitor daemons)
State transition procedures that monitor daemons (optional)
Create these procedures on both the active and standby nodes, and place them at the same location in each node (use the same directory path). Do not create these files on the shared disk. Be sure to set up execution privileges after creating these procedures.
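As a hypothetical illustration of the final step above, the snippet below grants execute permission to the procedure files. This is a sketch only: a temporary directory stands in for the real script directory on each node, and the file names mirror the samples described later in this section.

```shell
# Illustrative sketch only: a temp directory stands in for the directory
# that holds the procedures on each node (e.g. /opt/FJSVJMCMN/etc/script).
SCRIPT_DIR=$(mktemp -d)

# Stand-ins for the copied procedure and script files.
touch "$SCRIPT_DIR/OperationMGR.proc" "$SCRIPT_DIR/omgr_smonitor" "$SCRIPT_DIR/omgr_monitor.proc"

# Set up execution privileges after creating the procedures.
chmod u+x "$SCRIPT_DIR"/*
```

Run the same permission setup on both the active and standby nodes so that the files are identical in path and mode.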
Create the monitoring scripts and the state transition procedures that monitor daemons in order to monitor the daemons running on the active node and to use daemon termination as a trigger for cluster failover. These procedures do not need to be created if daemon termination will not be used as a trigger for cluster failover.
The following sections describe examples of how to create each of these procedures. Samples of each of these procedures are provided with this product. Use these samples by making copies and modifying the copies to match the environment.
Create a state transition procedure that controls the behavior of the Jobscheduler and Job Execution Control daemons.
The following sample state transition procedure to control daemon behavior is provided:
/opt/FJSVJMCMN/etc/script/OperationMGR.proc
The sample state transition procedure is for 1:1 active/standby (without subsystems) and cascading configurations.
The state transition procedure must be modified for 1:1 active/standby (with subsystems), 1:1 active/standby (with subsystems and partial cluster operation), N:1 active/standby, and dual node mutual standby configurations.
Note also that the sample state transition procedure mounts the shared disk as "/disk1". If necessary, modify the state transition procedure file so that the shared disk is referred to by the correct name.
Copy the sample file, and then modify it to match the cluster system operation. For 1:1 active/standby (without subsystems) and cascading configuration, this sample file can be used without modification, but should be backed up anyway.
The following example shows how to modify the state transition procedure.
Example showing how to modify state transition procedures for 1:1 active/standby (with subsystems) and 1:1 active/standby (with subsystems and partial cluster operation) configurations
In a system where multiple subsystems are running, start/stop the daemons of Jobscheduler and Job Execution Control on each subsystem.
The examples below show Subsystem 0 and Subsystem 1 operating in a 1:1 active/standby configuration (with subsystems), and a 1:1 active/standby configuration (with subsystems and partial cluster operation) in which Subsystem 1 is involved in cluster operation but Subsystem 0 is not.
Change the "SUBSYSTEM" variable to "PLU_SUBSYSTEM" and its value to the subsystem numbers. In the case of a 1:1 active/standby configuration (with subsystems and partial cluster operation), modify the procedure so that only the numbers of subsystems involved in cluster operation are specified.
[Before]
SUBSYSTEM="0"
[After]
1:1 active/standby configuration (with subsystems):
PLU_SUBSYSTEM="0 1"
1:1 active/standby configuration (with subsystems and partial cluster operation):
PLU_SUBSYSTEM="1"
Add the for, do, and done statements to the end of START-RUN-AFTER to start Jobscheduler and Job Execution Control on each subsystem.
[Before]
# Starts Job Scheduler & Job Execution Control
# - 1:1 standby, N:1 standby, 2 nodes mutual standby
sh /etc/opt/FJSVMJS/etc/rc3.d/S99MJS -sys $SUBSYSTEM
sh /opt/FJSVJOBSC/etc/rc3.d/S99JOBSCH -sys $SUBSYSTEM
;;
[After]
# Starts Job Scheduler & Job Execution Control
# - 1:1 standby, N:1 standby, 2 nodes mutual standby
for SUBSYSTEM in $PLU_SUBSYSTEM
do
    sh /etc/opt/FJSVMJS/etc/rc3.d/S99MJS -sys $SUBSYSTEM
    sh /opt/FJSVJOBSC/etc/rc3.d/S99JOBSCH -sys $SUBSYSTEM
done
;;
Add the for and do statements to the top of STOP-RUN-BEFORE to stop Jobscheduler and Job Execution Control on each subsystem.
[Before]
'BEFORE')
    # Job Execution Control Server
    if [ -x /usr/bin/zonename ]
[After]
'BEFORE')
    # Job Execution Control Server
    for SUBSYSTEM in $PLU_SUBSYSTEM
    do
    if [ -x /usr/bin/zonename ]
Add the done statement to the end of STOP-RUN-BEFORE.
[Before]
done
;;
'AFTER')
[After]
    done
done
;;
'AFTER')
In the case of a 1:1 active/standby configuration (with subsystems and partial cluster operation), add the for, do and done statements to the end of START-RUN-AFTER to update security information on each subsystem automatically.
[Before]
# - 1:1 standby, N:1 standby
/opt/FJSVfwseo/bin/mpaclcls
sh /opt/FJSVfwseo/bin/jmacltrn.sh
[After]
# - 1:1 standby, N:1 standby
for SUBSYSTEM in $PLU_SUBSYSTEM
do
    /opt/FJSVfwseo/bin/mpaclcls -s $SUBSYSTEM
    sh /opt/FJSVfwseo/bin/jmacltrn.sh $SUBSYSTEM
done
Example of how to modify the state transition procedure for N:1 active/standby configurations
For START-RUN-AFTER, remove the comments for the "Make symbolic links. (if N:1 standby)" section. (That is, remove the "#" from the lines of code.)
[Before]
# Make symbolic links. (if N:1 standby)
# ACL Manager
#if [ ! "(" -h "/var/opt/FJSVfwseo/JM" -o -f "/var/opt/FJSVfwseo/JM" ")" ]
#then
#    ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
#fi
# Job Scheduler
#if [ ! "(" -h "/var/opt/FJSVJOBSC" -o -f "/var/opt/FJSVJOBSC" ")" ]
#then
#    ln -s /disk1/FJSVJOBSC /var/opt/FJSVJOBSC
#fi
# Job Execution Control
#if [ ! "(" -h "/var/spool/mjes" -o -f "/var/spool/mjes" ")" ]
#then
#    ln -s /disk1/FJSVMJS/var/spool/mjes /var/spool/mjes
#fi
#if [ ! "(" -h "/etc/mjes" -o -f "/etc/mjes" ")" ]
#then
#    ln -s /disk1/FJSVMJS/etc/mjes /etc/mjes
#fi
# Calendar
#if [ ! "(" -h "/var/opt/FJSVjmcal/post" -o -f "/var/opt/FJSVjmcal/post" ")" ]
#then
#    ln -s /disk1/FJSVjmcal/post /var/opt/FJSVjmcal/post
#fi
# Stem
#if [ ! "(" -h "/var/opt/FJSVstem" -o -f "/var/opt/FJSVstem" ")" ]
#then
#    ln -s /disk1/FJSVstem /var/opt/FJSVstem
#fi
# - 1:1 standby, N:1 standby
[After]
# Make symbolic links. (if N:1 standby)
# ACL Manager
if [ ! "(" -h "/var/opt/FJSVfwseo/JM" -o -f "/var/opt/FJSVfwseo/JM" ")" ]
then
    ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
fi
# Job Scheduler
if [ ! "(" -h "/var/opt/FJSVJOBSC" -o -f "/var/opt/FJSVJOBSC" ")" ]
then
    ln -s /disk1/FJSVJOBSC /var/opt/FJSVJOBSC
fi
# Job Execution Control
if [ ! "(" -h "/var/spool/mjes" -o -f "/var/spool/mjes" ")" ]
then
    ln -s /disk1/FJSVMJS/var/spool/mjes /var/spool/mjes
fi
if [ ! "(" -h "/etc/mjes" -o -f "/etc/mjes" ")" ]
then
    ln -s /disk1/FJSVMJS/etc/mjes /etc/mjes
fi
# Calendar
if [ ! "(" -h "/var/opt/FJSVjmcal/post" -o -f "/var/opt/FJSVjmcal/post" ")" ]
then
    ln -s /disk1/FJSVjmcal/post /var/opt/FJSVjmcal/post
fi
# Stem (*1)
if [ ! "(" -h "/var/opt/FJSVstem" -o -f "/var/opt/FJSVstem" ")" ]
then
    ln -s /disk1/FJSVstem /var/opt/FJSVstem
fi
# - 1:1 standby, N:1 standby
*1: Remove this comment only if the Master Schedule Management function is enabled.
For STOP-RUN-AFTER, remove the comments for the "remove symbolic links. (if N:1 standby)" section. (That is, remove the "#" from the lines of code.)
[Before]
# remove symbolic links. (if N:1 standby)
# Job Scheduler
#if [ -h "/var/opt/FJSVJOBSC" ]
#then
#    rm /var/opt/FJSVJOBSC
#fi
# Job Execution Control
#if [ -h "/var/spool/mjes" ]
#then
#    rm /var/spool/mjes
#fi
#if [ -h "/etc/mjes" ]
#then
#    rm /etc/mjes
#fi
# ACL Manager
#/opt/FJSVfwseo/bin/mpaclcls -u
#if [ -h "/var/opt/FJSVfwseo/JM" ]
#then
#    rm /var/opt/FJSVfwseo/JM
#fi
# Calendar
#if [ -h "/var/opt/FJSVjmcal/post" ]
#then
#    rm /var/opt/FJSVjmcal/post
#fi
# Stem
#if [ -h "/var/opt/FJSVstem" ]
#then
#    rm /var/opt/FJSVstem
#fi
;;
[After]
# remove symbolic links. (if N:1 standby)
# Job Scheduler
if [ -h "/var/opt/FJSVJOBSC" ]
then
    rm /var/opt/FJSVJOBSC
fi
# Job Execution Control
if [ -h "/var/spool/mjes" ]
then
    rm /var/spool/mjes
fi
if [ -h "/etc/mjes" ]
then
    rm /etc/mjes
fi
# ACL Manager
/opt/FJSVfwseo/bin/mpaclcls -u
if [ -h "/var/opt/FJSVfwseo/JM" ]
then
    rm /var/opt/FJSVfwseo/JM
fi
# Calendar
if [ -h "/var/opt/FJSVjmcal/post" ]
then
    rm /var/opt/FJSVjmcal/post
fi
# Stem (*1)
if [ -h "/var/opt/FJSVstem" ]
then
    rm /var/opt/FJSVstem
fi
;;
*1: Remove this comment only if the Master Schedule Management function is enabled.
Prepare N copies of the state transition procedure file (one for each active node), assigning a unique name to each copy. Change the directory ("/disk1" in the example) where symbolic links will be created so that it matches the shared disk of each active node.
Place each state transition procedure file on its respective active node, and then copy all of the state transition procedure files to the standby node, storing all N files in the same directory path as was used to store the files on the active nodes.
The following example shows how to place these files when there are three active nodes:
Active node1: /opt/FJSVJMCMN/etc/script/OperationMGR1.proc
Active node2: /opt/FJSVJMCMN/etc/script/OperationMGR2.proc
Active node3: /opt/FJSVJMCMN/etc/script/OperationMGR3.proc
Standby node:
/opt/FJSVJMCMN/etc/script/OperationMGR1.proc
/opt/FJSVJMCMN/etc/script/OperationMGR2.proc
/opt/FJSVJMCMN/etc/script/OperationMGR3.proc
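Because the N copies differ only in the shared-disk path, they can be generated from the sample with sed. The sketch below is illustrative only: a one-line stand-in file replaces the real sample procedure, and the mount points /disk1 to /disk3 are assumptions for this example.

```shell
# Hypothetical sketch: generate one procedure file per active node,
# rewriting the shared-disk mount point in each copy.
WORK=$(mktemp -d)

# One-line stand-in for the sample state transition procedure.
printf 'ln -s /disk1/FJSVJOBSC /var/opt/FJSVJOBSC\n' > "$WORK/OperationMGR.proc"

for N in 1 2 3
do
    # Replace the shared-disk path and give each copy a unique name.
    sed "s|/disk1|/disk$N|g" "$WORK/OperationMGR.proc" > "$WORK/OperationMGR$N.proc"
    chmod u+x "$WORK/OperationMGR$N.proc"
done
```

The same pattern applies to any number of active nodes; only the loop range and the mount-point naming convention change.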
Example of how to modify the state transition procedure for dual node mutual standby configuration
Change the value of the "SUBSYSTEM" variable in the state transition procedure file to "1".
[Before]
SUBSYSTEM="0"
[After]
SUBSYSTEM="1"
Modify the "- 1:1 standby, N:1 standby" section for START-RUN-AFTER so that it matches dual node mutual standby configuration. (Change the positions of "#".)
[Before]
# - 1:1 standby, N:1 standby
/opt/FJSVfwseo/bin/mpaclcls
sh /opt/FJSVfwseo/bin/jmacltrn.sh
# - 2 nodes mutual standby
#/opt/FJSVfwseo/bin/mpaclcls -s $SUBSYSTEM
#sh /opt/FJSVfwseo/bin/jmacltrn.sh $SUBSYSTEM
# Starts Job Scheduler & Job Execution Control
[After]
# - 1:1 standby, N:1 standby
#/opt/FJSVfwseo/bin/mpaclcls
#sh /opt/FJSVfwseo/bin/jmacltrn.sh
# - 2 nodes mutual standby
/opt/FJSVfwseo/bin/mpaclcls -s $SUBSYSTEM
sh /opt/FJSVfwseo/bin/jmacltrn.sh $SUBSYSTEM
# Starts Job Scheduler & Job Execution Control
Prepare another state transition procedure file with the same (modified) contents, rename the file, and change the value of the "SUBSYSTEM" variable to "2".
[Before]
SUBSYSTEM="1"
[After]
SUBSYSTEM="2"
Place the two state transition procedure files on the active and standby nodes, using the same directory path in each case.
The following example shows how to place these files:
Active node1:/opt/FJSVJMCMN/etc/script/OperationMGR1.proc (Standby node2)/opt/FJSVJMCMN/etc/script/OperationMGR2.proc
Active node2:/opt/FJSVJMCMN/etc/script/OperationMGR2.proc (Standby node1)/opt/FJSVJMCMN/etc/script/OperationMGR1.proc
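Deriving the second procedure file from the first can likewise be scripted. In this hypothetical sketch, a one-line stand-in replaces the real (already modified) procedure file, and the file names are illustrative.

```shell
# Hypothetical sketch: create the second procedure file by changing
# the SUBSYSTEM value in a copy of the first one.
WORK=$(mktemp -d)

# Stand-in for the procedure file already modified for Subsystem 1.
printf 'SUBSYSTEM="1"\n' > "$WORK/OperationMGR1.proc"

# The second copy differs only in its subsystem number.
sed 's/SUBSYSTEM="1"/SUBSYSTEM="2"/' "$WORK/OperationMGR1.proc" > "$WORK/OperationMGR2.proc"
chmod u+x "$WORK"/OperationMGR[12].proc
```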
Create monitoring scripts that will be called by the state transition procedure that monitors the daemons.
The following sample monitoring script is provided:
/opt/FJSVJMCMN/etc/script/omgr_smonitor
The sample monitoring script is for 1:1 active/standby (without subsystems) and cascading configurations. It must be modified for 1:1 active/standby (with subsystems), 1:1 active/standby (with subsystems and partial cluster operation), N:1 active/standby and dual node mutual standby configurations.
Copy the sample file, and then modify it to match the cluster system operation. For 1:1 active/standby configuration (without subsystems) and cascading configuration, this sample file can be used without modification, but should be backed up anyway.
The following example shows how to modify the monitoring script.
Example showing how to modify the monitoring script for 1:1 active/standby (with subsystems) and 1:1 active/standby (with subsystems and partial cluster operation) configurations
In a system where multiple subsystems are running, perform daemon monitoring for each subsystem. The examples below show Subsystem 0 and Subsystem 1 operating in a 1:1 active/standby configuration (with subsystems), and a 1:1 active/standby configuration (with subsystems and partial cluster operation) in which Subsystem 1 is involved in cluster operation but Subsystem 0 is not.
Change the "SUBSYSTEM" variable to "PLU_SUBSYSTEM" and its value to the subsystem numbers. In the case of a 1:1 active/standby configuration (with subsystems and partial cluster operation), modify the script so that only the numbers of subsystems involved in cluster operation are specified.
[Before]
SUBSYSTEM="0"
[After]
1:1 active/standby configuration (with subsystems):
PLU_SUBSYSTEM="0 1"
1:1 active/standby configuration (with subsystems and partial cluster operation):
PLU_SUBSYSTEM="1"
Add the for and do statements immediately after the "while do" lines to monitor the daemons on each subsystem.
[Before]
while /bin/true
do
    if [ -x /usr/bin/zonename ]
[After]
while /bin/true
do
    for SUBSYSTEM in $PLU_SUBSYSTEM
    do
    if [ -x /usr/bin/zonename ]
Add the done statement before the "sleep 10" line.
[Before]
    sleep 10
done
[After]
    done
    sleep 10
done
Example of how to modify the monitoring script for N:1 active/standby configuration
Prepare N copies of the monitoring script (one for each active node), assigning a unique name to each copy.
Place each monitoring script on its respective active node, and then copy all the monitoring scripts to the standby node, storing all N files in the same directory path as was used to store the files on the active nodes.
The following example shows how to place these files when there are three active nodes:
Active node1: /opt/FJSVJMCMN/etc/script/omgr_smonitor1
Active node2: /opt/FJSVJMCMN/etc/script/omgr_smonitor2
Active node3: /opt/FJSVJMCMN/etc/script/omgr_smonitor3
Standby node:
/opt/FJSVJMCMN/etc/script/omgr_smonitor1
/opt/FJSVJMCMN/etc/script/omgr_smonitor2
/opt/FJSVJMCMN/etc/script/omgr_smonitor3
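The copy step can be mimicked locally as below. This is an illustrative sketch only: two temporary directories stand in for the active and standby hosts, and cp -p keeps the execute permission on the copies.

```shell
# Hypothetical sketch: the same directory path must hold the scripts on
# every node; temp directories stand in for the active and standby nodes.
ACTIVE=$(mktemp -d)
STANDBY=$(mktemp -d)

for N in 1 2 3
do
    # Stand-in for the per-node monitoring script.
    touch "$ACTIVE/omgr_smonitor$N"
    chmod u+x "$ACTIVE/omgr_smonitor$N"
    # Copy to the standby location, preserving mode and timestamps.
    cp -p "$ACTIVE/omgr_smonitor$N" "$STANDBY/"
done
```

In a real cluster the transfer would go over the network (for example with scp or rcp) to the same directory path on the standby node.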
Example of how to modify the monitoring script for dual node mutual standby configuration
Change the value of the "SUBSYSTEM" variable to "1".
[Before]
SUBSYSTEM="0"
[After]
SUBSYSTEM="1"
Prepare another monitoring script file with the same (modified) contents, rename the file, and change the value of the "SUBSYSTEM" variable to "2".
[Before]
SUBSYSTEM="1"
[After]
SUBSYSTEM="2"
Place the two monitoring scripts on the active and standby nodes, using the same directory path in each case.
The following example shows how to place these files:
Active node1:/opt/FJSVJMCMN/etc/script/omgr_smonitor1 (Standby node2)/opt/FJSVJMCMN/etc/script/omgr_smonitor2
Active node2:/opt/FJSVJMCMN/etc/script/omgr_smonitor2 (Standby node1)/opt/FJSVJMCMN/etc/script/omgr_smonitor1
Create a state transition procedure that monitors Jobscheduler and Job Execution Control daemons.
The following sample state transition procedure to monitor daemons is provided:
/opt/FJSVJMCMN/etc/script/omgr_monitor.proc
The sample state transition procedure is for 1:1 active/standby configuration and cascading configuration. This state transition procedure must be modified for N:1 active/standby configuration and dual node mutual standby configuration.
Copy the sample file, and then modify it to match the cluster system operation. For 1:1 active/standby and cascading configuration, this sample file can be used without modification, but should be backed up anyway.
The following example shows how to modify the state transition procedure.
Example of how to modify the state transition procedure for N:1 active/standby configuration
Create N copies of the state transition procedure file that monitors daemons (one for each active node), assigning a unique name to each copy. In each copy, change the monitoring script file name specified in the procedure to the name of the corresponding file created in "4.1.1.2 Creating monitoring scripts", and change any other information to match the environment.
The following example shows how to make changes when there are three nodes:
State transition procedure that monitors daemons: omgr_monitor1.proc
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor1 -c 0
State transition procedure that monitors daemons: omgr_monitor2.proc
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor2 -c 0
State transition procedure that monitors daemons: omgr_monitor3.proc
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor3 -c 0
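Rewriting the monitoring-script reference in each copy can also be scripted with sed. The sketch below is illustrative only, with a one-line stand-in for the sample daemon-monitoring procedure.

```shell
# Hypothetical sketch: point each copy of the procedure at its own
# monitoring script by appending the node number to the script name.
WORK=$(mktemp -d)

# One-line stand-in for the sample daemon-monitoring procedure.
printf '%s\n' '$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor -c 0' > "$WORK/omgr_monitor.proc"

for N in 1 2 3
do
    # Each copy references omgr_smonitor1, omgr_smonitor2, or omgr_smonitor3.
    sed "s|omgr_smonitor|omgr_smonitor$N|" "$WORK/omgr_monitor.proc" > "$WORK/omgr_monitor$N.proc"
done
```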
Place each state transition procedure file on its respective active node, and then copy all of the state transition procedure files to the standby node, storing all N files in the same directory path as was used to store the files on the active nodes.
The following example shows how to place these files when there are three active nodes:
Active node1: /opt/FJSVJMCMN/etc/script/omgr_monitor1.proc
Active node2: /opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
Active node3: /opt/FJSVJMCMN/etc/script/omgr_monitor3.proc
Standby node:
/opt/FJSVJMCMN/etc/script/omgr_monitor1.proc
/opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
/opt/FJSVJMCMN/etc/script/omgr_monitor3.proc
Example of how to modify the state transition procedure for dual node mutual standby configuration
Create two copies of the state transition procedure file that monitors daemons, assigning a unique name to each copy. Change the monitoring script file name specified in each state transition procedure to the name of the corresponding file created in "4.1.1.2 Creating monitoring scripts".
Change the monitoring script file name in one of the state transition procedures.
[Before]
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor -c 0
[After]
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor1 -c 0
Change the monitoring script file name in the other state transition procedure.
[Before]
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor -c 0
[After]
$CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor2 -c 0
Place the two state transition procedure files on the active and standby nodes, using the same directory path in each case.
The following example shows how to place these files:
Active node1:/opt/FJSVJMCMN/etc/script/omgr_monitor1.proc (Standby node2)/opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
Active node2:/opt/FJSVJMCMN/etc/script/omgr_monitor2.proc (Standby node1)/opt/FJSVJMCMN/etc/script/omgr_monitor1.proc