Systemwalker Operation Manager  Cluster Setup Guide for UNIX
FUJITSU Software

5.1.1 Creating state transition procedures

Create the following state transition procedures to register Systemwalker Operation Manager with the PRIMECLUSTER system:

Create these procedures on both the active and standby nodes, and place them in the same location (the same directory path) on each node. Do not create these files on the shared disk. Be sure to set execution privileges after creating these procedures.

To monitor the daemons running on the active node and use daemon termination as a trigger for cluster failover, create monitoring scripts and state transition procedures that monitor the daemons. These do not need to be created if daemon termination will not be used as a trigger for cluster failover.

The following sections describe examples of how to create each of these procedures. Samples of each of these procedures are provided with this product. Use these samples by making copies and modifying the copies to match the environment.

5.1.1.1 Creating state transition procedures that control daemon behavior

Create a state transition procedure to control the behavior of the Jobscheduler and Job Execution Control daemons.

The following sample state transition procedure to control daemon behavior is provided:

/opt/FJSVJMCMN/etc/script/OperationMGR.proc

The sample state transition procedure is for 1:1 active/standby (without subsystems) and cascading configurations. The state transition procedure must be modified for 1:1 active/standby (with subsystems), 1:1 active/standby (with subsystems and partial cluster operation), N:1 active/standby, and dual node mutual standby configurations.

This sample state transition procedure assumes that the name of the shared disk is "/disk1". If necessary, change this specification to match the actual name of the shared disk.

Copy the sample file, and then modify it to match the cluster system operation. For 1:1 active/standby (without subsystems) and cascading configurations, this sample file can be used without modification, but should be backed up anyway.
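As a sketch, the copy-and-backup step might look like the following shell fragment. The helper name and the ".org" backup suffix are illustrative only, not product conventions:

```shell
#!/bin/sh
# Illustrative helper (not part of the product): back up a shipped sample
# file and make the working copy executable, as required above.
prepare_proc() {
    sample=$1
    cp -p "$sample" "$sample.org"   # keep an untouched backup of the sample
    chmod u+x "$sample"             # execution privileges are mandatory
}

# On a real node the argument would be the documented sample path, e.g.:
# prepare_proc /opt/FJSVJMCMN/etc/script/OperationMGR.proc
```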

The following examples show how to modify the state transition procedure.

Example showing how to modify state transition procedures for 1:1 active/standby (with subsystems) and 1:1 active/standby (with subsystems and partial cluster operation) configurations

In a system where multiple subsystems are running, start/stop the daemons of Jobscheduler and Job Execution Control on each subsystem.

The examples below show Subsystem 0 and Subsystem 1 operating in a 1:1 active/standby configuration (with subsystems), and a 1:1 active/standby configuration (with subsystems and partial cluster operation) in which Subsystem 1 is involved in cluster operation but Subsystem 0 is not:

  1. Change the "SUBSYSTEM" variable to "PLU_SUBSYSTEM" and its value to the subsystem numbers. In the case of a 1:1 active/standby configuration (with subsystems and partial cluster operation), modify the procedure so that only the numbers of subsystems involved in cluster operation are specified.

    [Before]

    SUBSYSTEM="0"

    [After]

    1:1 active/standby configuration (with subsystems):

    PLU_SUBSYSTEM="0 1"

    1:1 active/standby configuration (with subsystems and partial cluster operation):

    PLU_SUBSYSTEM="1"
  2. Add the for, do, and done statements to the end of START-RUN-AFTER to start Jobscheduler and Job Execution Control on each subsystem.

    [Before]

    # Starts Job Scheduler & Job Execution Control
    # - 1:1 standby, N:1 standby, 2 nodes mutual standby
          /bin/sh /etc/opt/FJSVMJS/etc/rc3.d/S99MJS -sys $SUBSYSTEM
          /bin/sh /opt/FJSVJOBSC/etc/rc3.d/S99JOBSCH -sys $SUBSYSTEM
          ;;

    [After]

    # Starts Job Scheduler & Job Execution Control
    # - 1:1 standby, N:1 standby, 2 nodes mutual standby
          for SUBSYSTEM in $PLU_SUBSYSTEM
          do
          /bin/sh /etc/opt/FJSVMJS/etc/rc3.d/S99MJS -sys $SUBSYSTEM
          /bin/sh /opt/FJSVJOBSC/etc/rc3.d/S99JOBSCH -sys $SUBSYSTEM
          done
          ;;
  3. Add the for and do statements to the top of STOP-RUN-BEFORE to stop Jobscheduler and Job Execution Control on each subsystem.

    [Before]

    'BEFORE')
        # Job Execution Control Server
        MJSDAEMON=`/bin/ps -eo pid,args | /bin/grep "/usr/lib/mjes/mjsdaemon -sys $SUBSYSTEM" | /bin/grep -v "grep" | /usr/bin/wc -l `

    [After]

    'BEFORE')
        # Job Execution Control Server
        for SUBSYSTEM in $PLU_SUBSYSTEM
        do
        MJSDAEMON=`/bin/ps -eo pid,args | /bin/grep "/usr/lib/mjes/mjsdaemon -sys $SUBSYSTEM" | /bin/grep -v "grep" | /usr/bin/wc -l `
  4. Add the done statement to the end of STOP-RUN-BEFORE.

    [Before]

        done
        ;;
    'AFTER')

    [After]

        done
        done
        ;;
    'AFTER')
  5. In the case of a 1:1 active/standby configuration (with subsystems and partial cluster operation), add the for, do, and done statements to the end of START-RUN-AFTER so that security information is updated automatically on each subsystem.

    [Before]

    # - 1:1 standby, N:1 standby
            /opt/FJSVfwseo/bin/mpaclcls
            /bin/sh /opt/FJSVfwseo/bin/jmacltrn.sh

    [After]

    # - 1:1 standby, N:1 standby
            for SUBSYSTEM in $PLU_SUBSYSTEM
            do
            /opt/FJSVfwseo/bin/mpaclcls -s $SUBSYSTEM
            /bin/sh /opt/FJSVfwseo/bin/jmacltrn.sh $SUBSYSTEM
            done
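The pattern applied in the steps above, looping over the subsystem numbers held in "PLU_SUBSYSTEM", can be seen in isolation in this sketch. The echo lines are placeholders for the real S99MJS/S99JOBSCH start commands:

```shell
#!/bin/sh
# Sketch of the PLU_SUBSYSTEM loop used in the modified procedure.
# The echo commands stand in for the real daemon start scripts.
PLU_SUBSYSTEM="0 1"

start_all() {
    for SUBSYSTEM in $PLU_SUBSYSTEM    # unquoted on purpose: "0 1" -> 0, 1
    do
        echo "starting Job Execution Control on subsystem $SUBSYSTEM"
        echo "starting Jobscheduler on subsystem $SUBSYSTEM"
    done
}

start_all
```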

Example of how to modify the state transition procedures for N:1 active/standby configuration

  1. For START-RUN-AFTER, remove the comments for the "Make symbolic links. (if N:1 standby)" section. (That is, remove the "#" from the lines of code.)

    [Before]

    # Make symbolic links.(if N:1 standby)
    # ACL Manager
    #if [ ! "(" -h "/var/opt/FJSVfwseo/JM" -o -f "/var/opt/FJSVfwseo/JM" ")" ]
    #then
    #  /bin/ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
    #fi
    # Job Scheduler
    #if [ ! "(" -h "/var/opt/FJSVJOBSC" -o -f "/var/opt/FJSVJOBSC" ")" ]
    #then
    #  /bin/ln -s /disk1/FJSVJOBSC /var/opt/FJSVJOBSC
    #fi
    # Job Execution Control
    #if [ ! "(" -h "/var/spool/mjes" -o -f "/var/spool/mjes" ")" ]
    #then
    #  /bin/ln -s /disk1/FJSVMJS/var/spool/mjes /var/spool/mjes
    #fi
    #if [ ! "(" -h "/etc/mjes" -o -f "/etc/mjes" ")" ]
    #then
    #  /bin/ln -s /disk1/FJSVMJS/etc/mjes /etc/mjes
    #fi
    # Calendar
    #if [ ! "(" -h "/var/opt/FJSVjmcal/post" -o -f "/var/opt/FJSVjmcal/post" ")" ]
    #then
    #  /bin/ln -s /disk1/FJSVjmcal/post /var/opt/FJSVjmcal/post
    #fi
    # Stem
    #if [ ! "(" -h "/var/opt/FJSVstem" -o -f "/var/opt/FJSVstem" ")" ]
    #then
    #  /bin/ln -s /disk1/FJSVstem /var/opt/FJSVstem
    #fi
    # - 1:1 standby, N:1 standby

    [After]

    # Make symbolic links.(if N:1 standby)
    # ACL Manager
    if [ ! "(" -h "/var/opt/FJSVfwseo/JM" -o -f "/var/opt/FJSVfwseo/JM" ")" ]
    then
      /bin/ln -s /disk1/FJSVfwseo/JM /var/opt/FJSVfwseo/JM
    fi
    # Job Scheduler
    if [ ! "(" -h "/var/opt/FJSVJOBSC" -o -f "/var/opt/FJSVJOBSC" ")" ]
    then
      /bin/ln -s /disk1/FJSVJOBSC /var/opt/FJSVJOBSC
    fi
    # Job Execution Control
    if [ ! "(" -h "/var/spool/mjes" -o -f "/var/spool/mjes" ")" ]
    then
      /bin/ln -s /disk1/FJSVMJS/var/spool/mjes /var/spool/mjes
    fi
    if [ ! "(" -h "/etc/mjes" -o -f "/etc/mjes" ")" ]
    then
      /bin/ln -s /disk1/FJSVMJS/etc/mjes /etc/mjes
    fi
    # Calendar
    if [ ! "(" -h "/var/opt/FJSVjmcal/post" -o -f "/var/opt/FJSVjmcal/post" ")" ]
    then
      /bin/ln -s /disk1/FJSVjmcal/post /var/opt/FJSVjmcal/post
    fi
    # Stem (*1)
    if [ ! "(" -h "/var/opt/FJSVstem" -o -f "/var/opt/FJSVstem" ")" ]
    then
      /bin/ln -s /disk1/FJSVstem /var/opt/FJSVstem
    fi
    # - 1:1 standby, N:1 standby

    *1: Remove the comment only if the Master Schedule Management function is enabled.

  2. For STOP-RUN-AFTER, remove the comments for the "remove symbolic links. (if N:1 standby)" section. (That is, remove the "#" from the lines of code.)

    [Before]

    # remove symbolic links.(if N:1 standby)
    # Job Scheduler
    #if [ -h "/var/opt/FJSVJOBSC" ]
    #then
    #      /bin/rm /var/opt/FJSVJOBSC
    #fi
    # Job Execution Control
    #if [ -h "/var/spool/mjes" ]
    #then
    #      /bin/rm /var/spool/mjes
    #fi
    #if [ -h "/etc/mjes" ]
    #then
    #      /bin/rm /etc/mjes
    #fi
    # ACL Manager
    #/opt/FJSVfwseo/bin/mpaclcls -u
    #if [ -h "/var/opt/FJSVfwseo/JM" ]
    #then
    #      /bin/rm /var/opt/FJSVfwseo/JM
    #fi
    # Calendar
    #if [ -h "/var/opt/FJSVjmcal/post" ]
    #then
    #      /bin/rm /var/opt/FJSVjmcal/post
    #fi
    # Stem
    #if [ -h "/var/opt/FJSVstem" ]
    #then
    #      /bin/rm /var/opt/FJSVstem
    #fi
    ;;

    [After]

    # remove symbolic links.(if N:1 standby)
    # Job Scheduler
    if [ -h "/var/opt/FJSVJOBSC" ]
    then
          /bin/rm /var/opt/FJSVJOBSC
    fi
    # Job Execution Control
    if [ -h "/var/spool/mjes" ]
    then
          /bin/rm /var/spool/mjes
    fi
    if [ -h "/etc/mjes" ]
    then
          /bin/rm /etc/mjes
    fi
    # ACL Manager
    /opt/FJSVfwseo/bin/mpaclcls -u
    if [ -h "/var/opt/FJSVfwseo/JM" ]
    then
          /bin/rm /var/opt/FJSVfwseo/JM
    fi
    # Calendar
    if [ -h "/var/opt/FJSVjmcal/post" ]
    then
          /bin/rm /var/opt/FJSVjmcal/post
    fi
    # Stem (*1)
    if [ -h "/var/opt/FJSVstem" ]
    then
          /bin/rm /var/opt/FJSVstem
    fi
    ;;

    *1: Remove the comment only if the Master Schedule Management function is enabled.

  3. Prepare N copies of the state transition procedure file (one for each active node), assigning a unique name to each copy. Change the directory ("/disk1" in the example) where symbolic links will be created so that it matches the shared disk of each active node.

  4. Place each state transition procedure file on its respective active node, and then copy all of the state transition procedure files to the standby node, storing all N files in the same directory path as was used to store the files on the active nodes.

    The following example shows how to place these files when there are three active nodes:

    Active node1: /opt/FJSVJMCMN/etc/script/OperationMGR1.proc
    Active node2: /opt/FJSVJMCMN/etc/script/OperationMGR2.proc
    Active node3: /opt/FJSVJMCMN/etc/script/OperationMGR3.proc
    Standby node: /opt/FJSVJMCMN/etc/script/OperationMGR1.proc
                  /opt/FJSVJMCMN/etc/script/OperationMGR2.proc
                  /opt/FJSVJMCMN/etc/script/OperationMGR3.proc
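The link-creation guard uncommented in step 1 creates each symbolic link only when the local path is not already a symbolic link or a regular file, so repeated state transitions are harmless. The helper function below is an illustration of that idiom; the product's procedures inline the same test with fixed paths:

```shell
#!/bin/sh
# Illustrative helper: the link-creation guard from the N:1 procedure.
# Creates dst as a symbolic link to src only when dst is not already a
# symbolic link or a regular file.
link_if_absent() {
    src=$1   # location on the shared disk (e.g. under /disk1)
    dst=$2   # local path that the product expects
    if [ ! "(" -h "$dst" -o -f "$dst" ")" ]
    then
        /bin/ln -s "$src" "$dst"
    fi
}
```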

Example of how to modify the state transition procedure for dual node mutual standby configurations

  1. Change the value of the "SUBSYSTEM" variable in the state transition procedure file to "1".

    [Before]

    SUBSYSTEM="0"

    [After]

    SUBSYSTEM="1"
  2. Modify the "- 1:1 standby, N:1 standby" section of START-RUN-AFTER so that it matches a dual node mutual standby configuration by moving the "#" comment markers as shown.

    [Before]

    # - 1:1 standby, N:1 standby
        /opt/FJSVfwseo/bin/mpaclcls
        /bin/sh /opt/FJSVfwseo/bin/jmacltrn.sh
    # - 2 nodes mutual standby
        #/opt/FJSVfwseo/bin/mpaclcls -s $SUBSYSTEM
        #/bin/sh /opt/FJSVfwseo/bin/jmacltrn.sh $SUBSYSTEM
    # Starts Job Scheduler & Job Execution Control

    [After]

    # - 1:1 standby, N:1 standby
        #/opt/FJSVfwseo/bin/mpaclcls
        #/bin/sh /opt/FJSVfwseo/bin/jmacltrn.sh
    # - 2 nodes mutual standby
        /opt/FJSVfwseo/bin/mpaclcls -s $SUBSYSTEM
        /bin/sh /opt/FJSVfwseo/bin/jmacltrn.sh $SUBSYSTEM
    # Starts Job Scheduler & Job Execution Control
  3. Prepare another state transition procedure file with the same (modified) contents, rename the file, and change the value of the "SUBSYSTEM" variable to "2".

    [Before]

    SUBSYSTEM="1"

    [After]

    SUBSYSTEM="2"
  4. Place the two state transition procedure files on the active and standby nodes, using the same directory path in each case.
    The following example shows how to place these files:

    Active node1: /opt/FJSVJMCMN/etc/script/OperationMGR1.proc
      (Standby node2)/opt/FJSVJMCMN/etc/script/OperationMGR2.proc
    Active node2: /opt/FJSVJMCMN/etc/script/OperationMGR2.proc
      (Standby node1)/opt/FJSVJMCMN/etc/script/OperationMGR1.proc
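Step 3 can be done mechanically with a sed substitution on the SUBSYSTEM assignment. This sketch assumes the file names from the placement example above; they are not mandated by the product:

```shell
#!/bin/sh
# Illustrative: derive the second procedure file from the first by
# rewriting the SUBSYSTEM assignment from "1" to "2".
derive_second() {
    sed 's/^SUBSYSTEM="1"/SUBSYSTEM="2"/' "$1" > "$2"
    chmod u+x "$2"
}

# e.g. derive_second OperationMGR1.proc OperationMGR2.proc
```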

5.1.1.2 Creating monitoring scripts

Create monitoring scripts that will be called from the state transition procedure that monitors the daemons.

The following sample monitoring script is provided:

/opt/FJSVJMCMN/etc/script/omgr_smonitor

This sample monitoring script is for the 1:1 active/standby (without subsystems) and cascading configurations. It must be modified for 1:1 active/standby (with subsystems), 1:1 active/standby (with subsystems and partial cluster operation), N:1 active/standby and dual node mutual standby configurations.

Copy the sample file, and then modify it to match the cluster system operation. For 1:1 active/standby (without subsystems) and cascading configurations, this sample file can be used without modification, but should be backed up anyway.

The following examples show how to modify the monitoring script.

Example showing how to modify the monitoring script for 1:1 active/standby (with subsystems) and 1:1 active/standby (with subsystems and partial cluster operation) configurations

In a system where multiple subsystems are running, perform daemon monitoring on each subsystem. The examples below show Subsystem 0 and Subsystem 1 operating in a 1:1 active/standby configuration (with subsystems), and a 1:1 active/standby configuration (with subsystems and partial cluster operation) in which Subsystem 1 is involved in cluster operation but Subsystem 0 is not:

  1. Change the "SUBSYSTEM" variable to "PLU_SUBSYSTEM" and its value to the subsystem numbers. In the case of a 1:1 active/standby configuration (with subsystems and partial cluster operation), modify the script so that only the numbers of subsystems involved in cluster operation are specified.

    [Before]

    SUBSYSTEM="0"

    [After]

    1:1 active/standby configuration (with subsystems):

    PLU_SUBSYSTEM="0 1"

    1:1 active/standby configuration (with subsystems and partial cluster operation):

    PLU_SUBSYSTEM="1"
  2. Add the for and do statements immediately after the "while /bin/true" and "do" lines so that the daemons are monitored on each subsystem.

    [Before]

    while /bin/true
    do
        MJSDAEMON=`/bin/ps -eo pid,args | /bin/grep "/usr/lib/mjes/mjsdaemon -sys $SUBSYSTEM" | /bin/grep -v "grep" | /usr/bin/wc -l `

    [After]

    while /bin/true
    do
        for SUBSYSTEM in $PLU_SUBSYSTEM
        do
        MJSDAEMON=`/bin/ps -eo pid,args | /bin/grep "/usr/lib/mjes/mjsdaemon -sys $SUBSYSTEM" | /bin/grep -v "grep" | /usr/bin/wc -l `
  3. Add the done statement immediately before the "/bin/sleep 10" line.

    [Before]

      /bin/sleep 10
    done

    [After]

      done
      /bin/sleep 10
    done
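The per-subsystem check built in steps 2 and 3 has the following shape. In this sketch the ps pipeline is factored into a function that reads a captured process listing from stdin so it can be run anywhere; the real monitoring script pipes "/bin/ps -eo pid,args" directly and wraps the check in "while /bin/true; do ...; /bin/sleep 10; done":

```shell
#!/bin/sh
# Sketch of the per-subsystem monitoring loop (function names are
# illustrative; the real script inlines this logic).
count_daemon() {
    subsystem=$1
    # Count mjsdaemon processes for one subsystem in the listing on stdin.
    /bin/grep "/usr/lib/mjes/mjsdaemon -sys $subsystem" | \
        /bin/grep -v "grep" | /usr/bin/wc -l
}

check_once() {
    PLU_SUBSYSTEM=$1
    listing=$2       # captured output of: /bin/ps -eo pid,args
    for SUBSYSTEM in $PLU_SUBSYSTEM
    do
        n=$(printf '%s\n' "$listing" | count_daemon "$SUBSYSTEM")
        echo "subsystem $SUBSYSTEM: $n mjsdaemon process(es)"
    done
}
```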

Example of how to modify the monitoring script for N:1 active/standby configuration

  1. Prepare N copies of the monitoring scripts (one for each active node), assigning a unique name to each copy.

  2. Place each monitoring script on its respective active node, and then copy all of the monitoring scripts to the standby node, storing all N scripts in the same directory path as was used to store the files on the active nodes.

    The following example shows how to place these files when there are three active nodes:

    Active node1: /opt/FJSVJMCMN/etc/script/omgr_smonitor1
    Active node2: /opt/FJSVJMCMN/etc/script/omgr_smonitor2
    Active node3: /opt/FJSVJMCMN/etc/script/omgr_smonitor3
    Standby node: /opt/FJSVJMCMN/etc/script/omgr_smonitor1
                  /opt/FJSVJMCMN/etc/script/omgr_smonitor2
                  /opt/FJSVJMCMN/etc/script/omgr_smonitor3
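Steps 1 and 2 amount to making N renamed, executable copies of the modified monitoring script and placing them at the same path on every node. A sketch, using the file names from the example above:

```shell
#!/bin/sh
# Illustrative: produce omgr_smonitor1..omgr_smonitorN from one modified
# monitoring script, ready to be placed on each node at the same path.
make_monitor_copies() {
    template=$1   # the modified monitoring script
    outdir=$2
    n=$3          # number of active nodes
    i=1
    while [ "$i" -le "$n" ]
    do
        cp "$template" "$outdir/omgr_smonitor$i"
        chmod u+x "$outdir/omgr_smonitor$i"
        i=$((i + 1))
    done
}
```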

Example of how to modify the monitoring script for dual node mutual standby configuration

  1. Change the value of the "SUBSYSTEM" variable to "1".

    [Before]

    SUBSYSTEM="0"

    [After]

    SUBSYSTEM="1"
  2. Prepare another monitoring script with the same (modified) contents, rename the file, and change the value of the "SUBSYSTEM" variable to "2".

    [Before]

    SUBSYSTEM="1"

    [After]

    SUBSYSTEM="2"
  3. Place the two monitoring scripts on the active and standby nodes, using the same directory path in each case.

    The following example shows how to place these files:

    Active node1: /opt/FJSVJMCMN/etc/script/omgr_smonitor1 
      (Standby node2)/opt/FJSVJMCMN/etc/script/omgr_smonitor2
    Active node2: /opt/FJSVJMCMN/etc/script/omgr_smonitor2 
      (Standby node1)/opt/FJSVJMCMN/etc/script/omgr_smonitor1

5.1.1.3 Creating state transition procedures that monitor daemons

Create a state transition procedure that monitors the daemons of Jobscheduler and Job Execution Control. The following sample state transition procedure that monitors the daemons is provided:

/opt/FJSVJMCMN/etc/script/omgr_monitor.proc

This sample state transition procedure is for 1:1 active/standby and cascading configurations. This state transition procedure must be modified for N:1 active/standby configuration and dual node mutual standby configuration.

Copy the sample file, and then modify it to match the cluster system operation. For 1:1 active/standby and cascading configurations, this sample file can be used without modification, but should be backed up anyway.

The following examples show how to modify the state transition procedure.

Example of how to modify the state transition procedure for N:1 active/standby configuration

Create N copies of the state transition procedure file that monitors daemons, assigning a unique name to each copy. Change the monitoring script file name specified in each state transition procedure to the name of the corresponding file created in "5.1.1.2 Creating monitoring scripts."

  1. Prepare N copies of the state transition procedure file (one for each active node), assigning a unique name to each copy. Change the information in each file to match the environment.
    The following example shows how to make changes when there are three nodes:

    • State transition procedure that monitors daemons: omgr_monitor1.proc

      $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor1 -c 0
    • State transition procedure that monitors daemons: omgr_monitor2.proc

      $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor2 -c 0
    • State transition procedure that monitors daemons: omgr_monitor3.proc

      $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor3 -c 0
  2. Place each state transition procedure file on its respective active node, and then copy all the state transition procedure files to the standby node, storing all N files in the same directory path as was used to store the files on the active nodes.

    The following example shows how to place these files when there are three active nodes:

    Active node1: /opt/FJSVJMCMN/etc/script/omgr_monitor1.proc
    Active node2: /opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
    Active node3: /opt/FJSVJMCMN/etc/script/omgr_monitor3.proc
    Standby node: /opt/FJSVJMCMN/etc/script/omgr_monitor1.proc
                  /opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
                  /opt/FJSVJMCMN/etc/script/omgr_monitor3.proc
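The per-copy edit in step 1, pointing $CLMONPROC at the matching monitoring script, can be scripted with a sed substitution. This sketch uses the file names from the example above, which are illustrative:

```shell
#!/bin/sh
# Illustrative: rewrite the monitoring-script path inside a copy of the
# daemon-monitoring procedure (omgr_smonitor -> omgr_smonitor<i>).
rename_monitor() {
    infile=$1
    i=$2
    outfile=$3
    sed "s|/script/omgr_smonitor |/script/omgr_smonitor$i |" "$infile" > "$outfile"
    chmod u+x "$outfile"
}

# e.g. rename_monitor omgr_monitor.proc 1 omgr_monitor1.proc
```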

Example of how to modify the state transition procedure for dual node mutual standby configuration

Create two copies of the state transition procedure file that monitors daemons, assigning a unique name to each copy. Change the monitoring script file name specified in each state transition procedure to the name of the corresponding file created in "5.1.1.2 Creating monitoring scripts."

  1. Change the file name of the monitoring script for one state transition procedure.

    [Before]

    $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor -c 0

    [After]

    $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor1 -c 0
  2. Change the file name of the monitoring script for the other state transition procedure.

    [Before]

    $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor -c 0

    [After]

    $CLMONPROC -s -r $4 -a /opt/FJSVJMCMN/etc/script/omgr_smonitor2 -c 0
  3. Place the two state transition procedure files on the active and standby nodes, using the same directory path in each case.

    The following example shows how to place these files:

    Active node1: /opt/FJSVJMCMN/etc/script/omgr_monitor1.proc
      (Standby node2)/opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
    Active node2: /opt/FJSVJMCMN/etc/script/omgr_monitor2.proc
      (Standby node1)/opt/FJSVJMCMN/etc/script/omgr_monitor1.proc