Systemwalker Operation Manager  Cluster Setup Guide for UNIX
FUJITSU Software

2.6.2 Moving resources in the case of dual node mutual standby configuration

Resources to be moved

For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster

For dual node mutual standby configuration, move the following resources to the shared disk.

  1. The directory for calendar control information

    /var/opt/FJSVjmcal/post/sysn

    n: Subsystem number

  2. The database directory for Jobscheduler

    /var/opt/FJSVJOBSC/JOBDBn

    n: Subsystem number

  3. The spool directory for Job Execution Control

    /var/opt/FJSVMJS/var/spool/mjes/mjesn

    n: Subsystem number

  4. The operation information directory for Job Execution Control

    /etc/opt/FJSVMJS/etc/mjes/mjesn

    n: Subsystem number

  5. The database directory for Master Schedule Management (only if the Master Schedule Management function is enabled)

    /var/opt/FJSVstem/stemDBn

    n: Subsystem number
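
The directories above can be enumerated with a short script before starting the relocation. The following is a minimal sketch for the Solaris/Linux (FJSV) paths listed above; the `list_resources` helper and its `root` prefix argument are illustrative only (on a live system, pass an empty string so the real paths are checked).

```shell
# Sketch: print which per-subsystem resource directories exist under
# a given root and therefore need to be moved to the shared disk.
# "root" is a hypothetical prefix so the sketch can be tried outside
# a live cluster; on a real system pass "".
list_resources() {
  root="$1"
  for n in 1 2; do
    for d in \
      "/var/opt/FJSVjmcal/post/sys$n" \
      "/var/opt/FJSVJOBSC/JOBDB$n" \
      "/var/opt/FJSVMJS/var/spool/mjes/mjes$n" \
      "/etc/opt/FJSVMJS/etc/mjes/mjes$n" \
      "/var/opt/FJSVstem/stemDB$n"
    do
      if [ -d "$root$d" ]; then
        echo "move: $d"
      fi
    done
  done
}
```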

For MC/ServiceGuard

For dual node mutual standby configuration, move the following resources to the shared disk.

  1. The directory for calendar control information

    /opt/FHPjmcal/post/sysn

    n: Subsystem number

  2. The database directory for Jobscheduler

    /opt/FHPJOBSCH/db/JOBDBn

    n: Subsystem number

  3. The spool directory for Job Execution Control

    /opt/FHPMJS/var/spool/mjes/mjesn

    n: Subsystem number

  4. The operation information directory for Job Execution Control

    /opt/FHPMJS/etc/mjes/mjesn

    n: Subsystem number

  5. The database directory for Master Schedule Management (only if the Master Schedule Management function is enabled)

    /var/opt/FJSVstem/stemDBn

    n: Subsystem number

For HACMP

For dual node mutual standby configuration, move the following resources to the shared disk.

  1. The directory for calendar control information

    /opt/FAIXjmcal/post/sysn

    n: Subsystem number

  2. The database directory for Jobscheduler

    /opt/FAIXJOBSC/db/JOBDBn

    n: Subsystem number

  3. The spool directory for Job Execution Control

    /opt/FAIXMJS/var/spool/mjes/mjesn

    n: Subsystem number

  4. The operation information directory for Job Execution Control

    /opt/FAIXMJS/etc/mjes/mjesn

    n: Subsystem number

  5. The database directory for Master Schedule Management (only if the Master Schedule Management function is enabled)

    /var/opt/FJSVstem/stemDBn

    n: Subsystem number

Relocation procedure

For PRIMECLUSTER (for Solaris or Linux version), Oracle Solaris Cluster

The explanation in the following example assumes that Subsystem 1 and Subsystem 2 are running on the cluster system, and that the mount points of the shared disks are set to "/disk1" and "/disk2". The sample command lines shown here are for the Solaris version.

1) Stop the daemon.

Stop the Systemwalker Operation Manager daemon.

# /opt/systemwalker/bin/poperationmgr -s

For environments that include both Systemwalker Operation Manager and Systemwalker Centric Manager, be sure to specify the "-s" option with the poperationmgr command.

2) Mount the shared disks.

  1. Mount the first shared disk.

    # mount /disk1 (*1)
    *1

    The command line for the Linux version is as follows. (In this example, the device is "/dev/sdb1".)

    # /bin/mount /dev/sdb1 /disk1
  2. Mount the second shared disk.

    # mount /disk2 (*1)
    *1

    The command line for the Linux version is as follows. (In this example, the device is "/dev/sdb2".)

    # /bin/mount /dev/sdb2 /disk2

3) Move resources from the active node to the shared disk.

Move the resources from the active node to the shared disks.

# mkdir -p /disk1/FJSVfwseo/JM1 (*1)
# chmod -R 755 /disk1/FJSVfwseo
# mkdir -p /disk2/FJSVfwseo/JM2 (*1)
# chmod -R 755 /disk2/FJSVfwseo

# mkdir -p /disk1/FJSVjmcal/post
# chmod -R 755 /disk1/FJSVjmcal
# mv /var/opt/FJSVjmcal/post/sys1 /disk1/FJSVjmcal/post/sys1
# mkdir -p /disk2/FJSVjmcal/post
# chmod -R 755 /disk2/FJSVjmcal
# mv /var/opt/FJSVjmcal/post/sys2 /disk2/FJSVjmcal/post/sys2

# mkdir -p /disk1/FJSVJOBSC
# chmod -R 755 /disk1/FJSVJOBSC
# mv /var/opt/FJSVJOBSC/JOBDB1 /disk1/FJSVJOBSC/JOBDB1
# mkdir -p /disk2/FJSVJOBSC
# chmod -R 755 /disk2/FJSVJOBSC
# mv /var/opt/FJSVJOBSC/JOBDB2 /disk2/FJSVJOBSC/JOBDB2

# mkdir -p /disk1/FJSVMJS/var/spool
# chmod -R 755 /disk1/FJSVMJS
# mv /var/opt/FJSVMJS/var/spool/mjes/mjes1 /disk1/FJSVMJS/var/spool/mjes1
# mkdir -p /disk2/FJSVMJS/var/spool
# chmod -R 755 /disk2/FJSVMJS
# mv /var/opt/FJSVMJS/var/spool/mjes/mjes2 /disk2/FJSVMJS/var/spool/mjes2

# mkdir -p /disk1/FJSVMJS/etc
# chmod -R 755 /disk1/FJSVMJS
# mv /etc/opt/FJSVMJS/etc/mjes/mjes1 /disk1/FJSVMJS/etc/mjes1
# mkdir -p /disk2/FJSVMJS/etc
# chmod -R 755 /disk2/FJSVMJS
# mv /etc/opt/FJSVMJS/etc/mjes/mjes2 /disk2/FJSVMJS/etc/mjes2

# mkdir -p /disk1/FJSVstem (*2)
# chmod -R 755 /disk1/FJSVstem (*2)
# mv /var/opt/FJSVstem/stemDB1 /disk1/FJSVstem/stemDB1 (*2)
# mkdir -p /disk2/FJSVstem (*2)
# chmod -R 755 /disk2/FJSVstem (*2)
# mv /var/opt/FJSVstem/stemDB2 /disk2/FJSVstem/stemDB2 (*2)
*1

For the security information, only a directory is created on the shared disk at this point. The information itself is updated by the procedure described in "2.7 Settings for Automatic Reflection."

*2

This is required only if the Master Schedule Management function is enabled.
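
Each component in step 3 follows the same mkdir/chmod/mv pattern. The following is a hedged sketch of that pattern; the `relocate_dir` helper name and its arguments are illustrative, not part of the product (on a live system, a call would look like `relocate_dir /var/opt/FJSVJOBSC JOBDB1 /disk1/FJSVJOBSC`).

```shell
# Sketch: relocate one per-subsystem resource directory to a shared
# disk, mirroring the mkdir -p / chmod -R 755 / mv pattern of step 3.
relocate_dir() {
  src_parent="$1"   # directory currently holding the resource
  name="$2"         # resource directory name, e.g. JOBDB1
  dest_parent="$3"  # target directory on the shared disk
  mkdir -p "$dest_parent"
  chmod -R 755 "$dest_parent"
  mv "$src_parent/$name" "$dest_parent/$name"
}
```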

4) Delete resources from the standby node.

Delete the resources from the standby node.

# rm -r /var/opt/FJSVjmcal/post/sys1
# rm -r /var/opt/FJSVjmcal/post/sys2

# rm -r /var/opt/FJSVJOBSC/JOBDB1
# rm -r /var/opt/FJSVJOBSC/JOBDB2

# rm -r /var/opt/FJSVMJS/var/spool/mjes/mjes1
# rm -r /var/opt/FJSVMJS/var/spool/mjes/mjes2

# rm -r /etc/opt/FJSVMJS/etc/mjes/mjes1
# rm -r /etc/opt/FJSVMJS/etc/mjes/mjes2

# rm -r /var/opt/FJSVstem/stemDB1 (*1)
# rm -r /var/opt/FJSVstem/stemDB2 (*1)
*1

This is required only if the Master Schedule Management function is enabled and the target directory exists.
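
Because the Master Schedule Management directories exist only when that function is enabled, a guarded removal avoids spurious errors on the standby node. A small sketch (the `remove_if_present` helper name is illustrative):

```shell
# Sketch: remove a directory only if it exists, so the optional
# FJSVstem directories do not cause an error on nodes where the
# Master Schedule Management function was never enabled.
remove_if_present() {
  if [ -d "$1" ]; then
    rm -r "$1"
  fi
}
```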

5) Create symbolic links to the relocated resources.

Create symbolic links on all the nodes (active and standby) so that any one of them can use the resources that have been relocated to the shared disks.

# ln -s /disk1/FJSVfwseo/JM1 /var/opt/FJSVfwseo/JM1
# ln -s /disk2/FJSVfwseo/JM2 /var/opt/FJSVfwseo/JM2

# ln -s /disk1/FJSVjmcal/post/sys1 /var/opt/FJSVjmcal/post/sys1
# ln -s /disk2/FJSVjmcal/post/sys2 /var/opt/FJSVjmcal/post/sys2

# ln -s /disk1/FJSVJOBSC/JOBDB1 /var/opt/FJSVJOBSC/JOBDB1
# ln -s /disk2/FJSVJOBSC/JOBDB2 /var/opt/FJSVJOBSC/JOBDB2

# ln -s /disk1/FJSVMJS/var/spool/mjes1 /var/spool/mjes/mjes1
# ln -s /disk2/FJSVMJS/var/spool/mjes2 /var/spool/mjes/mjes2

# ln -s /disk1/FJSVMJS/etc/mjes1 /etc/mjes/mjes1
# ln -s /disk2/FJSVMJS/etc/mjes2 /etc/mjes/mjes2

# ln -s /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1)
# ln -s /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
*1

This is required only if the Master Schedule Management function is enabled.
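
After step 5, it is worth confirming on each node that every relocated path is now a symbolic link pointing at the intended location on the shared disk. A minimal sketch (the `verify_link` helper name is illustrative):

```shell
# Sketch: check that "link" is a symbolic link pointing exactly at
# "target", as created in step 5. Returns nonzero (and prints a
# message) when the link is missing or points elsewhere.
verify_link() {
  link="$1"
  target="$2"
  if [ "$(readlink "$link" 2>/dev/null)" = "$target" ]; then
    return 0
  fi
  echo "BAD: $link does not point at $target" >&2
  return 1
}
```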

6) Configure settings for SELinux.

This step is required only in Linux environments where SELinux is enabled.

Execute the following commands to set the SELinux security contexts on the relocated resources. Run them on each node while the shared disks are mounted.

# /usr/sbin/semanage fcontext -a -t var_t '/disk1/FJSVfwseo/JM1(/.*)?'
# /usr/sbin/semanage fcontext -a -t var_t '/disk2/FJSVfwseo/JM2(/.*)?'

# /usr/sbin/semanage fcontext -a -t var_t '/disk1/FJSVjmcal(/.*)?'
# /usr/sbin/semanage fcontext -a -t var_t '/disk2/FJSVjmcal(/.*)?'

# /usr/sbin/semanage fcontext -a -t var_t '/disk1/FJSVJOBSC(/.*)?'
# /usr/sbin/semanage fcontext -a -t var_t '/disk2/FJSVJOBSC(/.*)?'

# /usr/sbin/semanage fcontext -a -t var_t '/disk1/FJSVstem(/.*)?' (*1)
# /usr/sbin/semanage fcontext -a -t var_t '/disk2/FJSVstem(/.*)?' (*1)

# /usr/sbin/semanage fcontext -a -t sw_fjsvmjs_spool_t '/disk1/FJSVMJS/var/spool(/.*)?'
# /usr/sbin/semanage fcontext -a -t sw_fjsvmjs_spool_t '/disk2/FJSVMJS/var/spool(/.*)?'

# /usr/sbin/semanage fcontext -a -t etc_t '/disk1/FJSVMJS/etc(/.*)?'
# /usr/sbin/semanage fcontext -a -t etc_t '/disk2/FJSVMJS/etc(/.*)?'

# /sbin/restorecon -R /disk1
# /sbin/restorecon -R /disk2
*1

This is required only if the Master Schedule Management function is enabled.

7) Unmount the shared disks.

  1. Unmount the first shared disk.

    # umount /disk1 (*1)
    *1

    The command line for the Linux version is as follows.

    # /bin/umount /disk1
  2. Unmount the second shared disk.

    # umount /disk2 (*1)
    *1

    The command line for the Linux version is as follows.

    # /bin/umount /disk2

For MC/ServiceGuard

The explanation in the following example assumes that the devices of the shared disks are "/dev/vg01/lvol1" and "/dev/vg02/lvol1", and that the mount points of the shared disks are "/disk1" and "/disk2".

1) Stop the daemon.

Stop the Systemwalker Operation Manager daemon.

# /opt/systemwalker/bin/poperationmgr -s

For environments that include both Systemwalker Operation Manager and Systemwalker Centric Manager, be sure to specify the "-s" option with the poperationmgr command.

2) Move resources from the active node to the shared disks.

Mount the shared disks and move resources from the active node to the shared disks.

  1. Activate the volume groups by using the vgchange command for HP-UX on the active node, and then mount the shared disks.

    # vgchange -c y /dev/vg01
    # vgchange -a e /dev/vg01
    # mount /dev/vg01/lvol1 /disk1
    # vgchange -c y /dev/vg02
    # vgchange -a e /dev/vg02
    # mount /dev/vg02/lvol1 /disk2
  2. Move the resources from the active node to the shared disks.

    # mkdir -p /disk1/FJSVfwseo/JM1 (*1)
    # chmod -R 755 /disk1/FJSVfwseo
    # mkdir -p /disk2/FJSVfwseo/JM2 (*1)
    # chmod -R 755 /disk2/FJSVfwseo
    
    # mkdir -p /disk1/FHPjmcal/post
    # chmod -R 755 /disk1/FHPjmcal
    # mv /opt/FHPjmcal/post/sys1 /disk1/FHPjmcal/post/sys1
    # mkdir -p /disk2/FHPjmcal/post
    # chmod -R 755 /disk2/FHPjmcal
    # mv /opt/FHPjmcal/post/sys2 /disk2/FHPjmcal/post/sys2
    
    # mkdir -p /disk1/FHPJOBSCH
    # chmod -R 755 /disk1/FHPJOBSCH
    # mv /opt/FHPJOBSCH/db/JOBDB1 /disk1/FHPJOBSCH/JOBDB1
    # mkdir -p /disk2/FHPJOBSCH
    # chmod -R 755 /disk2/FHPJOBSCH
    # mv /opt/FHPJOBSCH/db/JOBDB2 /disk2/FHPJOBSCH/JOBDB2
    
    # mkdir -p /disk1/FHPMJS/var/spool
    # chmod -R 755 /disk1/FHPMJS
    # mv /opt/FHPMJS/var/spool/mjes/mjes1 /disk1/FHPMJS/var/spool/mjes1
    # mkdir -p /disk2/FHPMJS/var/spool
    # chmod -R 755 /disk2/FHPMJS
    # mv /opt/FHPMJS/var/spool/mjes/mjes2 /disk2/FHPMJS/var/spool/mjes2
    
    # mkdir -p /disk1/FHPMJS/etc
    # chmod -R 755 /disk1/FHPMJS
    # mv /opt/FHPMJS/etc/mjes/mjes1 /disk1/FHPMJS/etc/mjes1
    # mkdir -p /disk2/FHPMJS/etc
    # chmod -R 755 /disk2/FHPMJS
    # mv /opt/FHPMJS/etc/mjes/mjes2 /disk2/FHPMJS/etc/mjes2
    
    # mkdir -p /disk1/FJSVstem (*2)
    # chmod -R 755 /disk1/FJSVstem (*2)
    # mv /var/opt/FJSVstem/stemDB1 /disk1/FJSVstem/stemDB1 (*2) 
    # mkdir -p /disk2/FJSVstem (*2)
    # chmod -R 755 /disk2/FJSVstem (*2)
    # mv /var/opt/FJSVstem/stemDB2 /disk2/FJSVstem/stemDB2 (*2)
    *1

    For the security information, only a directory is created on the shared disk at this point. The information itself is updated by the procedure described in "2.7 Settings for Automatic Reflection."

    *2

    This is required only if the Master Schedule Management function is enabled.

3) Delete resources from the standby node.

Delete the resources from the standby node.

# rm -r /opt/FHPjmcal/post/sys1
# rm -r /opt/FHPjmcal/post/sys2

# rm -r /opt/FHPJOBSCH/db/JOBDB1
# rm -r /opt/FHPJOBSCH/db/JOBDB2

# rm -r /opt/FHPMJS/var/spool/mjes/mjes1
# rm -r /opt/FHPMJS/var/spool/mjes/mjes2

# rm -r /opt/FHPMJS/etc/mjes/mjes1
# rm -r /opt/FHPMJS/etc/mjes/mjes2

# rm -r /var/opt/FJSVstem/stemDB1 (*1) 
# rm -r /var/opt/FJSVstem/stemDB2 (*1)
*1

This is required only if the Master Schedule Management function is enabled and the target directory exists.

4) Create symbolic links to the relocated resources.

Create symbolic links on all the nodes (active and standby) so that any one of them can use the resources that have been relocated to the shared disks.

The following example assumes that "/disk1" and "/disk2" are the mount points of the shared disks.

  1. Create symbolic links on the active node.

    # ln -s /disk1/FJSVfwseo/JM1 /var/opt/FJSVfwseo/JM1
    # ln -s /disk2/FJSVfwseo/JM2 /var/opt/FJSVfwseo/JM2
    
    # ln -s /disk1/FHPjmcal/post/sys1 /opt/FHPjmcal/post/sys1
    # ln -s /disk2/FHPjmcal/post/sys2 /opt/FHPjmcal/post/sys2
    
    # ln -s /disk1/FHPJOBSCH/JOBDB1 /opt/FHPJOBSCH/db/JOBDB1
    # ln -s /disk2/FHPJOBSCH/JOBDB2 /opt/FHPJOBSCH/db/JOBDB2
    
    # ln -s /disk1/FHPMJS/var/spool/mjes1 /var/spool/mjes/mjes1
    # ln -s /disk2/FHPMJS/var/spool/mjes2 /var/spool/mjes/mjes2
    
    # ln -s /disk1/FHPMJS/etc/mjes1 /etc/mjes/mjes1
    # ln -s /disk2/FHPMJS/etc/mjes2 /etc/mjes/mjes2
    
    # ln -s /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1) 
    # ln -s /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
    *1

    This is required only if the Master Schedule Management function is enabled.

  2. Unmount the shared disks, and then deactivate the volume groups by using the vgchange command for HP-UX on the active node.

    # umount /disk1
    # vgchange -a n /dev/vg01
    # umount /disk2
    # vgchange -a n /dev/vg02
  3. Activate the volume groups by using the vgchange command for HP-UX on the standby node, and then mount the shared disks.

    # vgchange -a e /dev/vg01
    # mount /dev/vg01/lvol1 /disk1
    # vgchange -a e /dev/vg02
    # mount /dev/vg02/lvol1 /disk2
  4. Create symbolic links on the standby node.

    # ln -s /disk1/FJSVfwseo/JM1 /var/opt/FJSVfwseo/JM1
    # ln -s /disk2/FJSVfwseo/JM2 /var/opt/FJSVfwseo/JM2
    
    # ln -s /disk1/FHPjmcal/post/sys1 /opt/FHPjmcal/post/sys1
    # ln -s /disk2/FHPjmcal/post/sys2 /opt/FHPjmcal/post/sys2
    
    # ln -s /disk1/FHPJOBSCH/JOBDB1 /opt/FHPJOBSCH/db/JOBDB1
    # ln -s /disk2/FHPJOBSCH/JOBDB2 /opt/FHPJOBSCH/db/JOBDB2
    
    # ln -s /disk1/FHPMJS/var/spool/mjes1 /var/spool/mjes/mjes1
    # ln -s /disk2/FHPMJS/var/spool/mjes2 /var/spool/mjes/mjes2
    
    # ln -s /disk1/FHPMJS/etc/mjes1 /etc/mjes/mjes1
    # ln -s /disk2/FHPMJS/etc/mjes2 /etc/mjes/mjes2
    
    # ln -s /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1) 
    # ln -s /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
    *1

    This is required only if the Master Schedule Management function is enabled.

  5. Unmount the shared disks, and then deactivate the volume groups by using the vgchange command for HP-UX on the standby node.

    # umount /disk1
    # vgchange -a n /dev/vg01
    # umount /disk2
    # vgchange -a n /dev/vg02

For HACMP

The explanation in the following example assumes that the volume group names of the shared disks are "datavg1" and "datavg2", and that the mount points of the shared disks are "/disk1" and "/disk2".

1) Stop the daemon.

Stop the Systemwalker Operation Manager daemon.

# /opt/systemwalker/bin/poperationmgr -s

For environments that include both Systemwalker Operation Manager and Systemwalker Centric Manager, be sure to specify the "-s" option with the poperationmgr command.

2) Move resources from the active node to the shared disks.

Mount the shared disks and move resources from the active node to the shared disks.

  1. Activate the volumes by using the varyonvg command for AIX on the active node, and then mount the shared disks.

    # varyonvg datavg1
    # mount /disk1
    # varyonvg datavg2
    # mount /disk2
  2. Move the resources from the active node to the shared disks.

    # mkdir -p /disk1/FJSVfwseo/JM1 (*1)
    # chmod -R 755 /disk1/FJSVfwseo
    # mkdir -p /disk2/FJSVfwseo/JM2 (*1)
    # chmod -R 755 /disk2/FJSVfwseo
    
    # mkdir -p /disk1/FAIXjmcal/post
    # chmod -R 755 /disk1/FAIXjmcal
    # mv /opt/FAIXjmcal/post/sys1 /disk1/FAIXjmcal/post/sys1
    # mkdir -p /disk2/FAIXjmcal/post
    # chmod -R 755 /disk2/FAIXjmcal
    # mv /opt/FAIXjmcal/post/sys2 /disk2/FAIXjmcal/post/sys2
    
    # mkdir -p /disk1/FAIXJOBSC
    # chmod -R 755 /disk1/FAIXJOBSC
    # mv /opt/FAIXJOBSC/db/JOBDB1 /disk1/FAIXJOBSC/JOBDB1
    # mkdir -p /disk2/FAIXJOBSC
    # chmod -R 755 /disk2/FAIXJOBSC
    # mv /opt/FAIXJOBSC/db/JOBDB2 /disk2/FAIXJOBSC/JOBDB2
    
    # mkdir -p /disk1/FAIXMJS/var/spool
    # chmod -R 755 /disk1/FAIXMJS
    # mv /opt/FAIXMJS/var/spool/mjes/mjes1 /disk1/FAIXMJS/var/spool/mjes1
    # mkdir -p /disk2/FAIXMJS/var/spool
    # chmod -R 755 /disk2/FAIXMJS
    # mv /opt/FAIXMJS/var/spool/mjes/mjes2 /disk2/FAIXMJS/var/spool/mjes2
    
    # mkdir -p /disk1/FAIXMJS/etc
    # chmod -R 755 /disk1/FAIXMJS
    # mv /opt/FAIXMJS/etc/mjes/mjes1 /disk1/FAIXMJS/etc/mjes1
    # mkdir -p /disk2/FAIXMJS/etc
    # chmod -R 755 /disk2/FAIXMJS
    # mv /opt/FAIXMJS/etc/mjes/mjes2 /disk2/FAIXMJS/etc/mjes2
    
    # mkdir -p /disk1/FJSVstem (*2)
    # chmod -R 755 /disk1/FJSVstem (*2)
    # mv /var/opt/FJSVstem/stemDB1 /disk1/FJSVstem/stemDB1 (*2) 
    # mkdir -p /disk2/FJSVstem (*2)
    # chmod -R 755 /disk2/FJSVstem (*2)
    # mv /var/opt/FJSVstem/stemDB2 /disk2/FJSVstem/stemDB2 (*2)
    *1

    For the security information, only a directory is created on the shared disk at this point. The information itself is updated by the procedure described in "2.7 Settings for Automatic Reflection."

    *2

    This is required only if the Master Schedule Management function is enabled.

3) Delete resources from the standby node.

Delete the resources from the standby node.

# rm -r /opt/FAIXjmcal/post/sys1
# rm -r /opt/FAIXjmcal/post/sys2

# rm -r /opt/FAIXJOBSC/db/JOBDB1
# rm -r /opt/FAIXJOBSC/db/JOBDB2

# rm -r /opt/FAIXMJS/var/spool/mjes/mjes1
# rm -r /opt/FAIXMJS/var/spool/mjes/mjes2

# rm -r /opt/FAIXMJS/etc/mjes/mjes1
# rm -r /opt/FAIXMJS/etc/mjes/mjes2

# rm -r /var/opt/FJSVstem/stemDB1 (*1) 
# rm -r /var/opt/FJSVstem/stemDB2 (*1)
*1

This is required only if the Master Schedule Management function is enabled and the target directory exists.

4) Create symbolic links to the relocated resources.

Create symbolic links on all the nodes (active and standby) so that any one of them can use the resources that have been relocated to the shared disks.

The following example assumes that "/disk1" and "/disk2" are the mount points of the shared disks.

  1. Create symbolic links on the active node.

    # ln -s /disk1/FJSVfwseo/JM1 /var/opt/FJSVfwseo/JM1
    # ln -s /disk2/FJSVfwseo/JM2 /var/opt/FJSVfwseo/JM2
    
    # ln -s /disk1/FAIXjmcal/post/sys1 /opt/FAIXjmcal/post/sys1
    # ln -s /disk2/FAIXjmcal/post/sys2 /opt/FAIXjmcal/post/sys2
    
    # ln -s /disk1/FAIXJOBSC/JOBDB1 /opt/FAIXJOBSC/db/JOBDB1
    # ln -s /disk2/FAIXJOBSC/JOBDB2 /opt/FAIXJOBSC/db/JOBDB2
    
    # ln -s /disk1/FAIXMJS/var/spool/mjes1 /var/spool/mjes/mjes1
    # ln -s /disk2/FAIXMJS/var/spool/mjes2 /var/spool/mjes/mjes2
    
    # ln -s /disk1/FAIXMJS/etc/mjes1 /etc/mjes/mjes1
    # ln -s /disk2/FAIXMJS/etc/mjes2 /etc/mjes/mjes2
    
    # ln -s /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1) 
    # ln -s /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
    *1

    This is required only if the Master Schedule Management function is enabled.

  2. Unmount the shared disks on the active node and deactivate the volumes by using the varyoffvg command for AIX.

    # umount /disk1
    # varyoffvg datavg1
    # umount /disk2
    # varyoffvg datavg2
  3. Activate the volumes by using the varyonvg command for AIX on the standby node, and then mount the shared disks.

    # varyonvg datavg1
    # mount /disk1
    # varyonvg datavg2
    # mount /disk2
  4. Create symbolic links on the standby node.

    # ln -s /disk1/FJSVfwseo/JM1 /var/opt/FJSVfwseo/JM1
    # ln -s /disk2/FJSVfwseo/JM2 /var/opt/FJSVfwseo/JM2
    
    # ln -s /disk1/FAIXjmcal/post/sys1 /opt/FAIXjmcal/post/sys1
    # ln -s /disk2/FAIXjmcal/post/sys2 /opt/FAIXjmcal/post/sys2
    
    # ln -s /disk1/FAIXJOBSC/JOBDB1 /opt/FAIXJOBSC/db/JOBDB1
    # ln -s /disk2/FAIXJOBSC/JOBDB2 /opt/FAIXJOBSC/db/JOBDB2
    
    # ln -s /disk1/FAIXMJS/var/spool/mjes1 /var/spool/mjes/mjes1
    # ln -s /disk2/FAIXMJS/var/spool/mjes2 /var/spool/mjes/mjes2
    
    # ln -s /disk1/FAIXMJS/etc/mjes1 /etc/mjes/mjes1
    # ln -s /disk2/FAIXMJS/etc/mjes2 /etc/mjes/mjes2
    
    # ln -s /disk1/FJSVstem/stemDB1 /var/opt/FJSVstem/stemDB1 (*1) 
    # ln -s /disk2/FJSVstem/stemDB2 /var/opt/FJSVstem/stemDB2 (*1)
    *1

    This is required only if the Master Schedule Management function is enabled.

  5. Unmount the shared disks on the standby node and deactivate the volumes by using the varyoffvg command for AIX.

    # umount /disk1
    # varyoffvg datavg1
    # umount /disk2
    # varyoffvg datavg2
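
Regardless of the cluster product, a final sanity check is to confirm that each relocated path resolves to somewhere under the expected shared-disk mount point. The following sketch uses the "/disk1" and "/disk2" mount points from the examples in this section; the `link_under_mount` helper name is illustrative.

```shell
# Sketch: confirm that "link" is a symbolic link whose target lives
# under the expected shared-disk mount point (a prefix check, not an
# exact-target check).
link_under_mount() {
  link="$1"
  mount_point="$2"
  t=$(readlink "$link" 2>/dev/null) || return 1
  case "$t" in
    "$mount_point"/*) return 0 ;;
    *) return 1 ;;
  esac
}
```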