ServerView Resource Orchestrator Cloud Edition V3.0.0 Installation Guide

B.3.2 Settings [Linux]

Perform setup on the admin server.
The flow of setup is as shown below.

Figure B.2 Manager Service Setup Flow

Set up the manager as a cluster service (cluster application) using the following procedure.
Perform the setup with OS administrator (root) authority.
If the image file storage directory was changed from the default directory (/var/opt/FJSVscw-deploysv/depot) during installation, configure the settings in 1. of step 6 so that the image file storage directory is also located on the shared disk.

This is not necessary when ServerView Deployment Manager is used in the same subnet.

  1. Stop cluster applications (Primary node)

    When adding to existing operations (cluster applications)
    When adding a manager to an existing operation (cluster application), use the cluster system's operation management view (Cluster Admin) to stop the operation (cluster application).

  2. Configure the shared disk and takeover logical IP address (Primary node/Secondary node)

    1. Shared disk settings

      Configure the shared disk using PRIMECLUSTER GDS.
      For details, refer to the PRIMECLUSTER Global Disk Services manual.

    2. Configure the takeover logical IP address

      Configure the takeover logical IP address using PRIMECLUSTER GLS.
      Because the takeover logical IP address must be activated in a later step of this procedure, do not register it as a PRIMECLUSTER resource (by executing the /opt/FJSVhanet/usr/sbin/hanethvrsc create command) at this point.

      When adding to existing operations (cluster applications)
      When using an existing takeover logical IP address, delete the PRIMECLUSTER GLS virtual interface information from the resources for PRIMECLUSTER.
      For details, refer to the PRIMECLUSTER Global Link Services manual.

  3. Mount the shared disk (Primary node)

    Mount the shared disk for managers on the primary node.

  4. Activate the takeover logical IP address (Primary node)

    On the primary node, activate the takeover logical IP address for the manager.
    For details, refer to the PRIMECLUSTER Global Link Services manual.

  5. Change manager startup settings (Primary node)

    Configure the manager so that its startup process is controlled by the cluster system rather than the OS.
    Execute the following command on the primary node.

    # /opt/FJSVrcvmr/cluster/bin/rcxclchkconfig setup <RETURN>

  6. Copy dynamic disk files (Primary node)

    Copy the files from the dynamic disk of the manager on the primary node to the shared disk for managers.

    1. Create the directory "shared_disk_mount_point/Fujitsu/ROR/SVROR" on the shared disk.

    2. Copy the directories and files on the local disk of the primary node to the created directory.
      Execute the following command.

      # tar cf - copy_target | tar xf - -C shared_disk_mount_point/Fujitsu/ROR/SVROR/ <RETURN>

      Note

      The following messages may be output when the tar command is executed. They have no effect on operations, so ignore them.

      • tar: Removing leading `/' from member names

      • tar: file_name: socket ignored

      Directories and Files to Copy

      • /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd

      • /etc/opt/FJSVrcvmr/customize_data

      • /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate

      • /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key

      • /etc/opt/FJSVrcvmr/sys/apache/conf

      • /var/opt/FJSVrcvmr

      • /etc/opt/FJSVscw-common (*1)

      • /var/opt/FJSVscw-common (*1)

      • /etc/opt/FJSVscw-tftpsv (*1)

      • /var/opt/FJSVscw-tftpsv (*1)

      • /etc/opt/FJSVscw-pxesv (*1)

      • /var/opt/FJSVscw-pxesv (*1)

      • /etc/opt/FJSVscw-deploysv (*1)

      • /var/opt/FJSVscw-deploysv (*1)

      • /etc/opt/FJSVscw-utils (*1)

      • /var/opt/FJSVscw-utils (*1)

      *1: Not necessary when ServerView Deployment Manager is used in the same subnet.
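The tar pipe shown above preserves ownership, permissions, and special files during the copy, which a plain recursive copy may not. The following is a minimal, self-contained sketch of the same pattern; it uses temporary directories as stand-ins for the real copy targets and the shared disk mount point (every path here is illustrative, not an actual admin server path):

```shell
set -eu
SRC=$(mktemp -d)    # stands in for a copy target such as /var/opt/FJSVrcvmr
DEST=$(mktemp -d)   # stands in for shared_disk_mount_point/Fujitsu/ROR/SVROR
mkdir -p "$SRC/sub"
printf 'config\n' > "$SRC/sub/file.conf"
chmod 600 "$SRC/sub/file.conf"

# The tar pipe copies the tree while preserving modes; GNU tar prints
# "Removing leading `/' from member names" to stderr, which can be ignored.
tar cf - "$SRC" | tar xf - -C "$DEST"

# Because tar strips the leading "/", the copy lands under $DEST$SRC.
ls -l "$DEST$SRC/sub/file.conf"
```

On the admin server, copy_target would be each of the directories and files listed above and the destination would be the shared disk mount point directory created in 1.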

    3. Rename the copied source directories and files listed below.

      Execute the following command. For target_file_name (target_directory_name), specify the source name with "_old" appended, such as source_file_name_old (source_directory_name_old).

      # mv -i source_file_name(source_directory_name) target_file_name(target_directory_name) <RETURN>

      • /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd

      • /etc/opt/FJSVrcvmr/customize_data

      • /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate

      • /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key

      • /etc/opt/FJSVrcvmr/sys/apache/conf

      • /var/opt/FJSVrcvmr

      • /etc/opt/FJSVscw-common (*1)

      • /var/opt/FJSVscw-common (*1)

      • /etc/opt/FJSVscw-tftpsv (*1)

      • /var/opt/FJSVscw-tftpsv (*1)

      • /etc/opt/FJSVscw-pxesv (*1)

      • /var/opt/FJSVscw-pxesv (*1)

      • /etc/opt/FJSVscw-deploysv (*1)

      • /var/opt/FJSVscw-deploysv (*1)

      • /etc/opt/FJSVscw-utils (*1)

      • /var/opt/FJSVscw-utils (*1)

      *1: Not necessary when ServerView Deployment Manager is used in the same subnet.
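The rename above can be sketched as follows. This sketch uses a temporary directory as a stand-in for the real /etc/opt and /var/opt paths, and omits the interactive -i flag so it runs unattended (all paths are illustrative):

```shell
set -eu
WORK=$(mktemp -d)   # stands in for /etc/opt/FJSVrcvmr on the admin server
mkdir -p "$WORK/customize_data"

# On the admin server the equivalent interactive command would be, e.g.:
#   mv -i /etc/opt/FJSVrcvmr/customize_data /etc/opt/FJSVrcvmr/customize_data_old
mv "$WORK/customize_data" "$WORK/customize_data_old"

ls -d "$WORK/customize_data_old"
```

Renaming the originals to *_old frees the local path names so that step 7 can create symbolic links of the same names pointing at the shared disk.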

  7. Configure symbolic links for the shared disk (Primary node)

    1. Configure symbolic links for the copied directories and files.

      Create symbolic links from the directories and files on the local disk of the primary node to the corresponding directories and files on the shared disk.
      Execute the following command.

      # ln -s shared_disk local_disk <RETURN>

      For shared_disk and local_disk, specify the corresponding paths listed in "Table B.8 Directories to Link" or "Table B.9 Files to Link".

      Table B.8 Directories to Link

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/customize_data
        Local disk: /etc/opt/FJSVrcvmr/customize_data

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate
        Local disk: /etc/opt/FJSVrcvmr/opt/FJSVssmgr/current/certificate

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/sys/apache/conf
        Local disk: /etc/opt/FJSVrcvmr/sys/apache/conf

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVrcvmr
        Local disk: /var/opt/FJSVrcvmr

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-common (*1)
        Local disk: /etc/opt/FJSVscw-common

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-common (*1)
        Local disk: /var/opt/FJSVscw-common

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-tftpsv (*1)
        Local disk: /etc/opt/FJSVscw-tftpsv

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-tftpsv (*1)
        Local disk: /var/opt/FJSVscw-tftpsv

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-pxesv (*1)
        Local disk: /etc/opt/FJSVscw-pxesv

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-pxesv (*1)
        Local disk: /var/opt/FJSVscw-pxesv

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-deploysv (*1)
        Local disk: /etc/opt/FJSVscw-deploysv

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-deploysv (*1)
        Local disk: /var/opt/FJSVscw-deploysv

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVscw-utils (*1)
        Local disk: /etc/opt/FJSVscw-utils

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/var/opt/FJSVscw-utils (*1)
        Local disk: /var/opt/FJSVscw-utils

      *1: Not necessary when ServerView Deployment Manager is used in the same subnet.

      Table B.9 Files to Link

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd
        Local disk: /opt/FJSVrcvmr/rails/config/rcx/rcxdb.pwd

      • Shared disk: shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
        Local disk: /etc/opt/FJSVrcvmr/rails/config/rcx_secret.key
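The ln -s command above can be sketched for one table entry as follows. Temporary directories stand in for the shared disk mount point and the local disk root; every path here is illustrative:

```shell
set -eu
SHARED=$(mktemp -d)      # stands in for shared_disk_mount_point/Fujitsu/ROR/SVROR
LOCALROOT=$(mktemp -d)   # stands in for the local disk root ("/")
mkdir -p "$SHARED/etc/opt/FJSVrcvmr/customize_data"
mkdir -p "$LOCALROOT/etc/opt/FJSVrcvmr"

# On the admin server the equivalent would be:
#   ln -s shared_disk_mount_point/Fujitsu/ROR/SVROR/etc/opt/FJSVrcvmr/customize_data \
#         /etc/opt/FJSVrcvmr/customize_data
ln -s "$SHARED/etc/opt/FJSVrcvmr/customize_data" \
      "$LOCALROOT/etc/opt/FJSVrcvmr/customize_data"

# The local path is now a symlink that resolves to the shared disk.
ls -ld "$LOCALROOT/etc/opt/FJSVrcvmr/customize_data"
```

Because the originals were renamed to *_old in 3. of step 6, each local path name is free to become a link, so both cluster nodes resolve the same data on the shared disk.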

    2. When changing the image file storage directory, perform the following.

      Refer to "1.7.2 rcxadm imagemgr" in the "Reference Guide (Resource Management) CE" and change the path of the image file storage directory, specifying a directory on the shared disk as the new location.

      This is not necessary when ServerView Deployment Manager is used in the same subnet.

  8. Change the manager admin LAN IP address (Primary node)

    Change the admin LAN IP address of the manager.
    Execute the following command.

    # /opt/FJSVrcvmr/bin/rcxadm mgrctl modify -ip IP_address <RETURN>

    For IP_address, specify the takeover logical IP address (the admin LAN IP address) activated in step 4.

  9. Deactivate the takeover logical IP address (Primary node)

    On the primary node, deactivate the takeover logical IP address for the manager.
    For details, refer to the PRIMECLUSTER Global Link Services manual.

  10. Unmount the shared disk (Primary node)

    Unmount the shared disk for managers from the primary node.

  11. Mount the shared disk (Secondary node)

    Mount the shared disk for managers on the secondary node.

  12. Change manager startup settings (Secondary node)

    Configure the manager so that its startup process is controlled by the cluster system rather than the OS.
    On the secondary node, execute the same command as used in step 5.

  13. Configure symbolic links for the shared disk (Secondary node)

    1. Rename the directories and files as described in 3. of step 6.

    2. Configure symbolic links for the shared disk.

      Create symbolic links from the directories and files on the local disk of the secondary node to the corresponding directories and files on the shared disk.
      The directories and files to set symbolic links for are the same as those for "Table B.8 Directories to Link" and "Table B.9 Files to Link".

  14. Unmount the shared disk (Secondary node)

    Unmount the shared disk for managers from the secondary node.

  15. Register takeover logical IP address resources (Primary node/Secondary node)

    On PRIMECLUSTER GLS, register the takeover logical IP address as a PRIMECLUSTER resource.

    Note

    When using an existing takeover logical IP address, it must be registered as a resource again, as it was deleted from the PRIMECLUSTER resources in step 2.

    For details, refer to the PRIMECLUSTER Global Link Services manual.

  16. Create cluster resources/cluster applications (Primary node)

    1. Use the RMS Wizard of the cluster system to create the necessary PRIMECLUSTER resources on the cluster service (cluster application).

      When creating a new cluster service (cluster application), select Application-Create, set the primary node as Machines[0] and the secondary node as Machines[1], and then create the following resources on the created cluster service (cluster application).

      Perform the RMS Wizard settings from any one of the nodes comprising the cluster.
      For details, refer to the PRIMECLUSTER manual.

      • Cmdline resources

        Create the Cmdline resources for Resource Orchestrator.
        On RMS Wizard, select "CommandLines" and perform the following settings.

        - Start script: /opt/FJSVrcvmr/cluster/cmd/rcxclstartcmd

        - Stop script: /opt/FJSVrcvmr/cluster/cmd/rcxclstopcmd

        - Check script: /opt/FJSVrcvmr/cluster/cmd/rcxclcheckcmd

        Note

        When specifying a value other than "nothing" for the StandbyTransitions attribute of a cluster service (cluster application), enable the ALLEXITCODES(E) and STANDBYCAPABLE(O) flags.

        When adding to existing operations (cluster applications)
        When adding Cmdline resources to an existing operation (cluster application), determine the startup priority order, taking into account the restrictions of the other components used in combination with the operation (cluster application).

      • Gls resources

        Configure the takeover logical IP address to use for the cluster system.
        On the RMS Wizard, select "Gls:Global-Link-Services", and set the takeover logical IP address.
        When using an existing takeover logical IP address, this operation is not necessary.

      • Fsystem resources

        Set the mount point of the shared disk.
        On the RMS Wizard, select "LocalFileSystems", and set the file system. When no mount point has been defined, refer to the PRIMECLUSTER manual and define one.

      • Gds resources

        Specify the shared disk settings created in step 2.
        On the RMS Wizard, select "Gds:Global-Disk-Services", and set the shared disk.

    2. Set the attributes of the cluster application.

      When you have created a new cluster service (cluster application), use the cluster system's RMS Wizard to set the attributes.

      • In the Machines+Basics settings, set "yes" for AutoStartUp.

      • In the Machines+Basics settings, set "HostFailure|ResourceFailure|ShutDown" for AutoSwitchOver.

      • In the Machines+Basics settings, set "yes" for HaltFlag.

      • When using hot standby for operations, in the Machines+Basics settings, set "ClearFaultRequest|StartUp|SwitchRequest" for StandbyTransitions.
        When configuring the HBA address rename setup service in cluster systems, ensure that hot standby operation is configured.

    3. After settings are complete, save the changes and perform Configuration-Generate and Configuration-Activate.

    Note

    When registering the admin LAN subnet, additional settings are required.
    For the setting method, refer to the [Linux] section of "Settings for Clustered Manager Configurations" in "2.10 Registering Admin LAN Subnets" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  17. Set up the HBA address rename setup service (Primary node/Secondary node)

    Configuring the HBA address rename setup service for cluster systems
    When configuring managers and the HBA address rename setup service in cluster systems, perform the following procedure.

    Not necessary when ServerView Deployment Manager is used in the same subnet.

    Performing the following procedure starts the HBA address rename setup service on the standby node in the cluster.

    1. HBA address rename setup service startup settings (Primary node)

      Configure the startup settings of the HBA address rename setup service.
      Execute the following command.

      # /opt/FJSVrcvhb/cluster/bin/rcvhbclsetup <RETURN>

    2. Configuring the HBA address rename setup service (Primary node)

      Configure the settings of the HBA address rename setup service.
      Execute the following command on the primary node.

      # /opt/FJSVrcvhb/bin/rcxhbactl modify -ip IP_address <RETURN>
      # /opt/FJSVrcvhb/bin/rcxhbactl modify -port port_number <RETURN>

      IP Address

      Specify the takeover logical IP address for the manager.

      Port number

      Specify the port number used for communication with the manager. The port number set during installation is 23461.

    3. HBA address rename setup service Startup Settings (Secondary node)

      Configure the startup settings of the HBA address rename setup service.
      On the secondary node, execute the same command as used in 1. of this step.

    4. Configuring the HBA address rename setup service (Secondary node)

      Configure the settings of the HBA address rename setup service.
      On the secondary node, execute the same command as used in 2. of this step.

  18. Start cluster applications (Primary node)

    Use the cluster system's operation management view (Cluster Admin) and start the manager cluster service (cluster application).

  19. Set up the HBA address rename setup service startup information (Secondary node)

    1. Execute the following command.

      # nohup /opt/FJSVrcvhb/bin/rcxhbactl start & <RETURN>

      The [HBA address rename setup service] dialog is displayed.

    2. Click <Stop>.

      Confirm that the "Status" becomes "Stopping".

    3. Click <Run>.

      Confirm that the "Status" becomes "Running".

    4. Click <Stop>.

      Confirm that the "Status" becomes "Stopping".

    5. Click <Cancel> and close the [HBA address rename setup service] dialog.

  20. Switch over cluster applications (Secondary node)

    Use the cluster system's operation management view (Cluster Admin) and switch the manager cluster service (cluster application) to the secondary node.

  21. Set up the HBA address rename setup service startup information (Primary node)

    The procedure is the same as step 19.

  22. Switch over cluster applications (Primary node)

    Use the cluster system's operation management view (Cluster Admin) and switch the manager cluster service (cluster application) to the primary node.