
9.1.2 Changing a CF Node Name

This section describes how to change the CF node name after building a PRIMECLUSTER system.

Operation Procedure:

The following explains the operation procedure, using an example in which the CF node names fuji2 and fuji3 are changed to fuji4 and fuji5.

  1. When using RMS, stop the automatic RMS startup.
    Check the current setting for the automatic RMS startup, and then change it as follows. Perform this operation on all nodes where RMS is used.

    # hvsetenv HV_RCSTART
    1 <- Check this value.
    • If "0" is set, RMS has been stopped. Go to Step 2.

    • If "1" is set, execute the following to stop the automatic RMS startup.

      # hvsetenv HV_RCSTART 0
      # hvsetenv HV_RCSTART
      0 <- Check that "0" is output.
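
    The same change can also be applied to all nodes from one terminal. The following is a minimal sketch, assuming root ssh access between the nodes and that hvsetenv is found in the remote PATH (the node names are those of this example):

    # for node in fuji2 fuji3; do ssh root@$node 'hvsetenv HV_RCSTART 0; hvsetenv HV_RCSTART'; done
    0
    0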
  2. When using the GFS Shared File System, back up the GFS configuration information with the following procedure:

    1. Before the change, back up the management partition information of the GFS Shared File System.

      Execute the following command on any running node.

      # sfcgetconf _backup_file_

      In the above example, sfcgetconf(1M) generates a shell script named _backup_file_ in the current directory.
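
      Before proceeding, it is worth confirming that the generated script exists and is not empty, for example:

      # ls -l _backup_file_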

    2. Check the startup method of the sfcfrmd daemon.

      # sfcsetup -m
      wait_bg

      Record the output value.

      This value is used when restoring the GFS configuration information after changing the CF node name.

    3. Check the management partition information of GFS.

      # sfcinfo -a
      /dev/sfdsk/gfs01/dsk/volume01:
      FSID special                                      size  Type  mount
         1 /dev/sfdsk/gfs01/dsk/volume01(15000000021)  14422  META  -----
         1 /dev/sfdsk/gfs01/dsk/volume01(15000000021)   5116  LOG   -----
         1 /dev/sfdsk/gfs01/dsk/volume01(15000000021)  95112  DATA  -----
      # sfcrscinfo -m -a
      /dev/sfdsk/gfs01/dsk/volume01:
      FSID  MDS/AC  STATE  S-STATE   RID-1   RID-2   RID-N  hostname
         1  MDS(P)  stop   -             0       0       0  host2
         1  AC      stop   -             0       0       0  host2
         1  MDS(S)  stop   -             0       0       0  host3
         1  AC      stop   -             0       0       0  host3

      Save the output result.

      This information is used for checking the restored configuration information of GFS after changing the CF node name.
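
      To simplify that check, the same output can also be saved to files that survive the reboots, for example:

      # sfcinfo -a > /var/tmp/sfcinfo.before
      # sfcrscinfo -m -a > /var/tmp/sfcrscinfo.before

      The file names are only an example; these files are used again in Step 13-6.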

  3. Start all nodes in single-user mode.

  4. Change the CF node name and the CIP/Sysnode name.

    Perform the following operation on all nodes. (A scripted sketch of the substitutions follows this step.)

    Note

    For the naming convention of the CF node name, see "5.1.1 Setting Up CF and CIP."

    1. In /etc/cip.cf, change the CF node name string within the CF node names and the CIP/Sysnode names.

      [Before change]
      fuji2       fuji2RMS:netmask:255.255.255.0
      fuji3       fuji3RMS:netmask:255.255.255.0
      [After change]
      fuji4       fuji4RMS:netmask:255.255.255.0
      fuji5       fuji5RMS:netmask:255.255.255.0
    2. In /etc/inet/hosts, change the CF node name string within the CIP/Sysnode names.

      [Before change]
      192.168.0.1     fuji2RMS
      192.168.0.2     fuji3RMS
      [After change]
      192.168.0.1     fuji4RMS
      192.168.0.2     fuji5RMS
    3. Change the CF node name described in /etc/default/cluster.

      [Before change]
      nodename fuji2
      clustername PRIMECLUSTER1
      device /dev/hme2
      device /dev/hme3
      [After change]
      nodename fuji4
      clustername PRIMECLUSTER1
      device /dev/hme2
      device /dev/hme3
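
    When the old CF node names occur in these files only where they are to be replaced, the rewrite can also be scripted. The following is a minimal sketch for this example; it keeps a copy of each original file, and the results should always be reviewed before continuing. In /etc/inet/hosts only the CIP/Sysnode names are rewritten, so that a plain host name entry such as fuji2, if present, stays unchanged:

    for f in /etc/cip.cf /etc/default/cluster; do
      cp -p $f $f.orig
      sed -e 's/fuji2/fuji4/g' -e 's/fuji3/fuji5/g' $f.orig > $f
    done
    cp -p /etc/inet/hosts /etc/inet/hosts.orig
    sed -e 's/fuji2RMS/fuji4RMS/g' -e 's/fuji3RMS/fuji5RMS/g' /etc/inet/hosts.orig > /etc/inet/hosts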
  5. Delete the file of the Cluster Resource Management Facility.

    If the /etc/opt/FJSVcluster/FJSVcldbm/config/shmno file exists, delete it.

    # rm /etc/opt/FJSVcluster/FJSVcldbm/config/shmno
  6. Change the node name of the Cluster Resource Management Facility.

    Note

    This procedure is unnecessary when the Cluster Resource Management Facility has not been set up.

    Execute the following command to change the node name of the Cluster Resource Management Facility.

    # /etc/opt/FJSVcluster/bin/clchgnodename
  7. Delete the information in the management partition of GFS.

    Note

    This procedure is unnecessary when the GFS Shared File System is not being used.

    Delete the information in the management partition of the GFS Shared File System. Execute the following command on all nodes.

    # rm /var/opt/FJSVgfs/sfcfsrm.conf
  8. Delete the /etc/opt/SMAW/SMAWsf/rcsd.cfg file of the shutdown facility.

    Execute the following on all nodes.

    # rm /etc/opt/SMAW/SMAWsf/rcsd.cfg
  9. Start all nodes in multi-user mode.

  10. Set up the Cluster Integrity Monitor (CIM).

    Delete the CF node names used before change, and set the CF node names to be used after change.

    Perform this setting on any one of the nodes that constitute the cluster system.

    Example: The CF node names used before change are fuji2 and fuji3, and those to be used after change are fuji4 and fuji5.

    # rcqconfig -d fuji2 fuji3
    # rcqconfig -a fuji4 fuji5
  11. Check the CF settings.

    Check if the changed CF node name and CIP/SysNode name are correct.

    1. Check the CF node name

      Execute the "cfconfig -g" command on each node to check if the set CF node name is correct.

      Example: When the changed CF node name is fuji4

      # cfconfig -g
      fuji4 PRIMECLUSTER1 /dev/hme2 /dev/hme3
    2. Check the CIP/Sysnode name

      Check that communication is possible with all the CIP/Sysnode names set on the remote hosts. Check the communication status on all nodes. (A loop over all the Sysnode names is sketched after the example below.)

      Example: When the Sysnode name set in the remote host is fuji5RMS

      # ping fuji5RMS
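
      The check can also be run in one pass on each node; a sketch for this example (on Solaris, ping reports that a reachable host "is alive"):

      # for n in fuji4RMS fuji5RMS; do ping $n; done
      fuji4RMS is alive
      fuji5RMS is alive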

    If an error occurs in either of the above checks, check whether the CF node names and CIP/SysNode names set in /etc/cip.cf, /etc/default/cluster, or /etc/inet/hosts are correct.

    If an error is found, perform the following procedure:

    1. Start the system in single-user mode.

    2. Perform "4. Change the CF node name and the CIP/Sysnode name" and start the node again.

    3. Perform "10. Set up the Cluster Integrity Monitor (CIM)" again.

  12. Set up the shutdown facility.

    1. Delete the information of the asynchronous monitoring that was used before changing the CF node name.

      For SPARC M10 and M12

      Execute the following command on any node to check that the information of the SNMP asynchronous monitoring that was used before changing the CF node name is displayed.

      # /etc/opt/FJSVcluster/bin/clsnmpsetup -l

      Execute the following commands on any node to delete the information of the SNMP asynchronous monitoring that was used before changing the CF node name.

      # /etc/opt/FJSVcluster/bin/clsnmpsetup -d fuji2
      # /etc/opt/FJSVcluster/bin/clsnmpsetup -d fuji3

      Execute the following command on any node to check that information of the SNMP asynchronous monitoring is not displayed.

      # /etc/opt/FJSVcluster/bin/clsnmpsetup -l

      For SPARC Enterprise M3000, M4000, M5000, M8000, M9000, SPARC Enterprise T5120, T5220, T5140, T5240, T5440, SPARC T3, T4, T5, T7, S7 series

      Execute the following command on any one node to check that the information of the console asynchronous monitoring that was used before changing the CF node name is displayed.

      # /etc/opt/FJSVcluster/bin/clrccusetup -l

      Execute the following command on any node to delete the information of the console asynchronous monitoring that was used before changing the CF node name.

      # /etc/opt/FJSVcluster/bin/clrccusetup -d fuji2
      # /etc/opt/FJSVcluster/bin/clrccusetup -d fuji3

      Execute the following command on any node to check that the information of the console asynchronous monitoring is not displayed.

      # /etc/opt/FJSVcluster/bin/clrccusetup -l

      For SPARC Enterprise T1000 and T2000

      Delete the /etc/opt/SMAW/SMAWsf/SA_sunF.cfg file.

      # rm /etc/opt/SMAW/SMAWsf/SA_sunF.cfg
    2. See "5.1.2 Configuring the Shutdown Facility" to set the shutdown facility again.

  13. When using the GFS Shared File System, restore the GFS configuration information with the following procedure:

    1. Reinitialize the management partition on any one node.

      Example: Initializing the /dev/sfdsk/gfs/rdsk/control file as the management partition.

      # sfcsetup -cf /dev/sfdsk/gfs/rdsk/control
    2. Reregister the information of the configuration node on each node.

      # sfcsetup -a /dev/sfdsk/gfs/rdsk/control
    3. On any node, redo the setting of the startup method of the sfcfrmd daemon recorded in Step 2-2.

      Example: when the startup method of the sfcfrmd daemon is wait_bg

      # sfcsetup -m wait_bg

      Note

      This procedure is required only when the startup method of the sfcfrmd daemon has been changed from the default value wait.

    4. Confirm that the management partition is reinitialized.

      The path name of the management partition for which the settings were made can be confirmed by executing the "sfcsetup(1M)" command with the -p option.

      # sfcsetup -p
      /dev/sfdsk/gfs/rdsk/control

      The registered node information can be confirmed by executing the "sfcsetup(1M)" command without any option.

      Check that the value of CIPNAME is the CIP/Sysnode name that was changed in Step 4-2.

      # sfcsetup
      HOSTID          CIPNAME         MP_PATH
      80380000        fuji4RMS        yes
      80380001        fuji5RMS        yes

      The startup method of the sfcfrmd daemon can be confirmed by executing the "sfcsetup(1M)" command with the -m option.

      # sfcsetup -m
      wait_bg
    5. Start the sfcfrmd daemon by executing the following command on all nodes.

      # sfcfrmstart
    6. Restore the information of the management partition.

      Execute the shell script generated in Step 2-1 on any node.

      # sh _backup_file_
      get other node information start ... end

      Confirm that restoration of the management partition of GFS was successful by running the "sfcinfo(1M)" command and the "sfcrscinfo(1M)" command.

      Moreover, confirm that the information is the same as that in Step 2-3. (If the output was saved to files in Step 2-3, a mechanical comparison is sketched after the output below.)

      # sfcinfo -a
      /dev/sfdsk/gfs01/dsk/volume01:
      FSID special                                      size  Type  mount
         1 /dev/sfdsk/gfs01/dsk/volume01(15000000021)  14422  META  -----
         1 /dev/sfdsk/gfs01/dsk/volume01(15000000021)   5116  LOG   -----
         1 /dev/sfdsk/gfs01/dsk/volume01(15000000021)  95112  DATA  -----
      # sfcrscinfo -m -a
      /dev/sfdsk/gfs01/dsk/volume01:
      FSID  MDS/AC  STATE  S-STATE   RID-1   RID-2   RID-N  hostname
         1  MDS(P)  stop   -             0       0       0  host2
         1  AC      stop   -             0       0       0  host2
         1  MDS(S)  stop   -             0       0       0  host3
         1  AC      stop   -             0       0       0  host3
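
      If the output of Step 2-3 was saved to files as sketched there, the comparison can be done mechanically; no output from diff means the information is unchanged:

      # sfcinfo -a > /var/tmp/sfcinfo.after
      # diff /var/tmp/sfcinfo.before /var/tmp/sfcinfo.after
      # sfcrscinfo -m -a > /var/tmp/sfcrscinfo.after
      # diff /var/tmp/sfcrscinfo.before /var/tmp/sfcrscinfo.after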
    7. Mount the GFS Shared File System on all nodes.

      # sfcmntgl <mount point>

    Perform the following procedures only when a cluster application has been set up. Otherwise, they are unnecessary.

  14. Obtain the RMS Configuration name. Output the file content as shown below; the character string after "-c" is the RMS Configuration name. Perform this operation on any one of the nodes constituting the cluster.

    Example

    If the RMS Configuration name is 'config'

    # cat /opt/SMAW/SMAWRrms/etc/CONFIG.rms
    hvcm -c config <- RMS Configuration name
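
    The name can also be captured into a shell variable for reuse in the following steps; a sketch, assuming CONFIG.rms contains a single hvcm line as above:

    # RMSCONF=`awk '{for (i = 1; i < NF; i++) if ($i == "-c") print $(i + 1)}' /opt/SMAW/SMAWRrms/etc/CONFIG.rms`
    # echo $RMSCONF
    config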
  15. Back up the configuration file. Perform this operation on all cluster nodes.

    Execute the following command.

    cd /opt/SMAW/SMAWRrms/build/wizard.d
    cp -rp <RMS Configuration name> <RMS Configuration name>.backup

    When the takeover network resource is being used, execute the following command on all cluster nodes.

    cp -p /opt/SMAW/SMAWRrms/etc/hvipalias /opt/SMAW/SMAWRrms/etc/hvipalias.backup

    Example

    If <RMS Configuration name> is config

    # cd /opt/SMAW/SMAWRrms/build/wizard.d
    # cp -rp config config.backup
    # cp -p /opt/SMAW/SMAWRrms/etc/hvipalias /opt/SMAW/SMAWRrms/etc/hvipalias.backup
  16. Modify the SysNode names in the configuration file. Perform this operation on the same node where Step 14 was performed. (A scripted sketch of Steps 16-2 to 16-5 follows this step.)

    1. Move to the directory where the configuration file is stored.

      # cd /opt/SMAW/SMAWRrms/build/wizard.d/<RMS Configuration name>

      Example

      If <RMS Configuration name> is config

      # cd /opt/SMAW/SMAWRrms/build/wizard.d/config
    2. Search for the target files with the following command.

      grep -l HvpMachine *

      Example

      # grep -l HvpMachine *
      userApp_0.m
      userApp_1.m
          :

      The displayed ".m" file is the target file.

    3. Search for and change the lines starting with "HvpMachine" in each displayed file. Open the file with the vi command.

      # vi userApp_0.m
      HvpApplication=userApp_0
      HvpAutoBreak=yes
         :
      (Omitted)
         :
      HvpMachine000=fuji2RMS  <- Target line to be changed
      HvpMachine001=fuji3RMS  <- Target line to be changed
         :
      (Omitted)

      Change the SysNode name set on the right side of "=" to the new SysNode name.

      The SysNode name is formed by appending "RMS" (in capital letters) to the CF node name.

      The following example shows the case of changing the CF node names from fuji2 and fuji3 to fuji4 and fuji5.

      [Before change]
      HvpMachine000=fuji2RMS
      HvpMachine001=fuji3RMS
      [After change]
      HvpMachine000=fuji4RMS
      HvpMachine001=fuji5RMS

      Perform this procedure on all the ".m" files obtained in Step 16-2.

    4. Check that the old SysNode names no longer appear.

      Example

      # grep HvpMachine *.m
      userApp_0.m:HvpMachine000=fuji4RMS
      userApp_0.m:HvpMachine001=fuji5RMS
      userApp_1.m:HvpMachine000=fuji4RMS
      userApp_1.m:HvpMachine001=fuji5RMS
    5. Using the mv command, rename the <SysNode name>.s files under the directory where the configuration file is stored, from the old SysNode names to the new SysNode names.

      Example

      # mv fuji2RMS.s fuji4RMS.s
      # mv fuji3RMS.s fuji5RMS.s
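
    Steps 16-2 to 16-5 can also be scripted. The following is a minimal sketch for this example; the backup taken in Step 15 makes it safe to redo if the result is not as expected:

    cd /opt/SMAW/SMAWRrms/build/wizard.d/config
    for f in `grep -l HvpMachine *.m`; do
      sed -e '/^HvpMachine/s/fuji2RMS/fuji4RMS/' \
          -e '/^HvpMachine/s/fuji3RMS/fuji5RMS/' $f > $f.new && mv $f.new $f
    done
    mv fuji2RMS.s fuji4RMS.s
    mv fuji3RMS.s fuji5RMS.s
    grep HvpMachine *.m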
  17. Modify the CF node names of the procedure resource, line switching unit resource, and patrol diagnosis. Perform this operation on the same node where Step 16 was performed. (A scripted sketch follows this step as well.)

    This procedure is unnecessary if the procedure resource, line switching unit resource, and patrol diagnosis are not being used.

    1. Move to the directory where the configuration file is stored.

      # cd /opt/SMAW/SMAWRrms/build/wizard.d/<RMS Configuration name>

      Example

      If <RMS Configuration name> is config

      # cd /opt/SMAW/SMAWRrms/build/wizard.d/config
    2. Search for the target files with the following command.

      grep -l HvpCrmScopeFilter *

      Example

      # grep -l HvpCrmScopeFilter *
      Procedure0.m
      Procedure1.m
         :

      The displayed ".m" file is the target file.

    3. Search for and change the line starting with "HvpCrmScopeFilter" in each displayed file. Open the file with the vi command.

      # vi Procedure0.m
      HvpApplType=RESOURCE
      HvpApplication=Procedure0
      HvpClassNameFilter=BasicApplication
      HvpConsistent=consistent
      HvpCrmFlags000=OT1800
      HvpCrmResourceId000=33
      HvpCrmResourceName000=SDISK
      HvpCrmScopeFilter=fuji2:fuji3  <- Target line to be changed.
      HvpPlugin=BasicApplication
      HvpPreCheckHeritageIn=''
         :
      (Omitted)

      Change the CF node name set on the right side of "=" to the new CF node name.

      The following example shows the case of changing the CF node names from fuji2 and fuji3 to fuji4 and fuji5.

      [Before change]
      HvpCrmScopeFilter=fuji2:fuji3
      [After change]
      HvpCrmScopeFilter=fuji4:fuji5

      Perform this procedure on all the ".m" files displayed in Step 17-2.

    4. Check that the old CF node names no longer appear.

      grep HvpCrmScopeFilter *.m

      Example

      # grep HvpCrmScopeFilter *.m
      Procedure0.m:HvpCrmScopeFilter=fuji4:fuji5
      Procedure1.m:HvpCrmScopeFilter=fuji4:fuji5
         :
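
    As in Step 16, the substitution can be scripted; a minimal sketch for this example:

    for f in `grep -l HvpCrmScopeFilter *.m`; do
      sed '/^HvpCrmScopeFilter/s/fuji2:fuji3/fuji4:fuji5/' $f > $f.new && mv $f.new $f
    done
    grep HvpCrmScopeFilter *.m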
  18. When the takeover network resource is being used, change the CF node name in the hvipalias file.

    This file is stored in /opt/SMAW/SMAWRrms/etc/hvipalias. If the file does not exist or there is no CF node name in the file, this procedure is unnecessary because the takeover network resource is not being used.

    For each line, the CF node name is set in the first field and the host name of the takeover network is set in the second field. Change the CF node name of each line to the new name. Perform this operation on all the nodes constituting the cluster. (A sketch of the substitution follows the example below.)

    Example

    When the CF node names are changed from fuji2 and fuji3 to fuji4 and fuji5.

    [Before change]
    fuji2  hostname    net0  0xffffff00        # sh_rid=34 rid=32 192.168.100.233
    fuji3  hostname    net0  0xffffff00        # sh_rid=34 rid=33 192.168.100.233
    [After change]
    fuji4  hostname    net0  0xffffff00        # sh_rid=34 rid=32 192.168.100.233
    fuji5  hostname    net0  0xffffff00        # sh_rid=34 rid=33 192.168.100.233
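
    Because the CF node name is anchored at the start of each line, a simple substitution is sufficient. A sketch, assuming no other entry begins with the old names and using the backup taken in Step 15 as the source:

    sed -e 's/^fuji2/fuji4/' -e 's/^fuji3/fuji5/' \
        /opt/SMAW/SMAWRrms/etc/hvipalias.backup > /opt/SMAW/SMAWRrms/etc/hvipalias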
  19. Execute the following command on the same node where Step 14 was performed.

    hvw -F Configuration-Activate -xj -n <RMS Configuration name>

    Example

    # hvw -F Configuration-Activate -xj -n config

    In the command output, check the items marked (1) and (2) below.

    Testing for RMS to be up somewhere in the cluster ... done.
    
    Arranging sub applications topologically ... done.
    
    Check for all applications being consistent ... done.
    
    Running overall consistency check ... done.
    
    Generating pseudo code [one dot per (sub) application]: ..... done.
    
    Generating RMS resources [one dot per
    resource]: ............................................................ done
    
    
    hvbuild using /usr/opt/reliant/build/wizard.d/config/config.us
    About to distribute the new configuration data to hosts: fuji4RMS,fuji5RMS (1)
    
    The new configuration was distributed successfully.
    
    About to put the new configuration in effect ... done.
    
    The activation has finished successfully. (2)
    #

    In (1), check that the new SysNode name is output.

    In (2), check that "The activation has finished successfully." is displayed.

    If the output results are different, check the following and take a necessary action:

    • Check if RMS is in operation on any node that constitutes a cluster. If it is, stop RMS.

    • Check that all nodes that constitute a cluster run in multi-user mode.

    • Check that the host names in the /etc/hosts file and the CF node names have been changed on all nodes.

    After taking the necessary action, perform the operation in Step 19 again.

  20. If the automatic RMS startup setting was changed in Step 1, restore it with the following procedure. Perform this operation on all nodes where RMS is used.

    # hvsetenv HV_RCSTART 1
    # hvsetenv HV_RCSTART
    1 <- Check that "1" is output.