Perform this work after completing the settings in single-user mode on all the nodes of the copy destination cluster system.
Start all the nodes in multi-user mode.
Set up the Cluster Integrity Monitor (CIM).
Delete the CF node names that were used in the copy source, and set the CF node names to be used in the copy destination.
Perform this setting on any one of the nodes that make up the cluster system.
Example: The CF node names used in the copy source are fuji2 and fuji3, and those used in the copy destination are fuji4 and fuji5.
# rcqconfig -d fuji2 fuji3
# rcqconfig -a fuji4 fuji5
Checking the CF setting items
Check if the changed CF node name, CIP/SysNode name, and cluster name are correct.
Checking the CF node name and cluster name
Execute the cfconfig -g command on each node to check that the CF node name and cluster name are set correctly.
Example: When the CF node name used in the copy destination is fuji4, and the cluster name used in the copy destination is PRIMECLUSTER2
# cfconfig -g
fuji4 PRIMECLUSTER2 eth1 eth2
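As a rough illustration (not part of the product tooling), the check of the cfconfig -g output can be done mechanically. The sample output string below stands in for the real command on a cluster node, and the expected values follow the example above:

```shell
# Hedged sketch: compare the first two fields of cfconfig -g output against
# the expected CF node name and cluster name.
expected_node=fuji4
expected_cluster=PRIMECLUSTER2
# Sample text standing in for: output=$(cfconfig -g)
output="fuji4 PRIMECLUSTER2 eth1 eth2"
set -- $output
if [ "$1" = "$expected_node" ] && [ "$2" = "$expected_cluster" ]; then
    result="CF configuration OK"
else
    result="CF configuration mismatch: $output"
fi
echo "$result"
```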
Checking the CIP/SysNode name
Check that communication is possible with all the CIP/SysNode names set for the remote hosts. Check the communication status on all the nodes.
Example: When the SysNode name set in the remote host is fuji5RMS
# ping fuji5RMS
If an error occurs in either of the above checks, verify that the CF node name, CIP/SysNode name, and cluster name set in /etc/cip.cf, /etc/default/cluster, or /etc/hosts are correct.
If any of these settings are incorrect, follow the procedure below:
Start the system in single-user mode.
Perform "4. Change the CF node name, CIP/SysNode name, and the cluster name." of "L.3.2 Setup in Single-User Mode" again, and then restart the node.
Perform "L.3.3 Changing the Settings in Multi-User Mode" again.
Changing the cluster name of the Cluster Resource Management Facility
Change the cluster name of the Cluster Resource Management Facility.
Perform this setting on any one of the nodes that make up the cluster system.
Example: The new cluster name of the copy destination is "PRIMECLUSTER2."
# /etc/opt/FJSVcluster/bin/clsetrsc -n PRIMECLUSTER2 1
# /etc/opt/FJSVcluster/bin/clsetrsc -n PRIMECLUSTER2 2
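When the same rename must be applied to several resource IDs, a small dry-run loop avoids typing the command repeatedly. This is an illustrative sketch, not a documented procedure; the echo only prints each command, and the /tmp file name is arbitrary. Drop the echo to execute for real:

```shell
# Hedged dry-run: print one clsetrsc invocation per resource ID.
new_name=PRIMECLUSTER2
for rid in 1 2; do
    echo /etc/opt/FJSVcluster/bin/clsetrsc -n "$new_name" "$rid"
done > /tmp/clsetrsc_plan.txt
cat /tmp/clsetrsc_plan.txt   # review before running the commands
```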
Changing the SF settings
For the Blade server, change the CF node name, the slot number of the server blade, the SNMP community name, and the IP address of the management blade in the /etc/opt/SMAW/SMAWsf/SA_blade.cfg file.
Example: When changing the values as follows.
The SNMP community name: public -> private

CF node name    Slot number    IP address of management blade
fuji2 -> fuji4  1 -> 3         10.20.30.200 -> 10.20.30.202
fuji3 -> fuji5  1 -> 3         10.20.30.201 -> 10.20.30.203
[Before change]

community-string public
management-blade-ip 10.20.30.200
fuji2 1 cycle
management-blade-ip 10.20.30.201
fuji3 1 cycle

[After change]

community-string private
management-blade-ip 10.20.30.202
fuji4 3 cycle
management-blade-ip 10.20.30.203
fuji5 3 cycle
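The edit above can be made with any editor; as a hedged illustration only, the same transformation can be scripted with sed. The /tmp paths below are demonstration stand-ins (the real file is /etc/opt/SMAW/SMAWsf/SA_blade.cfg), and the values follow the example above:

```shell
# Scratch copy illustrating the before state of SA_blade.cfg.
cat > /tmp/SA_blade.cfg <<'EOF'
community-string public
management-blade-ip 10.20.30.200
fuji2 1 cycle
management-blade-ip 10.20.30.201
fuji3 1 cycle
EOF
# Rewrite the community name, management blade IPs, CF node names, and slots.
sed -e 's/^community-string public$/community-string private/' \
    -e 's/^management-blade-ip 10\.20\.30\.200$/management-blade-ip 10.20.30.202/' \
    -e 's/^management-blade-ip 10\.20\.30\.201$/management-blade-ip 10.20.30.203/' \
    -e 's/^fuji2 1 cycle$/fuji4 3 cycle/' \
    -e 's/^fuji3 1 cycle$/fuji5 3 cycle/' \
    /tmp/SA_blade.cfg > /tmp/SA_blade.cfg.new
cat /tmp/SA_blade.cfg.new   # review before replacing the original file
```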
For PRIMERGY, except for the Blade server, change the entries for the CF node names and the IP address for IPMI (BMC or iRMC) in "/etc/opt/SMAW/SMAWsf/SA_ipmi.cfg".
Example: When changing the values as follows.
CF node name    IP address for IPMI (BMC or iRMC)
fuji2 -> fuji4  10.20.30.200 -> 10.20.30.202
fuji3 -> fuji5  10.20.30.201 -> 10.20.30.203
[Before change]

fuji2 10.20.30.200:root:D0860AB04E1B8FA3 cycle
fuji3 10.20.30.201:root:D0860AB04E1B8FA3 cycle

[After change]

fuji4 10.20.30.202:root:D0860AB04E1B8FA3 cycle
fuji5 10.20.30.203:root:D0860AB04E1B8FA3 cycle
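As a rough sketch only (not a documented procedure), the SA_ipmi.cfg rewrite can likewise be scripted; here awk rewrites the leading node/IP pair of each line. The /tmp paths are demonstration stand-ins for /etc/opt/SMAW/SMAWsf/SA_ipmi.cfg, and the values follow the example above:

```shell
# Scratch copy illustrating the before state of SA_ipmi.cfg.
cat > /tmp/SA_ipmi.cfg <<'EOF'
fuji2 10.20.30.200:root:D0860AB04E1B8FA3 cycle
fuji3 10.20.30.201:root:D0860AB04E1B8FA3 cycle
EOF
# Replace the CF node name and IPMI address at the start of each entry.
awk '{ sub(/^fuji2 10\.20\.30\.200/, "fuji4 10.20.30.202");
       sub(/^fuji3 10\.20\.30\.201/, "fuji5 10.20.30.203");
       print }' /tmp/SA_ipmi.cfg > /tmp/SA_ipmi.cfg.new
cat /tmp/SA_ipmi.cfg.new   # review before replacing the original file
```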
For PRIMEQUEST, execute the following procedure:
Change the setting of PSA/Svmco and MMB. For details on the setting methods, see the following manuals:
PRIMEQUEST 1000 Series
- "PRIMEQUEST 1000 Series Installation Manual"
- "PRIMEQUEST 1000 Series ServerView Mission Critical Option User Manual"
PRIMEQUEST 2000 Series
- "PRIMEQUEST 2000 Series Installation Manual"
- "PRIMEQUEST 2000 Series ServerView Mission Critical Option User Manual"
You need to create an RMCP user so that PRIMECLUSTER can link with the MMB units. On every PRIMEQUEST instance that makes up the PRIMECLUSTER system, be sure to create a user who uses RMCP to control the MMB. To create this user, log in to the MMB Web-UI and create the user from the "Remote Server Management" window of the "Network Configuration" menu. Create the user as shown below.
- Set [Privilege] to "Admin".
- Set [Status] to "Enabled".
For details about creating a user who uses RMCP to control the MMB, see the following manuals:
PRIMEQUEST 1000 Series
- "PRIMEQUEST 1000 Series Tool Reference"
PRIMEQUEST 2000 Series
- "PRIMEQUEST 2000 Series Tool Reference"
Delete the MMB information of the copy source CF node.
Example: Delete the MMB information of fuji2, fuji3 on the copy source.
# /etc/opt/FJSVcluster/bin/clmmbsetup -d fuji2
# /etc/opt/FJSVcluster/bin/clmmbsetup -d fuji3
Execute the "clmmbsetup -a" command and register the MMB information of the copy destination nodes.
For information on how to use the "clmmbsetup" command, see the "clmmbsetup" manual page.
# /etc/opt/FJSVcluster/bin/clmmbsetup -a mmb-user
Enter User's Password:
Re-enter User's Password:
For mmb-user and User's Password, enter the user and password created in Step a.
Check that the MMB asynchronous monitoring daemon has started on all the nodes.
# /etc/opt/FJSVcluster/bin/clmmbmonctl
If "The devmmbd daemon exists." is displayed, the MMB asynchronous monitoring daemon has started.
If "The devmmbd daemon does not exist." is displayed, the MMB asynchronous monitoring daemon has not started. Execute the following command to start it.
# /etc/opt/FJSVcluster/bin/clmmbmonctl start
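The check-then-start logic above can be wrapped in a small conditional. This is an illustrative sketch only: the sample string stands in for the real clmmbmonctl output on a cluster node, and the block merely records the action it would take rather than starting the daemon:

```shell
# Hedged wrapper: decide whether to start the MMB asynchronous monitoring
# daemon based on the clmmbmonctl status text.
# Sample text standing in for: status=$(/etc/opt/FJSVcluster/bin/clmmbmonctl)
status="The devmmbd daemon does not exist."
case "$status" in
    *"does not exist"*)
        action="start"   # would run: /etc/opt/FJSVcluster/bin/clmmbmonctl start
        ;;
    *exists*)
        action="none"    # daemon already running
        ;;
esac
echo "action: $action"
```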
Restore the saved rcsd.org file to the rcsd.cfg file.
# mv /etc/opt/SMAW/SMAWsf/rcsd.org /etc/opt/SMAW/SMAWsf/rcsd.cfg
Change the CF node names and the IP address of the administrative LAN (admIP) described in /etc/opt/SMAW/SMAWsf/rcsd.cfg.
Example: When changing the values as follows
CF node name    IP address of administrative LAN
fuji2 -> fuji4  10.20.30.100 -> 10.20.30.102
fuji3 -> fuji5  10.20.30.101 -> 10.20.30.103
[Before change]

fuji2,weight=1,admIP=10.20.30.100:agent=SA_lkcd,timeout=25:SA_ipmi,timeout=25
fuji3,weight=1,admIP=10.20.30.101:agent=SA_lkcd,timeout=25:SA_ipmi,timeout=25

[After change]

fuji4,weight=1,admIP=10.20.30.102:agent=SA_lkcd,timeout=25:SA_ipmi,timeout=25
fuji5,weight=1,admIP=10.20.30.103:agent=SA_lkcd,timeout=25:SA_ipmi,timeout=25
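As with the shutdown agent files, the rcsd.cfg rewrite can be scripted; the sed sketch below is illustrative only. The /tmp paths stand in for /etc/opt/SMAW/SMAWsf/rcsd.cfg, and the node names and admIP values follow the example above:

```shell
# Scratch copy illustrating the before state of rcsd.cfg.
cat > /tmp/rcsd.cfg <<'EOF'
fuji2,weight=1,admIP=10.20.30.100:agent=SA_lkcd,timeout=25:SA_ipmi,timeout=25
fuji3,weight=1,admIP=10.20.30.101:agent=SA_lkcd,timeout=25:SA_ipmi,timeout=25
EOF
# Rewrite the CF node name and admIP at the start of each entry; the agent
# fields are unchanged.
sed -e 's/^fuji2,weight=1,admIP=10\.20\.30\.100/fuji4,weight=1,admIP=10.20.30.102/' \
    -e 's/^fuji3,weight=1,admIP=10\.20\.30\.101/fuji5,weight=1,admIP=10.20.30.103/' \
    /tmp/rcsd.cfg > /tmp/rcsd.cfg.new
cat /tmp/rcsd.cfg.new   # review before replacing the original file
```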
When kdump is used to collect the crash dump on PRIMERGY, including the Blade server, set up the kdump shutdown agent. Execute the following command on any one of the nodes.
# /etc/opt/FJSVcllkcd/bin/panicinfo_setup
panicinfo_setup: WARNING: /etc/panicinfo.conf file already exists.
(I)nitialize, (C)opy or (Q)uit (I/C/Q) ? <- Input I
Start up the Shutdown Facility.
# sdtool -b
Use sdtool -s to confirm whether the shutdown daemon (rcsd) is active.
# sdtool -s
Executing sdtool -s on all the nodes confirms the composition of the shutdown facility.
Note
Confirm that the shutdown facility operates normally from the output of the sdtool -s command.
Even after the setup of the shutdown facility is complete, the configuration settings of the agent or hardware may be incorrect if any of the following statuses are displayed:
- InitFailed is displayed in Init State.
- Unknown or TestFailed is displayed in Test State.
In that case, check whether an error message has been output to the /var/log/messages file, and take corrective action according to the content of the message.
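The scan for the problem states above can be rough-scripted. This sketch is illustrative only: the sample line does not reproduce the exact sdtool -s column layout, and a real check should inspect the Test State and Init State columns specifically rather than grep the whole line, since "Unknown" can legitimately appear in other columns:

```shell
# Hedged scan for problem states in shutdown facility output.
# Sample text standing in for: output=$(sdtool -s)
output="fuji4 SA_ipmi Idle Unknown TestFailed InitWorked"
if printf '%s\n' "$output" | grep -Eq 'InitFailed|TestFailed'; then
    verdict="needs attention: check /var/log/messages"
else
    verdict="states look normal"
fi
echo "$verdict"
```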