Perform this work after completing the settings in single-user mode on all nodes in the cluster system of the copy destination.
Start all nodes in multi-user mode.
At this time, the following error message may be output to the console. However, no corrective action is necessary.
SDX:sdxservd: ERROR: class_name: failed to start shared volumes, class closed down, node=node_name
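If the nodes are still running in single-user mode from the previous work, one way to start them in multi-user mode is to reboot them so that they come up at the default run level. A minimal sketch, assuming the default boot environment starts multi-user mode:
# shutdown -y -g0 -i6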
Set up the Cluster Integrity Monitor (CIM).
Delete the CF node names that were used in the copy source, and set the CF node names to be used in the copy destination.
Perform the settings on any one of the nodes that make up the cluster system.
Example: The CF node names used in the copy source are fuji2 and fuji3, and those used in the copy destination are fuji4 and fuji5.
# rcqconfig -d fuji2 fuji3
# rcqconfig -a fuji4 fuji5
Check the CF setting item.
Check if the changed CF node name, CIP/SysNode name, cluster name, and device names of the interconnect are correct.
Checking the CF node name, cluster name, and device names of the interconnect.
Execute the cfconfig -g command on each node to check if the set CF node name, cluster name, and device names of the interconnect are correct.
Example: When the CF node name is fuji4, the cluster name is PRIMECLUSTER2, and the device names of the interconnect are /dev/vnet2 and /dev/vnet3 in the copy destination.
# cfconfig -g
fuji4 PRIMECLUSTER2 /dev/vnet2 /dev/vnet3
Checking the CIP/SysNode name.
Check that communication is possible with all the CIP/SysNode names set for the remote host. Check the communication status on all nodes.
Example: When the SysNode name set in the remote host is fuji5RMS
# ping fuji5RMS
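To check every SysNode name in one step, a simple shell loop can also be used. A minimal sketch, assuming a Bourne-compatible shell and that the SysNode names of the copy destination are fuji4RMS and fuji5RMS; run it on every node:
# for node in fuji4RMS fuji5RMS; do ping $node; done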
If an error occurs in either of the checks above, verify that the CF node name, CIP/SysNode name, cluster name, and device names of the interconnect set in /etc/cip.cf, /etc/default/cluster, or /etc/inet/hosts are correct.
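The settings can be reviewed directly in these files. A minimal sketch, assuming the host entries of the copy destination contain the string "fuji":
# cat /etc/cip.cf
# cat /etc/default/cluster
# grep fuji /etc/inet/hosts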
If an error is found, follow the procedure below:
Start the system in single-user mode.
Perform "4. Change the CF node name, CIP/SysNode name, cluster name, and device names of the interconnect." of "I.3.1 Setup in Single-User Mode" again, and then restart the node.
Perform "I.3.3 Changing the Settings in Multi-User Mode" again.
Check the existence of the symbolic links to the device file.
For Solaris 11 or later, check that the symbolic links to the device file have been restored correctly.
# pkgchk SMAWcf
If "pathname does not exist" is displayed, take the following procedure.
Start the system in single-user mode.
Perform "5. Restore the symbolic links to the device file" of "I.3.1 Setup in Single-User Mode", and then restart the node.
Perform "I.3.3 Changing the Settings in Multi-User Mode" again.
Change the cluster name of the Cluster Resource Management Facility.
Perform the settings on any one of the nodes that make up the cluster system.
Example: The new cluster name of the copy destination is "PRIMECLUSTER2".
# /etc/opt/FJSVcluster/bin/clsetrsc -n PRIMECLUSTER2 1
# /etc/opt/FJSVcluster/bin/clsetrsc -n PRIMECLUSTER2 2
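To confirm that the new cluster name has been applied, the resource database can be displayed again; the cluster resource line is expected to show the new name (PRIMECLUSTER2 in this example):
# /etc/opt/FJSVcluster/bin/clgettree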
Delete the network interface resources.
Delete the network interfaces registered in the resource database.
Perform the settings on any one of the nodes that make up the cluster system.
Check the resource IDs of the registered network interfaces.
The resource IDs of the network interfaces are the numbers displayed immediately after "Ethernet" on the corresponding lines of the following command output (27 and 28 in the example below).
# /etc/opt/FJSVcluster/bin/clgettree
...
Ethernet 27 vnet0 UNKNOWN
Ethernet 28 vnet1 UNKNOWN
...
Delete all the network interface resources whose resource IDs were checked in the previous step.
Example: The resource IDs of the registered network interfaces are 27 and 28.
# /etc/opt/FJSVcluster/bin/cldelrsc -r 27
# /etc/opt/FJSVcluster/bin/cldelrsc -r 28
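To confirm that the resources have been deleted, display the resource database again and check that the "Ethernet" lines for resource IDs 27 and 28 no longer appear:
# /etc/opt/FJSVcluster/bin/clgettree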
Register the network interface resources again.
Register the network interface resources in the resource database again.
For the procedure, see "8.1.1.2 Adding a Network Interface Card Used for the Public LAN and the Administrative LAN."
Change the SF settings.
Delete the information of the asynchronous monitoring that was used in the copy source.
For SPARC M10 and M12
Execute the following command on any node to check the information of the SNMP asynchronous monitoring that was used in the copy source.
# /etc/opt/FJSVcluster/bin/clsnmpsetup -l
Execute the following commands on any node to delete the information of the SNMP asynchronous monitoring that was used in the copy source.
# /etc/opt/FJSVcluster/bin/clsnmpsetup -d fuji2
# /etc/opt/FJSVcluster/bin/clsnmpsetup -d fuji3
Execute the following command on any node to check that the information of the SNMP asynchronous monitoring is not displayed.
# /etc/opt/FJSVcluster/bin/clsnmpsetup -l
For SPARC Enterprise M3000, M4000, M5000, M8000, M9000, SPARC Enterprise T5120, T5220, T5140, T5240, T5440, SPARC T3, T4, T5, T7, S7 series
Execute the following command on any node to check the information of the console asynchronous monitoring that was used in the copy source.
# /etc/opt/FJSVcluster/bin/clrccusetup -l
Execute the following command on any node to delete the information of the console asynchronous monitoring that was used in the copy source.
# /etc/opt/FJSVcluster/bin/clrccusetup -d fuji2
# /etc/opt/FJSVcluster/bin/clrccusetup -d fuji3
Execute the following command on any node to check that the information of the console asynchronous monitoring is not displayed.
# /etc/opt/FJSVcluster/bin/clrccusetup -l
For SPARC Enterprise T1000 and T2000
Delete the /etc/opt/SMAW/SMAWsf/SA_sunF.cfg file.
# rm /etc/opt/SMAW/SMAWsf/SA_sunF.cfg
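To confirm that the file has been deleted, list it and check that an error indicating the file does not exist is displayed:
# ls /etc/opt/SMAW/SMAWsf/SA_sunF.cfg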
See "5.1.2 Configuring the Shutdown Facility" to set the shutdown facility again.