When configuring CF by CLI, perform the following steps.
This section uses, as an example, a cluster system configured with two nodes whose CF node names are "shasta1" and "shasta2".
1. For cluster nodes placed between different network segments, in a cloud environment, or in an RHOSP environment, make sure that the IP address of the cluster interconnect is set.
Example: When you use "eth1" as the NIC of the cluster interconnect.
Execute the ip command to make sure that the IP address (192.168.223.105) is set on "eth1".
# ip addr show dev eth1
x: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu xxxx qdisc pfifo_fast state UP qlen xxxx
    link/ether xx:xx:xx:xx:xx:xx brd xx:xx:xx:xx:xx:xx
    inet 192.168.223.105/24 brd 192.168.223.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 xxxx:xxxx:xxxx:xxxx:xxxx:xxxx/xx scope link
       valid_lft forever preferred_lft forever
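If the address is not yet set, one way to assign it temporarily for verification is with the ip command. This is a non-persistent sketch using the example address above; the persistent configuration method depends on your distribution:
# ip addr add 192.168.223.105/24 dev eth1
# ip link set eth1 up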
2. Create CIP configuration files.
Create /etc/cip.cf as below on all the nodes that make up the cluster system.
Example:
shasta1 shasta1RMS:netmask:255.255.255.0
shasta2 shasta2RMS:netmask:255.255.255.0
Note
If you manually create /etc/cip.cf, you cannot reconfigure CF by Cluster Admin. To reconfigure CF by Cluster Admin, delete the /etc/cip.cf file beforehand.
When reconfiguring CF, create the /etc/cip.cf file from Cluster Admin.
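Because /etc/cip.cf must be identical on every node, it can help to create it once and copy it to the other nodes. A minimal sketch, assuming ssh/scp access as root between the nodes:
# scp /etc/cip.cf shasta2:/etc/cip.cf
# ssh shasta2 cat /etc/cip.cf
Confirm that the file displayed by the second command matches the local copy.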
3. Set IP addresses.
Specify the following in /etc/hosts on all the nodes that make up the cluster system.
Example:
<cip address1> shasta1RMS
<cip address2> shasta2RMS
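As a concrete illustration, assuming the CIP addresses 192.168.1.1 and 192.168.1.2 (hypothetical values; use the CIP addresses actually assigned in your cluster), the entries would look like this:
192.168.1.1 shasta1RMS
192.168.1.2 shasta2RMS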
4. Enable remote access by using cfcp/cfsh.
Specify the following in /etc/default/cluster.config on all the nodes that make up the cluster system.
CFCP "cfcp" CFSH "cfsh"
5. Edit /etc/default/cluster on all the nodes with the following contents:
- For a physical environment, KVM environment, or VMware environment
nodename <CF node name>
clustername <Cluster name>
device <Cluster interconnect1>
device <Cluster interconnect2>
Example:
nodename shasta1
clustername PRIMECLUSTER1
device eth2
device eth3
Note
Make sure that the node name to be defined in nodename is the CF node name, not the node name of the OS.
- For cluster nodes placed between different network segments, in a cloud environment (other than a single-node cluster), or in an RHOSP environment
nodename <CF node name>
clustername <Cluster name>
device <IP device1> <Self IP address1> <Self broadcast address1> <IP address of other node1>
device <IP device2> <Self IP address2> <Self broadcast address2> <IP address of other node2>
Example: /etc/default/cluster of shasta1
nodename shasta1
clustername PRIMECLUSTER1
device /dev/ip0 192.168.223.105 192.168.223.255 192.168.123.112
device /dev/ip1 192.168.200.105 192.168.200.255 192.168.100.112
Note
Make sure that the node name to be defined in nodename is the CF node name, not the node name of the OS.
For a cluster of three or more nodes, list the IP addresses of all the other nodes in the <IP address of other node> field, separated by spaces.
A maximum of four cluster interconnects can be used. When using multiple cluster interconnects, write one device line per interconnect, and specify a different /dev/ipX (X is 0 to 3) as the IP device on each device line. (A sketch of the peer node's file follows this list.)
- For a single-node cluster in a cloud environment
nodename <CF node name>
clustername <Cluster name>
device /dev/ip0 <IP address of dummy interface> <Broadcast address of dummy interface>
Example: /etc/default/cluster of shasta1
nodename shasta1
clustername PRIMECLUSTER1
device /dev/ip0 192.168.223.105 192.168.223.255
Note
Make sure that the node name to be defined in nodename is the CF node name, not the node name of the OS.
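For the different-segment example above, the peer node's file mirrors shasta1's, with the self and other-node addresses swapped. A sketch of /etc/default/cluster on shasta2, assuming /24 subnets (the broadcast addresses 192.168.123.255 and 192.168.100.255 are therefore assumptions):
nodename shasta2
clustername PRIMECLUSTER1
device /dev/ip0 192.168.123.112 192.168.123.255 192.168.223.105
device /dev/ip1 192.168.100.112 192.168.100.255 192.168.200.105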
6. Set the owner, group, and access permission.
# chown root:root /etc/default/cluster
# chmod 600 /etc/default/cluster
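To verify the result (the output is an ordinary ls -l listing; the size and timestamp fields will vary):
# ls -l /etc/default/cluster
-rw------- 1 root root xxxx xxx xx xx:xx /etc/default/cluster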
7. Reboot the nodes.
8. Execute the following command on any one node in the cluster system to set up the Cluster Integrity Monitor (CIM).
# rcqconfig -a <nodename> ...
nodename : CF node name
Example:
# rcqconfig -a shasta1 shasta2
If this command fails, check again that the CF node names and the cluster name configured in /etc/default/cluster in step 5 are correct.
9. Check that the nodes in the cluster system can communicate with each other using the RMS node names.
Example: When checking from shasta1
# ping shasta2RMS
If the nodes cannot communicate, check again that the CF node names, RMS node names, and CIP addresses configured in /etc/cip.cf and /etc/hosts in step 2 and step 3 are correct.
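The same check should succeed in the reverse direction as well. A minimal sketch, run on shasta2 (the -c option simply limits the number of packets sent):
# ping -c 3 shasta1RMS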
10. For cluster nodes placed between different network segments, in a cloud environment, or in an RHOSP environment, check the CF configuration with the following procedure:
1. Execute the following command on any one node to make sure that all the nodes have already joined the cluster.
# cftool -n
Example: When the CF node names are shasta1 and shasta2, and the cluster has two nodes
# cftool -n
Node     Number   State    Os      Cpu
shasta1  1        UP       Linux   EM64T
shasta2  2        UP       Linux   EM64T
Make sure that both shasta1 and shasta2 are displayed in the "Node" column and that "UP" is displayed in the "State" column for each node.
2. Execute the following command on all the nodes to make sure that the CF over IP configuration is enabled.
# cftool -d
Example: For 2 cluster interconnects
# cftool -d
Number  Device    Type  Speed  Mtu   State  Configured  Address
4       /dev/ip0  6     n/a    1392  UP     YES         0a.00.00.c9.00.00
5       /dev/ip1  6     n/a    1392  UP     YES         0a.00.00.ca.00.00
Make sure that only /dev/ipX devices (X is 0 to 3; only as many entries as the number of cluster interconnects are displayed) appear in the "Device" column.
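As a quick scripted check on each node, the device column can be filtered with standard tools. This is a sketch that assumes the cftool -d output format shown above, with the device name in the second column:
# cftool -d | awk 'NR > 1 { print $2 }'
/dev/ip0
/dev/ip1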
If the result of step 1 or step 2 above is not as expected, check the following configurations again.
- For cluster nodes placed between different network segments, or in an FJcloud-Baremetal environment
See "Appendix K Using Firewall" in "PRIMECLUSTER Installation and Administration Guide" and check if the configuration necessary to allow communication of CF over IP is properly set.
Check again that the CF node name, RMS node name, CIP address, IP address of cluster interconnect, IP device, broadcast address, and cluster name set in /etc/cip.cf, /etc/default/cluster, and /etc/hosts are correct.
- In a cloud environment (other than an FJcloud-Baremetal environment) or an RHOSP environment
Make sure that the security groups (or security rules) created by following the references below are properly configured.
For an FJcloud-O environment, see "3.1.2.4 Creating the Security Group for the Cluster Interconnect" in "PRIMECLUSTER Installation and Administration Guide Cloud Services."
For a NIFCLOUD environment, see "8.3.2.2 Rules Applied to the Cluster Interconnect" in "PRIMECLUSTER Installation and Administration Guide Cloud Services."
For an AWS environment, see "20.3.2.2 Rules Applied to the Cluster Interconnect" in "PRIMECLUSTER Installation and Administration Guide Cloud Services."
For an Azure environment, see "26.3.2.2 Security Rules Applied to the Cluster Interconnect" in "PRIMECLUSTER Installation and Administration Guide Cloud Services."
For an RHOSP environment, see "I.2.2.2 Creating Virtual Network" in "PRIMECLUSTER Installation and Administration Guide."
When creating multiple virtual network interfaces and using a virtual network interface for which a default gateway is not set to communicate with interfaces on different subnets, see "21.1.3 Setting Instances" in "PRIMECLUSTER Installation and Administration Guide Cloud Services" and make sure that routing is configured so that the interconnects can communicate with each other.
Check again that the CF node name, RMS node name, CIP address, IP address of cluster interconnect, IP device, broadcast address, and cluster name set in /etc/cip.cf, /etc/default/cluster, and /etc/hosts are correct.
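When rechecking these files, it can help to dump all three on each node and compare them side by side. A minimal sketch using standard tools, to be run on every node:
# for f in /etc/cip.cf /etc/default/cluster /etc/hosts; do echo "==== $f ===="; cat $f; done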