FUJITSU Software PRIMECLUSTER
Cluster Foundation Configuration and Administration Guide 4.5

1.1 CF, CIP, and CIM configuration

You must configure CF before any other cluster services, such as Reliant Monitor Services (RMS). CF defines which nodes are in a given cluster. In addition, after you configure CF and CIP, the Shutdown Facility (SF) and RMS can be run on the nodes.

The Shutdown Facility (SF) is responsible for node elimination. This means that even if RMS is not installed or running in the cluster, missing CF heartbeats will cause SF to eliminate nodes.

You can use the Cluster Admin CF Wizard to easily configure CF, CIP, and CIM for all nodes in the cluster, and you can use the Cluster Admin SF Wizard to configure SF.

A CF configuration consists of several main attributes, the most important of which are the cluster interconnects.

The dedicated network connections used by CF are known as interconnects. They typically consist of some form of high-speed networking, such as 100 Mbps or Gigabit Ethernet links. These interconnects must meet a number of special requirements if they are to be used for CF:

  1. The network links used for interconnects must have low latency and low error rates. This is required by the CF protocol. Private switches and hubs will meet this requirement. Public networks, bridges, and switches shared with other devices may not necessarily meet these requirements, and their use is not recommended.

    It is recommended that each CF interface be connected to its own private network with each interconnect on its own switch or hub.

  2. The interconnects should not be used on any network that might experience outages of 5 seconds or more. A network outage of 10 seconds will, by default, cause a route to be marked as DOWN. (The state becomes DOWN once it is confirmed by the cftool -r command.) cfset(1M) can be used to change the 10-second default. See "1.1.2 cfset."

    Since CF automatically attempts to activate interconnects, a "split-brain" condition can occur only if all interconnects experience a 10-second outage simultaneously. Nevertheless, CF requires highly reliable interconnects.

    CF can also be run over IP. Any IP interface on the node can be chosen as an IP device, and CF will treat this device much as it does an Ethernet device. However, all the IP addresses for all the cluster nodes on that interconnect must have the same IP subnetwork, and their IP broadcast addresses must be the same (refer to "Chapter 8 CF over IP" for more information).
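The 10-second route-down default described above is one of the tunables that cfset(1M) reads from the CF configuration file. As a sketch of how it might be raised (the file path and tunable name follow the cfset conventions described in "1.1.2 cfset"; confirm them there, and treat the value shown as illustrative only):

```
# /etc/default/cluster.config (illustrative fragment)
# CLUSTER_TIMEOUT: seconds without CF heartbeats before a route is
# marked DOWN; the shipped default is 10.
CLUSTER_TIMEOUT "30"
```

After editing the file, reload the tunables into the CF driver and verify the values in effect using the cfset(1M) options described in "1.1.2 cfset."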

The IP interfaces used by CF must be completely configured by the System Administrator before they are used by CF. You can run CF over both Ethernet devices and IP devices.
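The shared-subnet requirement can be checked by hand. The following is an illustrative, standalone POSIX shell sketch (not a PRIMECLUSTER tool): it tests whether two IPv4 addresses fall in the same subnet under a given netmask.

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# Succeed (exit 0) if addresses $1 and $2 share a subnet under netmask $3.
same_subnet() {
    a=$(ip_to_int "$1"); b=$(ip_to_int "$2"); m=$(ip_to_int "$3")
    [ $(( a & m )) -eq $(( b & m )) ]
}

same_subnet 192.168.1.1 192.168.1.2 255.255.255.0 && echo "same subnet"
```

The same check applied to 192.168.1.1 and 192.168.2.1 with that mask fails, which would flag a misconfigured CF over IP interconnect address.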

Higher level services, such as RMS, SF, Global File Services (hereinafter GFS), and so forth, will not notice any difference when CF is run over IP.

You should carefully choose the number of interconnects you want in the cluster before you start the configuration process. If you decide to change the number of interconnects after you have configured CF across the cluster, you will need to bring down CF on each node to do the reconfiguration. Bringing down CF requires that higher level services, like RMS, SF and applications, be stopped on that node, so the reconfiguration process is neither trivial nor unobtrusive.

Note

Interconnects should be redundant to avoid a single point of failure in the cluster.

Before you begin the CF configuration process, ensure that all of the nodes are connected to the interconnects you have chosen and that all of the nodes can communicate with each other over those interconnects. For proper CF configuration using Cluster Admin, all of the interconnects should be working during the configuration process.

CIP configuration involves defining virtual CIP interfaces and assigning IP addresses to them. Up to eight CIP interfaces can be defined per node. These virtual interfaces act like normal TCP/IP interfaces except that the IP traffic is carried over the CF interconnects. Because CF is typically configured with multiple interconnects, the CIP traffic will continue to flow even if an interconnect fails. This helps eliminate single points of failure as far as physical networking connections are concerned for intracluster TCP/IP traffic.

Except for their IP configuration, the eight possible CIP interfaces per node are all treated identically. There is no special priority for any interface, and each interface uses all of the CF interconnects equally. For this reason, many system administrators may choose to define only one CIP interface per node.

To ensure that the nodes can communicate with each other over CIP, the IP addresses assigned to a given CIP interface on all nodes should belong to the same subnet.

CIP traffic is intended to be routed only within the cluster; the CIP addresses should not be used outside of it. For this reason, you should use addresses from the non-routable, reserved IP address ranges.

For the IPv4 address, Address Allocation for Private Internets (RFC 1918) defines the following address ranges that are set aside for private subnets:

Subnet(s)                        Class        Subnet mask
10.0.0.0                         A            255.0.0.0
172.16.0.0 ... 172.31.0.0        B            255.255.0.0
192.168.0.0 ... 192.168.255.0    C            255.255.255.0

For IPv6, use Unique Local IPv6 Unicast Addresses (RFC 4193), which are defined with the prefix FC00::/7 and can be allocated freely within a private network.

For the CIP name, it is strongly recommended that you use the following convention for RMS:

cfnameRMS

cfname is the CF name of the node and RMS is a literal suffix. This name is used for one of the CIP interfaces on a node, and the convention lets the Cluster Admin GUI map between normal node names and CIP names. In general, only one CIP interface per node needs to be configured.
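For example, with two nodes whose CF names are fuji2 and fuji3 (hypothetical names, on a hypothetical RFC 1918 subnet), the corresponding /etc/hosts entries might look like this:

```
# /etc/hosts (CIP entries; names and addresses are examples only)
192.168.1.1    fuji2RMS
192.168.1.2    fuji3RMS
```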

Note

A proper CIP configuration uses /etc/hosts to store CIP names. Make sure that /etc/nsswitch.conf(4) is set up to consult files first when looking up node names.
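For example, a hosts line of the following form (assuming DNS is also in use at your site; adjust the sources to match your environment) ensures that /etc/hosts is consulted first:

```
# /etc/nsswitch.conf
hosts: files dns
```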

The recommended way to configure CF, CIP and CIM is to use the Cluster Admin GUI. A CF/CIP Wizard in the GUI can be used to configure CF, CIP, and CIM on all nodes in the cluster in just a few screens. Before running the wizard, however, the following steps must have been completed:

  1. CF/CIP, Web-Based Admin View, and Cluster Admin should be installed on all nodes in the cluster.

  2. If you are running CF over Ethernet, then all of the interconnects in the cluster should be physically attached to their proper hubs or networking equipment and should be working.

  3. If you are running CF over IP, then all interfaces used for CF over IP should be properly configured and be up and running. See "Chapter 8 CF over IP" for details.

  4. Web-Based Admin View must be configured. See "2.4.2 Management server configuration" in "PRIMECLUSTER Web-Based Admin View Operation Guide" for details.
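Before starting the wizard, you may want to confirm on each node that the interfaces intended for the interconnects are actually up. The following is an illustrative Linux-only sketch (it reads sysfs directly, and the interface names are placeholders for your actual interconnect interfaces):

```shell
#!/bin/sh
# Return 0 if the named interface exists and its operational state is "up".
check_iface() {
    state=$(cat "/sys/class/net/$1/operstate" 2>/dev/null) || return 1
    [ "$state" = "up" ]
}

# Example interconnect interface names; substitute your own.
for ifc in eth1 eth2; do
    if check_iface "$ifc"; then
        echo "$ifc: up"
    else
        echo "$ifc: missing or down" >&2
    fi
done
```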

In the cf tab in Cluster Admin, make sure that the CF driver is loaded on that node. Press the Load Driver button if necessary to load the driver. Then press the Configure button to start the CF Wizard.

The CF/CIP Wizard also starts automatically when you start Cluster Admin and select a node on which CF has not yet been configured.

Start the GUI in one of the following ways.

management_server is the primary or secondary management server you configured for this cluster. For more information on how to start Cluster Admin GUI and Web-Based Admin View, refer to "3.1 Prerequisite for screen startup" and "3.2 Screen startup" in "PRIMECLUSTER Web-Based Admin View Operation Guide."