Create the virtual server for the cluster node.
Perform the following operations for each node in the cluster system to create the virtual servers for the cluster nodes.
Creating the virtual server
Creating the port for the public LAN (used also for the administrative LAN)
Creating the port for the cluster interconnect
Creating/Attaching the expanded storage
Setting up the DNS client
Applying the necessary OS patches
Creating .curlrc
From the IaaS Service Portal of FJcloud-O, for the server group created in "3.1.3 Creating the Server Group", create the virtual servers in the cluster system with the values shown in "Table 3.2 Values of the virtual servers for the cluster node."
For security reasons, leave the virtual servers in the "SHUTOFF" state until "3.1.4.2 Creating the Port for the Public LAN (Used also for the Administrative LAN)" and "3.1.4.3 Creating the Port for the Cluster Interconnect" are completed. Also, delete the ports that are automatically added when the virtual servers are created. You can check the ports on the details screen of the virtual server.
Item | Value |
---|---|
AZ (*1) | Availability zone to assign the virtual server |
Server group (*2) | Server group created in "3.1.3 Creating the Server Group" |
Virtual server name | Arbitrary virtual server name *Specify a virtual server name that does not duplicate any other virtual server name within the project. |
Virtual server type | Arbitrary virtual server type according to the performance requirement (flavor) |
Boot source of the virtual server | OS image used in the cluster node |
Virtual network | Network created in "3.1.2.1 Creating Subnets" |
Security group | Not specified |
Keypair | Arbitrary keypair |
(*1) Not displayed in East Japan third, West Japan third regions.
(*2) Not displayed in East Japan first/second, West Japan first/second regions.
Perform this operation from the action menu of the server group created in "3.1.3 Creating the Server Group."
Set the port for the public LAN (used also for the administrative LAN) in the virtual server in the cluster system as follows.
Click the <+> button in the port list on [Details screen of the virtual server] of the virtual server created in "3.1.4.1 Creating the Virtual Server" to move to the port creation screen.
After entering the following setup items on the port creation screen, click the <Create> button to create the port.
Item | Value |
---|---|
Port name | Enter an arbitrary port name. |
Management state | Select "Up". |
Network name | Select the network to which the port is connected. |
Subnet name | Select the subnet for the public LAN (used also for the administrative LAN) created in "3.1.2.1 Creating Subnets." |
Private IP address | Enter the IP address of the public LAN (used also for the administrative LAN). |
Click the port name in the port list on the details screen of the virtual server to display the details screen of the port, and make sure that all settings are correct.
From the action menu of the port on the details screen of the virtual server, set the following security groups.
ID of the security group created in "3.1.2.2 Creating the Common Security Group"
ID of the security group created in "3.1.2.3 Creating the Security Group for the Public LAN (Used also for the Administrative LAN)"
ID of the security group on the cluster node side created in "3.1.2.5 Creating the Security Groups for Web-Based Admin View"
ID of the security group for the installation and maintenance of the cluster node created in "3.1.2.6 Creating the Security Group for the Virtual Server Access"
If any other security groups are necessary for operation, add them as well.
When an IP address is taken over between the virtual servers, execute the following API to set the takeover IP address in "allowed_address_pairs".
# curl -X PUT https://networking.jp-east-3.cloud.global.fujitsu.com/v2.0/ports/{id_of_created_port} \
  -H 'X-Auth-Token:XXX' \
  -H 'Content-Type:application/json' \
  -H 'Accept:application/json' \
  -d '{"port":{"allowed_address_pairs":[{"ip_address":"takeover IP address"}]}}'
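As a sketch only, the request body passed with -d can be built and validated locally before sending it with curl. The IP addresses below are placeholders (assumptions), not values from this guide; replace them with your actual takeover addresses.

```shell
# Build the allowed_address_pairs payload. Multiple takeover addresses can be
# listed in the array; the addresses here are placeholders.
BODY='{"port":{"allowed_address_pairs":[{"ip_address":"192.168.10.100"},{"ip_address":"192.168.10.101"}]}}'

# Confirm the JSON is well-formed before passing it to curl with -d "$BODY".
echo "$BODY" | python3 -m json.tool
```

If the payload is malformed, python3 -m json.tool exits with an error instead of pretty-printing, which catches quoting mistakes before the API call.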
Set the port for the cluster interconnect as follows.
These settings are not necessary in a single-node cluster.
Click the <+> button in the port list on [Details screen of the virtual server] of the virtual server created in "3.1.4.1 Creating the Virtual Server" to move to the port creation screen.
After entering the following setup items on the port creation screen, click the <Create> button to create the port.
Item | Value |
---|---|
Port name | Enter an arbitrary port name. |
Management state | Select "Up". |
Network name | Select the network name of the interconnect created in "3.1.2.1 Creating Subnets." |
Subnet name | Select the subnet name of the interconnect created in "3.1.2.1 Creating Subnets." |
Private IP address | Enter the IP address of the cluster interconnect (*1). |
(*1) This is the IP address used with CF over IP. For details on CF over IP, refer to "Chapter 9 CF over IP" in "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide."
Click the port name in the port list on the details screen of the virtual server to display the details screen of the port, and make sure that all settings are correct.
From the action menu of the port on the details screen of the virtual server, set the following security group.
ID of the security group for the cluster interconnect created in "3.1.2.4 Creating the Security Group for the Cluster Interconnect"
When using the mirroring among servers of GDS, create the block storages used for the mirroring among servers and attach them to the virtual servers created in "3.1.4.1 Creating the Virtual Server" as expanded storage.
Attach a block storage of the same size to the virtual server of each cluster node.
Note
Make sure to restart the virtual server after attaching the expanded storage.
Note
If this setting is configured incorrectly, the system may become inaccessible. Before setting up the DNS client, take a snapshot of the system disk.
This setting is not necessary in a single-node cluster.
[RHEL7]
Set the following in each node building the cluster.
Add the following lines to the file, /etc/sysconfig/network-scripts/ifcfg-eth0.
DEFROUTE=yes
PEERDNS=yes
DNS1=<IP address of the main DNS server>
DNS2=<IP address of the sub DNS server>
For eth1, set DEFROUTE=no and PEERDNS=no.
If the /etc/sysconfig/network-scripts/ifcfg-eth1 file does not exist, create it as follows.
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
DEFROUTE=no
PEERDNS=no
Set the owner, group, and access rights as follows.
# chown root:root /etc/sysconfig/network-scripts/ifcfg-eth1
# chmod 644 /etc/sysconfig/network-scripts/ifcfg-eth1
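The file-creation steps above can be sketched as a single heredoc. This is illustrative only: it writes to a temporary directory rather than /etc/sysconfig/network-scripts, so it can be tried safely; on an actual node, target /etc/sysconfig/network-scripts/ifcfg-eth1 and run chown as root.

```shell
# Sketch only: write the ifcfg-eth1 content shown above. A temporary directory
# stands in for /etc/sysconfig/network-scripts here (an assumption for safe
# illustration); on the real node, write the actual path as root.
DIR=$(mktemp -d)
CFG="$DIR/ifcfg-eth1"

cat > "$CFG" <<'EOF'
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
DEFROUTE=no
PEERDNS=no
EOF

# Apply the access rights described above (chown root:root also needed on the
# real node, which requires root privileges).
chmod 644 "$CFG"
```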
If the /etc/sysconfig/network-scripts/ifcfg-eth1 file exists, add the following lines.
DEFROUTE=no
PEERDNS=no
Restart the network service.
# systemctl restart network
[RHEL8]
Set the following in each node building the cluster.
In the following examples, the connection name of eth0 is "Wiredeth0" and the connection name of eth1 is "Wiredeth1". Replace these names according to your environment.
Execute the following command, and make sure that ipv4.never-default and ipv4.ignore-auto-dns of eth0 are "no".
# nmcli connection show Wiredeth0
If they are not "no", execute the following commands and set them to "no".
# nmcli connection modify Wiredeth0 ipv4.never-default no
# nmcli connection modify Wiredeth0 ipv4.ignore-auto-dns no
Set the DNS server as follows.
# nmcli connection modify Wiredeth0 ipv4.dns <IP address of the main DNS server>
# nmcli connection modify Wiredeth0 +ipv4.dns <IP address of the sub DNS server>
For eth1, set ipv4.never-default and ipv4.ignore-auto-dns to yes.
# nmcli connection modify Wiredeth1 ipv4.never-default yes
# nmcli connection modify Wiredeth1 ipv4.ignore-auto-dns yes
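The RHEL8 steps above can be collected into a dry-run sketch that prints the nmcli commands for review before executing them. The connection names and DNS addresses below are placeholders (assumptions); substitute your own values, then run the printed commands as root.

```shell
# Sketch only: print the nmcli command sequence instead of running it, so the
# values can be checked first. All four variables are placeholders.
ETH0_CONN="Wiredeth0"   # assumed eth0 connection name; adjust to your environment
ETH1_CONN="Wiredeth1"   # assumed eth1 connection name
DNS_MAIN="10.0.0.2"     # placeholder for the main DNS server address
DNS_SUB="10.0.0.3"      # placeholder for the sub DNS server address

OUT=$(cat <<EOF
nmcli connection modify $ETH0_CONN ipv4.never-default no
nmcli connection modify $ETH0_CONN ipv4.ignore-auto-dns no
nmcli connection modify $ETH0_CONN ipv4.dns $DNS_MAIN
nmcli connection modify $ETH0_CONN +ipv4.dns $DNS_SUB
nmcli connection modify $ETH1_CONN ipv4.never-default yes
nmcli connection modify $ETH1_CONN ipv4.ignore-auto-dns yes
EOF
)
echo "$OUT"
```

The "+ipv4.dns" form appends the sub DNS server to the list rather than replacing the main one already set.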
Refer to the documents provided by FJcloud-O to set up Red Hat Update Infrastructure on the virtual server.
After setting it, execute the following command to apply the necessary OS patch.
# yum update curl
Add the following line to the file /root/.curlrc. If the file does not exist, create it and add the following line.
tlsv1.2
If you created the file, execute the following.
# chown root:root /root/.curlrc
# chmod 600 /root/.curlrc
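The steps above can be sketched as follows. For safe illustration this writes into a temporary directory that stands in for /root (an assumption); on the actual node, operate on /root/.curlrc as root.

```shell
# Sketch only: add the tlsv1.2 line to .curlrc and restrict its permissions.
# A temporary directory stands in for /root here.
HOME_DIR=$(mktemp -d)
RC="$HOME_DIR/.curlrc"

# Append rather than overwrite, so an existing .curlrc keeps its other lines.
printf 'tlsv1.2\n' >> "$RC"

# Permissions as described above (chown root:root also needed on the real node).
chmod 600 "$RC"
```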