PRIMECLUSTER Installation and Administration Guide 4.2

5.1.2 Configuring the Shutdown Facility

This section explains the procedure for configuring the shutdown facility with the shutdown configuration wizard or CLI.

The configuration procedure for the shutdown facility varies depending on the machine type. Check the machine type of hardware and set an appropriate shutdown agent.

The following table shows the shutdown agents required for each machine type.

| Server machine type | RCI Panic | RCI Reset | XSCF Panic | XSCF Reset | XSCF Console Break | RCCU Console Break | ALOM Console Break | ILOM Panic | ILOM Reset |
|---|---|---|---|---|---|---|---|---|---|
| PRIMEPOWER 200/400/600 | Y | Y | - | - | - | Y | - | - | - |
| PRIMEPOWER 250/450 with XSCF (*1) | Y | Y | Y | Y | Y | - | - | - | - |
| PRIMEPOWER 250/450 with RCCU (*2) | Y | Y | - | - | - | Y | - | - | - |
| PRIMEPOWER 650/850 | Y | Y | - | - | - | Y | - | - | - |
| PRIMEPOWER 800 | Y | Y | - | - | - | - | - | - | - |
| PRIMEPOWER 900 | Y | Y | - | - | - | - | - | - | - |
| PRIMEPOWER 1000/2000 | Y | Y | - | - | - | - | - | - | - |
| PRIMEPOWER 1500/2500 | Y | Y | - | - | - | - | - | - | - |
| SPARC Enterprise M3000/M4000/M5000/M8000/M9000 (Japan, Fujitsu) | Y | Y | Y | Y | Y | - | - | - | - |
| SPARC Enterprise M3000/M4000/M5000/M8000/M9000 (Japan, other than Fujitsu) | - | - | Y(*6) | Y(*6) | Y(*6) | - | - | - | - |
| SPARC Enterprise M3000/M4000/M5000/M8000/M9000 (other than Japan) | - | - | Y(*7) | Y(*7) | Y(*7) | - | - | - | - |
| SPARC Enterprise T1000/T2000 | - | - | - | - | - | - | Y(*3) | - | - |
| SPARC Enterprise T5120/T5220/T5140/T5240/T5440 | - | - | - | - | - | - | - | Y(*3) | Y(*3)(*4)(*5) |
| SPARC T3 series | - | - | - | - | - | - | - | Y(*3) | Y(*3)(*4) |
| S-Series | - | - | - | - | - | Y | - | - | - |

(*1) XSCF is used for the console.

(*2) RCCU is used for the console.

(*3) The shutdown configuration wizard cannot be used. Configure with the CLI.

(*4) When using ILOM Reset, you need to apply a patch for PRIMECLUSTER (914468-07 or later).

(*5) When using ILOM Reset, you need firmware for SPARC Enterprise server (System Firmware 7.1.6.d or later).

(*6) When using SPARC Enterprise M3000, M4000, M5000, M8000, or M9000 provided by companies other than Fujitsu in Japan, you need to apply a patch for PRIMECLUSTER (914468-05 or later).

(*7) When using SPARC Enterprise M3000, M4000, M5000, M8000, or M9000 with logos of both Fujitsu and Oracle provided in other than Japan, you need to apply a patch for PRIMECLUSTER (914468-07 or later).

Note

  • When you are operating the shutdown facility by using one of the following shutdown agents, do not use the console.

    • XSCF Panic

    • XSCF Reset

    • XSCF Console Break

    • RCCU Console Break

    • ILOM Panic

    • ILOM Reset

    If you cannot avoid using the console, stop the shutdown facility on all nodes beforehand. After using the console, disconnect from it, start the shutdown facility on all nodes, and then check that the status is normal. For details on stopping, starting, and checking the status of the shutdown facility, see the sdtool(1M) manual page.

  • In the /etc/inet/hosts file, you must define the IP addresses and host names of the administrative LAN used by the shutdown facility for all nodes. Check that the IP addresses and host names of all nodes are defined.

  • When you set up asynchronous RCI monitoring, you must specify the timeout interval (kernel parameter) in /etc/system for monitoring via SCF/RCI. The kernel parameters vary depending on the server type, so check your server type and set the appropriate timeout interval.

  • To make the administrative LAN used by the shutdown facility redundant with GLS, use the physical IP address takeover function of NIC switching mode.
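As a concrete illustration of the /etc/inet/hosts requirement above, the sketch below uses hypothetical node names (fuji2, fuji3) and addresses, and works on a scratch file rather than the real /etc/inet/hosts:

```shell
# Hypothetical administrative-LAN entries for a two-node cluster.
# Node names and addresses are examples only; on a real node these
# lines belong in /etc/inet/hosts.
cat > /tmp/hosts.example <<'EOF'
10.20.30.100    fuji2
10.20.30.200    fuji3
EOF

# Check that every cluster node's host name appears in the file.
for h in fuji2 fuji3; do
  grep -w "$h" /tmp/hosts.example > /dev/null || echo "missing: $h"
done
```

Running the loop against the real /etc/inet/hosts on each node is a quick way to confirm no node was forgotten.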

See

For details on the shutdown facility and the asynchronous monitoring function, refer to the following manuals:

  • "3.3.1.8 PRIMECLUSTER SF" in the "PRIMECLUSTER Concept Guide".

  • "8. Shutdown Facility" in "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide".

5.1.2.1 For a PRIMEPOWER/S-Series

5.1.2.1.1 Checking Console Configuration

The console used differs depending on the machine type. Check the hardware machine type and configure the console information accordingly.

Note

  • Check the console information before initially configuring the cluster system.

  • Set the IP address of RCCU or XSCF to the same segment as the administrative LAN.

Checking RCCU

If RCCU is used for the console, record the following information on RCCU. Note, however, that you do not need to record this information if you use the console with its factory settings.

See

For details on how to configure and check RCCU, see the instruction manual provided with RCCU.

Checking XSCF

Check the following settings concerning XSCF before setting the shutdown facility.

Note

The shutdown facility does not support a serial-port-only connection to XSCF. Use XSCF-LAN.

Moreover, record the following information on XSCF.

See

For information on how to configure and confirm XSCF, see the "XSCF User's Guide".

5.1.2.1.2 Using the Shutdown Configuration Wizard

The configuration procedure for the shutdown facility varies depending on the machine type. Check the hardware machine type and configure an appropriate shutdown agent.

Starting up the shutdown configuration wizard

From the CF main window of the Cluster Admin screen, select the Tool menu and then Shutdown Facility -> Configuration Wizard. The shutdown configuration wizard will start.

Information

You can also configure the shutdown facility immediately after you complete the CF configuration with the CF wizard.

The following confirmation popup screen will appear. Click Yes to start the shutdown configuration wizard.

Selecting a configuration mode

You can select either of the following two modes to configure the shutdown facility:

This section explains how to configure the shutdown facility using Easy configuration (recommended). With this mode, you can configure the PRIMECLUSTER shutdown facility according to the procedure.

Figure 5.1 Selecting the SF configuration mode

Select Easy configuration (recommended) and then click Next.

Selecting a shutdown agent

After you confirm the hardware machine type, select an appropriate shutdown agent.

Figure 5.2 Selecting a shutdown agent

Select No SCON Configuration.

Then, select all the following shutdown agents depending on the hardware machine type.

Upon the completion of configuration, click Next.

Configuring XSCF

If you select XSCF Panic or XSCF Reset for the shutdown agent, the screen for configuring the XSCF will appear.

Enter the settings for XSCF that you recorded in "5.1.2.1.1 Checking Console Configuration".

Figure 5.3 Configuring XSCF

XSCF-name

Enter the IP address of XSCF or the host name of XSCF that is registered in the /etc/inet/hosts file.

User-Name

Enter a user name to log in to the control port.

Password

Enter a password to log in to the control port.

Once you have made all the necessary settings, click Next.

Configuring a Console Break agent

If you select Console Break as the shutdown agent, the screen used for selecting a Console Break agent will appear.

Figure 5.4 Selecting a Console Break agent

The selection to be made for the Console Break agent varies depending on the machine type to be set up. Confirm the hardware machine type and then set up an appropriate Console Break agent.

Upon the completion of configuration, click Next.

Configuring RCCU

If you select RCCU as the Console Break agent, you must configure RCCU.

Enter the settings for RCCU that you recorded in "5.1.2.1.1 Checking Console Configuration".

If you wish to use RCCU with its factory settings, select Use Defaults.

Otherwise, uncheck Use Defaults, and then enter the user name, password, and superuser password.

Figure 5.5 Configuring RCCU (Use Defaults)

RCCU name

Enter the IP address of RCCU or the host name of RCCU that is described in the /etc/inet/hosts file.

After you have completed this configuration, click Next.

Figure 5.6 Configuring RCCU (Does not use default)

RCCU name

Enter the IP address of RCCU or the host name of RCCU that is described in the /etc/inet/hosts file.

User-Name

Enter a user name to log in to the control port of RCCU.

Password 1

Enter a password to log in to the control port of RCCU.

Confirm

Enter the password that has been set for Password 1 for confirmation.

Password 2 (admin)

Enter a password to log in to the control port of RCCU with superuser access privileges.

Confirm

Enter the password that has been set for Password 2 (Admin) for confirmation.

Upon the completion of configuration, click Next.

Configuring Wait for PROM

Note

Wait for PROM is currently not supported.
Uncheck the check box if checked, and then click the Next button.

Figure 5.7 Configure Wait for PROM

Entering node weights and administrative IP addresses

Enter the weights of the nodes and the IP addresses for the administrative LAN.

Figure 5.8 Entering node weights and administrative IP addresses

Weight

Enter the weight of the node that constitutes the cluster. Weight is used to identify the survival priority of the node group that constitutes the cluster. Possible values for each node range from 1 to 300.
For details on survival priority and weight, refer to the explanations below.

Admin IP

Enter an IP address directly or click the tab to select the host name that is assigned to the administrative IP address.

Once you have completed this configuration, click Next.

Survival priority

Even if a cluster partition occurs due to a failure in the cluster interconnect, all the nodes will still be able to access the user resources. For details on the cluster partition, see "2.2.2.1 Protecting data integrity" in the "PRIMECLUSTER Concept Guide".
To guarantee the consistency of the data constituting user resources, you have to determine the node groups to survive and those that are to be forcibly stopped.
The weight assigned to each node group is referred to as a "Survival priority" under PRIMECLUSTER.
The greater the weight of the node, the higher the survival priority. Conversely, the less the weight of the node, the lower the survival priority. If multiple node groups have the same survival priority, the node group that includes a node with the name that is first in alphabetical order will survive.

Survival priority can be found in the following calculation:

Survival priority = SF node weight + ShutdownPriority of userApplication

SF node weight (Weight):

Weight of node. Default value = 1. Set this value while configuring the shutdown facility.

userApplication ShutdownPriority:

Set this attribute when userApplication is created. For details on how to change the settings, see "8.1.2 Changing the Operation Attributes of a Cluster Application".

See

For details on the ShutdownPriority attribute of userApplication, see "6.6.5 Attributes".

Survival scenarios

The typical scenarios that are implemented are shown below:

[Largest node group survival]
  • Set the weight of all nodes to 1 (default).

  • Set the attribute of ShutdownPriority of all user applications to 0 (default).

[Specific node survival]
  • Set the "weight" of the node to survive to a value more than double the total weight of the other nodes.

  • Set the ShutdownPriority attribute of all user applications to 0 (default).

    In the following example, node1 is to survive:

[Specific application survival]
  • Set the "weight" of all nodes to 1 (default).

  • Set the ShutdownPriority attribute of the user application whose operation is to continue to a value more than double the total of the ShutdownPriority attributes of the other user applications and the weights of all nodes.

    In the following example, the node for which app1 is operating is to survive:
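The [Specific application survival] rule above can be checked with simple arithmetic. The sketch below assumes a hypothetical three-node cluster (each node at the default weight of 1) in which the other applications keep the default ShutdownPriority of 0:

```shell
# Three nodes with the default weight of 1; the other applications have
# ShutdownPriority 0, so only the node weights enter the total.
node_weight_total=$((1 + 1 + 1))
other_priority_total=0

# app1's ShutdownPriority must exceed double the total of the other
# applications' ShutdownPriority values and all node weights.
threshold=$((2 * (other_priority_total + node_weight_total)))
app1_priority=$((threshold + 1))
echo "minimum ShutdownPriority for app1: $app1_priority"

# Survival priority of app1's node = SF node weight + app1's priority.
echo "survival priority of app1's node: $((1 + app1_priority))"
```

With these numbers the threshold is 6, so setting app1's ShutdownPriority to 7 gives its node a survival priority of 8, higher than any node group without app1 can reach.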

Saving the configuration

Confirm and then save the configuration. In the left-hand panel of the window, those nodes that constitute the cluster are displayed, as are the shutdown agents that are configured for each node.

Figure 5.9 Saving the configuration

Click Next. A popup screen will appear for confirmation. Select Yes to save the setting.

Displaying the configuration of the shutdown facility

If you save the setting, a screen displaying the configuration of the shutdown facility will appear. On this screen, you can confirm the configuration of the shutdown facility on each node by selecting each node in turn.

Information

You can also view the configuration of the shutdown facility by selecting Shutdown Facility -> Show Status from the Tool menu.

Figure 5.10 Show Status

Shut State

"Unknown" is shown during normal system operation. If an error occurs and the shutdown facility stops the relevant node successfully, "Unknown" will change to "KillWorked".

Test State

Indicates the state in which the path to shut down the node is tested when a node error occurs. If the test of the path has not been completed, "Unknown" will be displayed. If the configured shutdown agent operates normally, "Unknown" will be changed to "TestWorked".

Init State

Indicates the state in which the shutdown agent is initialized.

To exit the configuration wizard, click Finish. Click Yes in the confirmation popup screen that appears.

Note

Confirm that the shutdown facility is operating normally.

If "InitFailed" is displayed in the Init State even when the configuration of the shutdown facility has been completed, or if "Unknown" is displayed in the Test State or "TestFailed" is highlighted in red, the agent or hardware configuration may contain an error. Check the /var/adm/messages file and the console for an error message, and then apply appropriate countermeasures as instructed by the message.

See

For details on how to respond to the error messages that may be output, see the following manual.

  • "12.12 Monitoring Agent messages" in the "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide".

5.1.2.1.3 Specifying the Timeout Value

Confirm that the timeout value for each shutdown agent is as shown below. You can confirm the timeout value in the left-hand panel of the shutdown configuration wizard screen.

<How to calculate a timeout value>

If the value has not been set as explained above, set a timeout value as follows.

Setting a timeout value

In the CF main window of Cluster Admin, select Shutdown Facility -> Configuration Wizard from the Tool menu and then start the configuration wizard.

Figure 5.11 Selecting a configuration mode

Select Detailed configuration, and then click Next.

Select Edit, and then click Next.

Select Finish configuration, and then click Next.

Figure 5.12 Changing the Shutdown Agent order

Click Next.

Note

Do not change the order in which shutdown agents are invoked.

Figure 5.13 Timeout value

Enter the timeout values in seconds. The default value is 20 seconds.

Once you have completed the configuration, click Next.

The "Entering node weights and administrative IP addresses" screen will appear. Click Next and then save the settings.

5.1.2.2 For a SPARC Enterprise M3000, M4000, M5000, M8000, or M9000

5.1.2.2.1 Checking Console Configuration

SPARC Enterprise M3000, M4000, M5000, M8000, and M9000 use XSCF. The shutdown facility can connect to XSCF via either SSH or telnet; the default is SSH.

Confirm the following XSCF settings before configuring the shutdown facility.

Note

The shutdown facility does not support a serial-port-only connection to XSCF. Use XSCF-LAN.

Moreover, record the following information on XSCF.

See

For information on how to configure and confirm XSCF, see the "XSCF User's Guide".

5.1.2.2.2 Using the Shutdown Configuration Wizard

The required shutdown agent and the configuration procedure for the shutdown agent vary depending on the hardware machine type.

Check the following combinations of the hardware machine types and shutdown agents, and then set up an appropriate shutdown agent.

Setting up the operation environment for the asynchronous monitoring

This setting is required only for the following cases:

You need to set up the operation environment for the asynchronous monitoring beforehand.

Execute the following command in any one of the nodes in the cluster system.

# /etc/opt/FJSVcluster/bin/cldevparam -p VendorType 1

Execute the following command in all nodes to confirm that the setting is correct.

# /etc/opt/FJSVcluster/bin/cldevparam -p VendorType
1
#
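To confirm the VendorType value on every node from one place, a small helper like the sketch below can wrap the check. The node names fuji2/fuji3 and the ssh invocation are illustrative assumptions, not part of the product:

```shell
# check_vendor_type NODE VALUE: report whether VALUE is the expected "1".
check_vendor_type() {
  if [ "$2" = "1" ]; then
    echo "$1: OK"
  else
    echo "$1: unexpected VendorType '$2'"
  fi
}

# On a real cluster, feed it the command output from each node, e.g.:
#   for n in fuji2 fuji3; do
#     check_vendor_type "$n" \
#       "$(ssh root@$n /etc/opt/FJSVcluster/bin/cldevparam -p VendorType)"
#   done
check_vendor_type fuji2 1
```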

Starting up the shutdown configuration wizard

From the CF main window of the Cluster Admin screen, select the Tool menu and then Shutdown Facility -> Configuration Wizard. The shutdown configuration wizard will start.

Information

You can also configure the shutdown facility immediately after you complete the CF configuration with the CF wizard.

The following confirmation popup screen will appear. Click Yes to start the shutdown configuration wizard.

Selecting a configuration mode

You can select either of the following two modes to configure the shutdown facility:

This section explains how to configure the shutdown facility using Easy configuration (recommended). With this mode, you can configure the PRIMECLUSTER shutdown facility according to the procedure.

Figure 5.14 Selecting the SF configuration mode

Select Easy configuration (recommended) and then click Next.

Selecting a shutdown agent

After you confirm the hardware machine type, select an appropriate shutdown agent.

Figure 5.15 Selecting a shutdown agent

Select No SCON Configuration.

Then, select all the following shutdown agents depending on the hardware machine type.

Upon the completion of configuration, click Next.

Configuring XSCF

If you select XSCF Panic or XSCF Reset for the shutdown agent, the screen for configuring the XSCF will appear.

Enter the settings for XSCF that you recorded in "5.1.2.2.1 Checking Console Configuration".

Figure 5.16 Configuring XSCF

XSCF-name

Enter the IP address of XSCF or the host name of XSCF that is registered in the /etc/inet/hosts file.

User-Name

Enter a user name to log in to the control port.

Password

Enter a password to log in to the control port.

Upon the completion of configuration, click Next.

Configuring a Console Break agent

If you select Console Break as the shutdown agent, the screen used for selecting a Console Break agent will appear.

If you use the SPARC Enterprise M3000, M4000, M5000, M8000, or M9000, select XSCF as the Console Break agent.

Figure 5.17 Selecting a Console Break agent

Upon the completion of configuration, click Next.

Configuring Wait for PROM

Note

Wait for PROM is currently not supported.
Uncheck the check box if checked, and then click Next.

Figure 5.18 Configure Wait for PROM

Entering node weights and administrative IP addresses

Enter the weights of the nodes and the IP addresses for the administrative LAN.

Figure 5.19 Entering node weights and administrative IP addresses

Weight

Enter the weight of the node that constitutes the cluster. Weight is used to identify the survival priority of the node group that constitutes the cluster. Possible values for each node range from 1 to 300.
For details on survival priority and weight, refer to the explanations below.

Admin IP

Enter an IP address directly or click the tab to select the host name that is assigned to the administrative IP address.

Upon the completion of configuration, click Next.

Survival priority

Even if a cluster partition occurs due to a failure in the cluster interconnect, all the nodes will still be able to access the user resources. For details on the cluster partition, see "2.2.2.1 Protecting data integrity" in the "PRIMECLUSTER Concept Guide".
To guarantee the consistency of the data constituting user resources, you have to determine the node groups to survive and those that are to be forcibly stopped.
The weight assigned to each node group is referred to as a "Survival priority" under PRIMECLUSTER.
The greater the weight of the node, the higher the survival priority. Conversely, the less the weight of the node, the lower the survival priority. If multiple node groups have the same survival priority, the node group that includes a node with the name that is first in alphabetical order will survive.

Survival priority can be found in the following calculation:

Survival priority = SF node weight + ShutdownPriority of userApplication

SF node weight (Weight):

Weight of node. Default value = 1. Set this value while configuring the shutdown facility.

userApplication ShutdownPriority:

Set this attribute when userApplication is created. For details on how to change the settings, see "8.1.2 Changing the Operation Attributes of a Cluster Application".

See

For details on the ShutdownPriority attribute of userApplication, see "6.6.5 Attributes".

Survival scenarios

The typical scenarios that are implemented are shown below:

[Largest node group survival]
  • Set the weight of all nodes to 1 (default).

  • Set the attribute of ShutdownPriority of all user applications to 0 (default).

[Specific node survival]
  • Set the "weight" of the node to survive to a value more than double the total weight of the other nodes.

  • Set the ShutdownPriority attribute of all user applications to 0 (default).

    In the following example, node1 is to survive:

[Specific application survival]
  • Set the "weight" of all nodes to 1 (default).

  • Set the ShutdownPriority attribute of the user application whose operation is to continue to a value more than double the total of the ShutdownPriority attributes of the other user applications and the weights of all nodes.

    In the following example, the node for which app1 is operating is to survive:

Saving the configuration

Confirm and then save the configuration. In the left-hand panel of the window, those nodes that constitute the cluster are displayed, as are the shutdown agents that are configured for each node.

Figure 5.20 Saving the configuration

Click Next. A popup screen will appear for confirmation. Select Yes to save the setting.

Displaying the configuration of the shutdown facility

If you save the setting, a screen displaying the configuration of the shutdown facility will appear. On this screen, you can confirm the configuration of the shutdown facility on each node by selecting each node in turn.

Information

You can also view the configuration of the shutdown facility by selecting Shutdown Facility -> Show Status from the Tool menu.

Figure 5.21 Show Status

Shut State

"Unknown" is shown during normal system operation. If an error occurs and the shutdown facility stops the relevant node successfully, "Unknown" will change to "KillWorked".

Test State

Indicates the state in which the path to shut down the node is tested when a node error occurs. If the test of the path has not been completed, "Unknown" will be displayed. If the configured shutdown agent operates normally, "Unknown" will be changed to "TestWorked".

Init State

Indicates the state in which the shutdown agent is initialized.

To exit the configuration wizard, click Finish. Click Yes in the confirmation popup screen that appears.

Note

Confirm that the shutdown facility is operating normally.

  • If "InitFailed" is displayed in the Init State even when the configuration of the shutdown facility has been completed, or if "Unknown" is displayed in the Test State or "TestFailed" is highlighted in red, the agent or hardware configuration may contain an error. Check the /var/adm/messages file and the console for an error message, and then apply appropriate countermeasures as instructed by the message.

  • If the connection to XSCF uses telnet, the Test State becomes TestFailed at this point. Confirm that the shutdown facility is operating normally after performing "5.1.2.2.4 Setting of the connection method to the XSCF".

See

For details on how to respond to the error messages that may be output, see the following manual.

  • "12.12 Monitoring Agent messages" in the "PRIMECLUSTER Cluster Foundation (CF) Configuration and Administration Guide".

5.1.2.2.3 Specifying the Timeout Value

Confirm that the timeout value for each shutdown agent is as shown below. You can confirm the timeout value in the left-hand panel of the shutdown configuration wizard screen.

<How to calculate a timeout value>

If the value has not been set as explained above, set a timeout value as follows.

Setting a timeout value

In the CF main window of Cluster Admin, select Shutdown Facility -> Configuration Wizard from the Tool menu and then start the configuration wizard.

Figure 5.22 Selecting a configuration mode

Select Detailed configuration, and then click Next.

Select Edit, and then click Next.

Select Finish Configuration, and then click Next.

Figure 5.23 Changing the Shutdown Agent order

Click Next.

Note

Do not change the order in which shutdown agents are invoked.

Figure 5.24 Timeout value

Change the timeout values to the values (in seconds) calculated by the formula in <How to calculate a timeout value>. The default value is displayed as 20 seconds.

Upon the completion of configuration, click Next.

The "Entering node weights and administrative IP addresses" screen will appear. Click Next and then save the settings.

5.1.2.2.4 Setting of the connection method to the XSCF

On SPARC Enterprise M3000, M4000, M5000, M8000, and M9000, the default connection method to XSCF is SSH. To change it to telnet, use the following procedure.

Change of the connection method

Execute the following command on all nodes to change the connection method.

# /etc/opt/FJSVcluster/bin/clrccusetup -m -t telnet

Example)

# /etc/opt/FJSVcluster/bin/clrccusetup -m -t telnet <RETURN>
# /etc/opt/FJSVcluster/bin/clrccusetup -l <RETURN>
Device-name cluster-host-name IP-address host-name user-name connection-type
-------------------------------------------------------------------------------
xscf        fuji2             xscf2      1         xuser     telnet
xscf        fuji3             xscf3      1         xuser     telnet

Starting up the shutdown facility

Execute the following command on each node, and confirm that the shutdown facility has started.

# /opt/SMAW/bin/sdtool -s

If the configured state of the shutdown facility is displayed, the shutdown facility is running.

If "The RCSD is not running" is displayed, the shutdown facility is not running.

If the shutdown facility is running, execute the following command to restart it.

# /opt/SMAW/bin/sdtool -r

If the shutdown facility is not running, execute the following command to start it.

# /opt/SMAW/bin/sdtool -b
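The start-or-restart decision above can be sketched as a small helper that only inspects the text of `sdtool -s` output; the message string it matches is the one quoted in this section:

```shell
# decide_sf_action: given the output of `sdtool -s`, print which sdtool
# flag to use next: -b (start) if the RCSD is stopped, -r (restart) if
# the shutdown facility is already running.
decide_sf_action() {
  if echo "$1" | grep -q "The RCSD is not running"; then
    echo "-b"
  else
    echo "-r"
  fi
}

# On a real node:
#   /opt/SMAW/bin/sdtool $(decide_sf_action "$(/opt/SMAW/bin/sdtool -s 2>&1)")
decide_sf_action "The RCSD is not running"
```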

5.1.2.3 For a SPARC Enterprise T1000, T2000

5.1.2.3.1 Checking Console Configuration

SPARC Enterprise T1000 and T2000 use ALOM as the console.

Confirm the following ALOM settings before configuring the shutdown facility.

Note

  • By default, external connections to ALOM use SSH. The shutdown facility does not support SSH connections to ALOM.

  • The shutdown facility does not support a serial-port-only connection to ALOM.

Moreover, record the following information on ALOM.

  • The IP address of ALOM (*1), or an ALOM host name registered in the /etc/inet/hosts file.

  • The user name used to log in to ALOM.

  • The password used to log in to ALOM.

(*1) If network routing is configured, the IP address of ALOM does not need to be in the same segment as the administrative LAN of the cluster nodes.

See

For information on how to configure and confirm ALOM, see the "Advanced Lights out Management (ALOM) CMT Guide".

5.1.2.3.2 Setting by CLI

The shutdown facility for SPARC Enterprise T1000 or T2000 cannot be configured with the shutdown configuration wizard. Configure it with the CLI.

A confirmation popup screen will appear immediately after you complete the CF configuration with the CF wizard. Click No to close the shutdown configuration wizard.

Set up the shutdown facility according to the following procedure:

Configuring the shutdown facility

Create "/etc/opt/SMAW/SMAWsf/rcsd.cfg" with the following contents on all nodes.

CFNameX,weight=weight,admIP=myadmIP:agent=SA_sunF,timeout=timeout
CFNameX,weight=weight,admIP=myadmIP:agent=SA_sunF,timeout=timeout

CFNameX

Specify the CF node name of the cluster host.

weight

Specify the weight of the node.

myadmIP

Specify the IP address of the administrative LAN on the local node.

agent

Specify the name of the shutdown agent.
For SPARC Enterprise T1000 or T2000, specify "SA_sunF", the ALOM shutdown agent.

timeout

Specify the timeout duration (in seconds) of the shutdown agent.
For SPARC Enterprise T1000 or T2000, specify 40 seconds.

Example)

node1,weight=1,admIP=10.20.30.100:agent=SA_sunF,timeout=40
node2,weight=1,admIP=10.20.30.200:agent=SA_sunF,timeout=40
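A quick sanity check of the rcsd.cfg contents can catch typos before the shutdown facility is started. The sketch below works on a scratch copy of the example entries above; the expected agent name and 40-second timeout are the values stated in this section:

```shell
# Scratch copy of the example configuration above.
cat > /tmp/rcsd.cfg.example <<'EOF'
node1,weight=1,admIP=10.20.30.100:agent=SA_sunF,timeout=40
node2,weight=1,admIP=10.20.30.200:agent=SA_sunF,timeout=40
EOF

# Every line should name the SA_sunF agent with the 40-second timeout.
bad=$(grep -cv 'agent=SA_sunF,timeout=40' /tmp/rcsd.cfg.example || true)
echo "lines not matching the expected format: $bad"
```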
Survival priority

Even if a cluster partition occurs due to a failure in the cluster interconnect, all the nodes will still be able to access the user resources. For details on the cluster partition, see "2.2.2.1 Protecting data integrity" in the "PRIMECLUSTER Concept Guide".
To guarantee the consistency of the data constituting user resources, you have to determine the node groups to survive and those that are to be forcibly stopped.
The weight assigned to each node group is referred to as a "Survival priority" under PRIMECLUSTER.
The greater the weight of the node, the higher the survival priority. Conversely, the less the weight of the node, the lower the survival priority. If multiple node groups have the same survival priority, the node group that includes a node with the name that is first in alphabetical order will survive.

Survival priority can be found in the following calculation:

Survival priority = SF node weight + ShutdownPriority of userApplication

SF node weight (Weight):

Weight of node. Default value = 1. Set this value while configuring the shutdown facility.

userApplication ShutdownPriority:

Set this attribute when userApplication is created. For details on how to change the settings, see "8.1.2 Changing the Operation Attributes of a Cluster Application".

See

For details on the ShutdownPriority attribute of userApplication, see "6.6.5 Attributes".

Survival scenarios

The typical scenarios that are implemented are shown below:

[Largest node group survival]
  • Set the weight of all nodes to 1 (default).

  • Set the attribute of ShutdownPriority of all user applications to 0 (default).

[Specific node survival]
  • Set the "weight" of the node to survive to a value more than double the total weight of the other nodes.

  • Set the ShutdownPriority attribute of all user applications to 0 (default).

    In the following example, node1 is to survive:

[Specific application survival]
  • Set the "weight" of all nodes to 1 (default).

  • Set the ShutdownPriority attribute of the user application whose operation is to continue to a value more than double the total of the ShutdownPriority attributes of the other user applications and the weights of all nodes.

    In the following example, the node for which app1 is operating is to survive:

Configuring the ALOM shutdown agent

Enter the settings for ALOM that you recorded in "5.1.2.3.1 Checking Console Configuration".

Create "/etc/opt/SMAW/SMAWsf/SA_sunF.cfg" with the following contents on all nodes.

SystemControllerTag SystemControllerHostName SystemControllerLogin PWord void void CFNameX
SystemControllerTag SystemControllerHostName SystemControllerLogin PWord void void CFNameX
SystemControllerTag

Type of system controller.
If it is the ALOM shutdown agent, set "system-controller-alom-2k".

SystemControllerHostName

Specify the IP address of ALOM.

SystemControllerLogin

Specify the admin user name defined when ALOM was set up.

PWord

Specify the admin user password defined when ALOM was set up.

CFNameX

Specify the CF node name of the cluster host.

Example)

system-controller-alom-2k 10.20.30.100 admin admin01 void void node1
system-controller-alom-2k 10.20.30.200 admin admin01 void void node2
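Each line of the ALOM agent configuration carries exactly seven whitespace-separated fields, starting with the controller tag. The sketch below checks a scratch copy of the example above with awk (the scratch file name is arbitrary):

```shell
# Scratch copy of the example configuration above.
cat > /tmp/alom_agent.cfg.example <<'EOF'
system-controller-alom-2k 10.20.30.100 admin admin01 void void node1
system-controller-alom-2k 10.20.30.200 admin admin01 void void node2
EOF

# Verify: 7 fields per line, first field is the ALOM controller tag.
awk 'NF != 7 || $1 != "system-controller-alom-2k" { print "bad line " NR; bad = 1 }
     END { if (!bad) print "format OK" }' /tmp/alom_agent.cfg.example
```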

Starting up the shutdown facility

Execute the following command on each node, and confirm that the shutdown facility has started.

# /opt/SMAW/bin/sdtool -s

If the configuration state of the shutdown facility is displayed, the shutdown facility has started.

If "The RCSD is not running" is displayed, the shutdown facility has not started.

If the shutdown facility has started, execute the following command to restart it.

# /opt/SMAW/bin/sdtool -r

If the shutdown facility has not started, execute the following command to start it.

# /opt/SMAW/bin/sdtool -b
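The branching described above can be sketched as a small helper that inspects captured "sdtool -s" output and picks the next command. Only the decision logic is shown here; sdtool itself is not invoked.

```shell
# Illustrative sketch: given the text printed by "sdtool -s",
# choose whether to start (-b) or restart (-r) the shutdown facility.
sf_next_command() {
  if printf '%s\n' "$1" | grep -q "The RCSD is not running"; then
    echo "/opt/SMAW/bin/sdtool -b"   # not running: start it
  else
    echo "/opt/SMAW/bin/sdtool -r"   # running: restart to apply new settings
  fi
}

sf_next_command "The RCSD is not running"   # → /opt/SMAW/bin/sdtool -b
```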

Displaying the configuration of the shutdown facility

Execute the following command on each node to confirm the configuration of the shutdown facility.

# /opt/SMAW/bin/sdtool -s
Cluster Host    Agent      SA State    Shut State   Test State   Init State
-------------------------------------------------------------------------------
node1           SA_sunF    Idle        Unknown      TestWorked   InitWorked
node2           SA_sunF    Idle        Unknown      TestWorked   InitWorked
Shut State

"Unknown" is shown during normal system operation. If an error occurs and the shutdown facility stops the relevant node successfully, "Unknown" will change to "KillWorked".

Test State

Indicates the state in which the path to shut down the node is tested when a node error occurs. If the test of the path has not been completed, "Unknown" will be displayed. If the configured shutdown agent operates normally, "Unknown" will be changed to "TestWorked".

Init State

Indicates the state in which the shutdown agent is initialized.

Note

Confirm that the shutdown facility is operating normally.

In the following cases, there may be an error in the agent or hardware configuration settings:

  • The Init State is displayed as "InitFailed" even though configuration of the shutdown facility has been completed.

  • The Test State is displayed as "Unknown" or "TestFailed" even though configuration of the shutdown facility has been completed.

Check whether an error message has been output to the /var/adm/messages file or to the console, and then take action according to the content of the message.
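As a hedged sketch of the check described in this note, captured "sdtool -s" output can be scanned for the failure states mentioned above. The helper and sample text are illustrative, not part of PRIMECLUSTER.

```shell
# Illustrative scan of "sdtool -s" output for InitFailed/TestFailed,
# the two states that suggest an agent or hardware misconfiguration.
sf_check_states() {
  if printf '%s\n' "$1" | grep -Eq 'InitFailed|TestFailed'; then
    echo "NG: check agent and hardware settings, then /var/adm/messages"
  else
    echo "OK"
  fi
}

sample='node1           SA_sunF    Idle        Unknown      TestWorked   InitWorked'
sf_check_states "$sample"   # → OK
```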

5.1.2.4 For a SPARC Enterprise T5120, T5220, T5140, T5240, T5440, or SPARC T3 series

5.1.2.4.1 Checking Console Configuration

On SPARC Enterprise T5120, T5220, T5140, T5240, or T5440, or on the SPARC T3 series, ILOM is used.

Check the following ILOM settings before configuring the shutdown facility.

If you are using ILOM 3.0, check the following settings as well.

Also, record the following information about ILOM.

*1) You can check whether the CLI mode of the login user account is set to the default mode by the following procedure.

  1. Log in to the ILOM CLI.

  2. Check the prompt.
    The prompt when the account is set to the default mode:
    ->
    The prompt when the account is set to alom mode:
    sc>

*2) Because ILOM 3.0 is compatible with ILOM 2.x, this operation is also available to users who have administrator or operator privileges carried over from ILOM 2.x.

*3) If network routing is configured, the IP address of ILOM does not need to be in the same segment as the administrative LAN of the cluster nodes.

See

For details on how to configure and check ILOM settings, see the following documentation.

  • For ILOM 2.x:

    • "Integrated Lights Out Manager User's Guide"

  • For ILOM 3.0:

    • "Integrated Lights Out Manager (ILOM) 3.0 Concepts Guide"

    • "Integrated Lights Out Manager (ILOM) 3.0 Web Interface Procedures Guide"

    • "Integrated Lights Out Manager (ILOM) 3.0 CLI Procedures Guide"

    • "Integrated Lights Out Manager (ILOM) 3.0 Getting Started Guide"

5.1.2.4.2 Setting by CLI

Configuring the shutdown facility

Create the "/etc/opt/SMAW/SMAWsf/rcsd.cfg" file with the following contents on all nodes.

CFNameX,weight=weight,admIP=myadmIP:agent=SA_ilomp,timeout=timeout:agent=SA_ilomr,timeout=timeout
CFNameX,weight=weight,admIP=myadmIP:agent=SA_ilomp,timeout=timeout:agent=SA_ilomr,timeout=timeout
CFNameX

The CF node name of the cluster host.

weight

The weight of the node.

myadmIP

The IP address of the administrative LAN on the local node.

agent

The name of the shutdown agent.

Specify the ILOM shutdown agents in the order "SA_ilomp", then "SA_ilomr", on SPARC Enterprise T5120, T5220, T5140, T5240, and T5440, and on the SPARC T3 series.

timeout

The timeout duration of the shutdown agent, in seconds.

Specify 70 seconds on SPARC Enterprise T5120, T5220, T5140, T5240, and T5440, and on the SPARC T3 series.

Example)

node1,weight=1,admIP=10.20.30.100:agent=SA_ilomp,timeout=70:agent=SA_ilomr,timeout=70
node2,weight=1,admIP=10.20.30.200:agent=SA_ilomp,timeout=70:agent=SA_ilomr,timeout=70
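As a hedged sketch, an entry in the format above can be checked for the required agent order and timeout before starting the shutdown facility. The helper is illustrative only, not a PRIMECLUSTER tool.

```shell
# Illustrative check of one ILOM rcsd.cfg entry: the line must end with
# the SA_ilomp then SA_ilomr agents, each with timeout=70, as required
# for these server models.
check_ilom_entry() {
  case "$1" in
    *:agent=SA_ilomp,timeout=70:agent=SA_ilomr,timeout=70) echo "OK" ;;
    *) echo "NG" ;;
  esac
}

check_ilom_entry "node1,weight=1,admIP=10.20.30.100:agent=SA_ilomp,timeout=70:agent=SA_ilomr,timeout=70"   # → OK
```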
Survival priority

Even if a cluster partition occurs due to a failure in the cluster interconnect, all the nodes will still be able to access the user resources. For details on the cluster partition, see "2.2.2.1 Protecting data integrity" in the "PRIMECLUSTER Concept Guide".
To guarantee the consistency of the data constituting user resources, you have to determine the node groups to survive and those that are to be forcibly stopped.
The weight assigned to each node group is referred to as a "Survival priority" under PRIMECLUSTER.
The greater the weight of the node, the higher the survival priority. Conversely, the less the weight of the node, the lower the survival priority. If multiple node groups have the same survival priority, the node group that includes a node with the name that is first in alphabetical order will survive.

Survival priority is determined by the following calculation:

Survival priority = SF node weight + ShutdownPriority of userApplication

SF node weight (Weight):

The weight of the node. The default value is 1. Set this value when configuring the shutdown facility.

userApplication ShutdownPriority:

Set this attribute when userApplication is created. For details on how to change the settings, see "8.1.2 Changing the Operation Attributes of a Cluster Application".

See

For details on the ShutdownPriority attribute of userApplication, see "6.6.5 Attributes".

Survival scenarios

The typical scenarios that are implemented are shown below:

[Largest node group survival]
  • Set the weight of all nodes to 1 (default).

  • Set the attribute of ShutdownPriority of all user applications to 0 (default).

[Specific node survival]
  • Set the "weight" of the node to survive to a value more than double the total weight of the other nodes.

  • Set the ShutdownPriority attribute of all user applications to 0 (default).

    In the following example, node1 is to survive:

[Specific application survival]
  • Set the "weight" of all nodes to 1 (default).

  • Set the ShutdownPriority attribute of the user application whose operation is to continue to a value more than double the total of the ShutdownPriority attributes of the other user applications and the weights of all nodes.

    In the following example, the node for which app1 is operating is to survive:
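The formula and the [Specific application survival] scenario above can be illustrated with a small calculation. The node names, weights, and ShutdownPriority values below are assumptions for illustration only; they are not values taken from this manual.

```shell
# Illustrative survival-priority calculation for a two-node cluster
# where app1 (hypothetical) runs on node1.
# Survival priority = SF node weight + ShutdownPriority of userApplication.
node1_weight=1
node2_weight=1
app1_priority=5   # 5 > 2 * (0 + 1 + 1): more than double the other apps'
                  # priorities plus the weights of all nodes, per the rule above
node1_survival=$((node1_weight + app1_priority))
node2_survival=$((node2_weight + 0))
echo "node1=$node1_survival node2=$node2_survival"   # → node1=6 node2=1
```

Because node1's survival priority (6) exceeds node2's (1), the partition containing node1, where app1 is operating, survives.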

Configuring the ILOM shutdown agent

Enter the settings for ILOM that you recorded in "5.1.2.4.1 Checking Console Configuration".

Execute the clrccusetup(1M) command on all nodes to register the console information of the local node.

Example)

# /etc/opt/FJSVcluster/bin/clrccusetup -a ilom 10.20.30.51 admin <RETURN>
Enter User's Password:
Re-enter User's Password:
# /etc/opt/FJSVcluster/bin/clrccusetup -l
Device-name cluster-host-name   IP-address     host-name    user-name
-------------------------------------------------------------------------------
ilom        node1               10.20.30.50    -            admin
ilom        node2               10.20.30.51    -            admin

Starting up the console asynchronous monitoring

Execute the following command on each node to check whether the console asynchronous monitoring daemon has started.

# /etc/opt/FJSVcluster/bin/clrccumonctl

If "The devrccud daemon exists." is displayed, the console asynchronous monitoring daemon has started.

If "The devrccud daemon does not exist." is displayed, the console asynchronous monitoring daemon has not started. Execute the following command to start it.

# /etc/opt/FJSVcluster/bin/clrccumonctl start
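The branch above can be sketched in the same style as the shutdown-facility check: given the clrccumonctl message, decide whether the daemon must be started. Only the decision logic is shown; the command is not invoked here.

```shell
# Illustrative sketch: choose the next action from the message printed
# by clrccumonctl about the devrccud daemon.
monctl_next_command() {
  if printf '%s\n' "$1" | grep -q "does not"; then
    echo "/etc/opt/FJSVcluster/bin/clrccumonctl start"
  else
    echo "none"   # daemon already running, nothing to do
  fi
}
```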

Starting up the shutdown facility

Execute the following command on each node to check whether the shutdown facility has started.

# /opt/SMAW/bin/sdtool -s

If the configuration state of the shutdown facility is displayed, the shutdown facility has started.

If "The RCSD is not running" is displayed, the shutdown facility has not started.

If the shutdown facility has started, execute the following command to restart it.

# /opt/SMAW/bin/sdtool -r

If the shutdown facility has not started, execute the following command to start it.

# /opt/SMAW/bin/sdtool -b

Displaying the configuration of the shutdown facility

Execute the following command on each node to confirm the configuration of the shutdown facility.

# /opt/SMAW/bin/sdtool -s
Cluster Host    Agent         SA State    Shut State   Test State   Init State
-------------------------------------------------------------------------------
node1           SA_ilomp.so    Idle        Unknown      TestWorked   InitWorked
node1           SA_ilomr.so    Idle        Unknown      TestWorked   InitWorked
node2           SA_ilomp.so    Idle        Unknown      TestWorked   InitWorked
node2           SA_ilomr.so    Idle        Unknown      TestWorked   InitWorked
Shut State

"Unknown" is shown during normal system operation. If an error occurs and the shutdown facility stops the relevant node successfully, "Unknown" will change to "KillWorked".

Test State

Indicates the state in which the path to shut down the node is tested when a node error occurs. If the test of the path has not been completed, "Unknown" will be displayed. If the configured shutdown agent operates normally, "Unknown" will be changed to "TestWorked".

Init State

Indicates the state in which the shutdown agent is initialized.

Note

Confirm from the output of the sdtool -s command that the shutdown facility is operating normally.

  • In the following cases, there may be an error in the agent or hardware configuration settings:

    • The Init State is displayed as "InitFailed" even though configuration of the shutdown facility has been completed.

    • The Test State is displayed as "Unknown" or "TestFailed" even though configuration of the shutdown facility has been completed.

    Check whether an error message has been output to the /var/adm/messages file or to the console, and then take action according to the content of the message.

  • Do not make three or more simultaneous connections to ILOM.
    If this is unavoidable, stop the shutdown facility on all nodes beforehand. After the connections are closed, confirm that the shutdown facility has started normally on all nodes. For details on stopping, starting, and checking the state of the shutdown facility, see the sdtool(1M) manual page.