On the global zone, build the cluster applications corresponding to each non-global zone. Perform this task for each non-global zone.
If you created temporary cluster applications in "13.3.2 Creating Cluster Applications on the Global Zone," stop RMS and then delete them. When deleting the cluster applications, do not delete their resources.
Create the Cmdline resource that controls, from the global zone, the non-global zone and the cluster applications configured in the non-global zone.
If performing application monitoring within the non-global zone (except single-node cluster operations)
Create the Cmdline resource.
Select "Path Input" from "Creation Method" for creating the Cmdline to configure the Start script, Stop script, and Check script respectively as follows:
Start script
/opt/SMAW/bin/hvzone -c -z <zone_name> -a <app_name> {-s|-n} -t <timeout> -b {Solaris8|Solaris9}
Stop script
/opt/SMAW/bin/hvzone -u -z <zone_name> -a <app_name> {-s|-n} -t <timeout> -b {Solaris8|Solaris9}
Check script
/opt/SMAW/bin/hvzone -m -z <zone_name> -a <app_name> {-s|-n} -t <timeout> -b {Solaris8|Solaris9}
The scripts above differ only in the -c, -u, and -m options. For <zone_name> and <app_name>, specify the name of the zone to be controlled and the name of the cluster application configured in the non-global zone, respectively.
For <timeout>, specify the timeout value of the shutdown processing in seconds. During Offline processing, the script stops RMS and then the non-global zone; if the processing has not completed after the time specified here has elapsed, the script forcibly stops the non-global zone with the halt command of zoneadm (zoneadm -z <zone_name> halt).
If the non-global zone is shared between cluster nodes, specify the -s option; if it is not shared, specify the -n option.
To control Solaris 8 Containers, add the -b Solaris8 option at the end. To control Solaris 9 Containers, add the -b Solaris9 option at the end.
An example is given below. In this example, the zone name is zone1, the cluster application name in the non-global zone is userApp_0, the timeout value is 200 seconds, and the non-global zone is a Solaris 9 Container shared between cluster nodes.
Start script
/opt/SMAW/bin/hvzone -c -z zone1 -a userApp_0 -s -t 200 -b Solaris9
Stop script
/opt/SMAW/bin/hvzone -u -z zone1 -a userApp_0 -s -t 200 -b Solaris9
Check script
/opt/SMAW/bin/hvzone -m -z zone1 -a userApp_0 -s -t 200 -b Solaris9
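For contrast, the Start script for a zone that is not shared between cluster nodes and is not a Solaris 8 or Solaris 9 Container would specify the -n option and omit -b; the Stop and Check scripts again differ only in the -u and -m options. The zone and application names below simply reuse the ones from the example above.
Start script
/opt/SMAW/bin/hvzone -c -z zone1 -a userApp_0 -n -t 200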
After that, set up the script attributes. Click the "Flag" button and set the following values.
Flag | Overview |
---|---|
ALLEXITCODES | Change this to "Yes." |
AUTORECOVER | For single-node cluster operations, set this to "Yes." In this case, do not set the following resources; they are used to take over IP addresses among multiple nodes and are therefore unnecessary for single-node cluster operations. |
STANDBYCAPABLE | If the non-global zone is not shared between cluster nodes, change this to "Yes." |
TIMEOUT | The default value is 300 seconds. Set a value larger than the total of the following values: the time for the entire startup sequence to finish and the time it takes for the cluster application defined in the non-global zone to enter Online state. As a guide, it is about 900 seconds. |
If not performing application monitoring within the non-global zone, or for single-node cluster operations
Placing the start_zone.sh, stop_zone.sh, and check_zone.sh scripts
Create the script files for each Cmdline resource. Create the script files on every node that uses the Cmdline resources.
An example is shown below. Modify the contents as needed for your environment.
Start script: /var/tmp/PCL/rmstools/start_zone.sh
Stop script: /var/tmp/PCL/rmstools/stop_zone.sh
Check script: /var/tmp/PCL/rmstools/check_zone.sh
To control Solaris 8 Containers or Solaris 9 Containers, edit the contents of start_zone.sh according to the comments below.
Creating the script files

# vi /var/tmp/PCL/rmstools/start_zone.sh
(Paste the content of the Start script shown below)
# vi /var/tmp/PCL/rmstools/stop_zone.sh
(Paste the content of the Stop script shown below)
# vi /var/tmp/PCL/rmstools/check_zone.sh
(Paste the content of the Check script shown below)
# chmod +x /var/tmp/PCL/rmstools/start_zone.sh
# chmod +x /var/tmp/PCL/rmstools/stop_zone.sh
# chmod +x /var/tmp/PCL/rmstools/check_zone.sh
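Because the same script files must exist on every node that uses the Cmdline resources, one possible way is to copy them from the node where they were created. The node name node2 below is only a placeholder, and any copy method may be used; create the /var/tmp/PCL/rmstools directory on the destination node beforehand if it does not exist.

# scp -p /var/tmp/PCL/rmstools/start_zone.sh \
      /var/tmp/PCL/rmstools/stop_zone.sh \
      /var/tmp/PCL/rmstools/check_zone.sh \
      node2:/var/tmp/PCL/rmstools/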
Start script
#!/bin/sh
MYZONE=$1

zoneadm -z $MYZONE list -p | grep :configured:
if [ $? -eq 0 ]; then
    zoneadm -z $MYZONE attach -F || exit $?
fi

zoneadm -z $MYZONE list -p | grep :running:
if [ $? -eq 0 ]; then
    zoneadm -z $MYZONE reboot
    RET=$?
else
    # Remove # of the below line if MYZONE is Solaris 8 container
    # /usr/lib/brand/solaris8/s8_p2v $MYZONE
    # Remove # of the below line if MYZONE is Solaris 9 container
    # /usr/lib/brand/solaris9/s9_p2v $MYZONE
    zoneadm -z $MYZONE boot
    RET=$?
fi
exit $RET
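Before registering the scripts in a Cmdline resource, the Start script can, for example, be run once by hand on one node to confirm that it boots the zone and exits with status 0 (zone1 is an example zone name; this check is not part of the required procedure). An exit status of 0 and a zone state of running indicate that the script behaves as expected.

# /var/tmp/PCL/rmstools/start_zone.sh zone1
# echo $?
# zoneadm list -cv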
Stop script
#!/bin/sh
MYZONE=$1
RET=0
RET2=0

zoneadm -z $MYZONE list -p | grep :running:
if [ $? -eq 0 ]; then
    zoneadm -z $MYZONE halt
    RET=$?
fi

zoneadm -z $MYZONE list -p | grep :installed:
if [ $? -eq 0 ]; then
    zoneadm -z $MYZONE detach
    RET2=$?
fi

if [ $RET -eq 0 ]; then
    exit $RET2
fi
exit $RET
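The Stop script can be checked in the same way. Because it halts and then detaches the zone, the zone is expected to be in the configured state after a successful run (zone1 is again an example).

# /var/tmp/PCL/rmstools/stop_zone.sh zone1
# zoneadm list -cv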
Check script
#!/bin/sh
# Return Offline if zlogin to the NGZ does not end in 30 seconds (Please change if needed)
TIMEOUT=30
MYZONE=$1

zoneadm -z $MYZONE list -p | grep :running: > /dev/null 2>&1
RET=$?
if [ $RET -ne 0 ]; then
    exit $RET
fi

/usr/sbin/zlogin $MYZONE "/usr/bin/ls >/dev/null 2>&1" 2>/dev/null &
PID=$!

i=0
while [ $i -lt $TIMEOUT ]
do
    ps -p $PID > /dev/null 2>&1
    if [ $? -ne 0 ]; then
        wait $PID
        exit $?
    fi
    sleep 1
    i=`expr $i + 1`
done
exit 1
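RMS interprets an exit status of 0 from a Check script as Online and any other status as Offline. The Check script can therefore also be run by hand, once while the zone is running and once while it is stopped, to confirm that it returns 0 only in the former case (zone1 is an example).

# /var/tmp/PCL/rmstools/check_zone.sh zone1
# echo $?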
Creating the Cmdline resource
Select "Path Input" from "Creation Method" for creating the Cmdline to configure the Start script, Stop script, and Check script respectively as follows (When the zone name is zone1).
Start script
/var/tmp/PCL/rmstools/start_zone.sh zone1
Stop script
/var/tmp/PCL/rmstools/stop_zone.sh zone1
Check script
/var/tmp/PCL/rmstools/check_zone.sh zone1
After that, set up the script attributes. Click the "Flag" button and set the following values.
Flag | Overview |
---|---|
AUTORECOVER | For single-node cluster operations, set this to "Yes." |
If the non-global zone is a shared-IP zone and the non-global zone images are not shared, create a resource for the takeover IP address in the global zone. For details, see "13.2.5.3 Creating the Cmdline Resource for Shared IP Control" in "13.2.5 Reconfiguration of Cluster Applications on Global Zone."
Note
In Solaris 8 Containers and Solaris 9 Containers environments, IPv6 addresses cannot be used as takeover IP addresses.
In addition to the Gds resource, Gls resource, and Fsystem resource previously registered in the global zone, add the Cmdline resource created in "3.5.2 Creating the Cmdline Resource for Non-Global Zone Control" and create the cluster application corresponding to the target non-global zone.
The non-global zone must be stopped when creating the cluster application. If the non-global zone is running, stop the non-global zone controlled by the cluster application on all cluster nodes with the command below, and then create the cluster application.
# zlogin zone-a shutdown -i0 -g0 -y
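To confirm from the global zone that the non-global zone has stopped, its state can be checked as follows (zone-a is the zone name used in the example above); a state of installed indicates that the zone is no longer running.

# zoneadm list -cv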
For the procedure for creating the cluster applications, follow "6.7.2.1 Creating Standby Cluster Applications." However, the procedure differs in the following respect.
Cluster application attributes
If using warm-standby, be sure to set the "Standby Transitions" to "ClearFaultRequest|StartUp|SwitchRequest."