This section describes the operations to perform before the Live Migration in an Oracle VM Server for SPARC environment.
Note
The patch for PRIMECLUSTER (T007881SP-02 or later for Solaris 10, or T007882SP-02 or later for Solaris 11) needs to be applied to all cluster nodes.
The prerequisites described in "14.2.1 Setting Prerequisites for a Guest Domain on a Control Domain" must be completed before performing this operation.
Make sure that the following items are consistent on all control domains of the target clusters: the combination of the XSCF user name and password registered in the shutdown facility, and the connection method to the XSCF.
Check that these settings have also been performed on the destination servers.
Before performing this operation, make sure that all nodes of the target cluster between guest domains are operating normally.
After you perform this operation, the cluster system cannot be switched until "14.3.1.2 Operation after Performing Live Migration" is completed.
After performing this operation, be sure to perform "14.3.1.2 Operation after Performing Live Migration" even if you cancel the Live Migration.
After performing this operation, be sure to perform "14.3.1.2 Operation after Performing Live Migration" even if the Live Migration fails.
Do not perform the Live Migration during a cluster system switchover.
Changing the cluster configuration (control domain)
Change the cluster configuration before performing the Live Migration.
Execute the following command on the cluster nodes of the source control domain.
# /etc/opt/FJSVcluster/bin/clovmmigrate -p source-ldom
source-ldom : The name of the guest domain to be migrated
When you execute this command, the following cluster configuration changes are made on all nodes of the cluster between guest domains that contains the guest domain specified for source-ldom (an execution example is shown after this list).
The timeout value of the CF cluster interconnect (changed from 10 seconds to 600 seconds)
Stopping the shutdown facility
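For example, assuming the two-node cluster between guest domains shown in "Figure 14.1 Cluster configuration example" and assuming that the guest domain to be migrated is named guest2 (an illustrative name; substitute the actual guest domain name in your environment), the command would be executed as follows on the source control domain:
# /etc/opt/FJSVcluster/bin/clovmmigrate -p guest2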
This section describes the operations to perform after the Live Migration in an Oracle VM Server for SPARC environment.
Note
After performing "14.3.1.1 Operation before Performing Live Migration," perform this operation even though you cancel the Live Migration.
After performing "14.3.1.1 Operation before Performing Live Migration," perform this operation even though the Live Migration failed.
A cluster system will not be switched until this operation is completed.
Perform step 1 on a control domain.
Perform step 2 on a guest domain.
When performing this operation, the number of saved logical domain configurations must not exceed 6. Execute the following command to check the saved logical domain configurations.
Also, check that a configuration named "config_tmp" does not exist among the saved logical domain configurations.
# ldm list-spconfig
If 7 or more logical domain configurations are saved, execute the following command to delete configurations until 6 or fewer remain.
# ldm remove-spconfig config-name
If "config_tmp" exists in the configuration name of the logical domain, execute the following command to delete it.
# ldm remove-spconfig config_tmp
For details on logical domain configurations, see "Operations and Commands Related to Logical Domain Configurations" in the "SPARC M10 Systems Domain Configuration Guide."
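As an illustrative sequence (the configuration name old-config is only an example; specify the actual names displayed by ldm list-spconfig), the check and cleanup could look like this:
# ldm list-spconfig
# ldm remove-spconfig old-config
# ldm remove-spconfig config_tmp
Repeat ldm remove-spconfig until 6 or fewer configurations remain, and remove "config_tmp" only if it is listed.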
Changing the cluster configuration (control domain)
Change the cluster configuration after performing the Live Migration.
Execute the following command on the cluster nodes of the source control domain:
# /etc/opt/FJSVcluster/bin/clovmmigrate -u source-ldom target-host
source-ldom : The name of the guest domain that was migrated
target-host : The IP address of the destination control domain, or the host name registered in the /etc/inet/hosts file
Even if the guest domain was not actually migrated (for example, because the Migration was canceled), you must still specify a control domain. In this case, specify the IP address of the source control domain or the host name registered in the /etc/inet/hosts file.
When you execute this command, the following cluster configuration changes are made on all nodes of the cluster between guest domains that contains the guest domain specified for source-ldom (an execution example is shown after this list). In addition, the logical domain configuration is saved on both the source control domain and the destination control domain.
The timeout value of the CF cluster interconnect (changed back from 600 seconds to 10 seconds)
Changing the settings of the shutdown facility (IP address of XSCF-LAN#0, IP address of XSCF-LAN#1, and the SF weight)
Starting the shutdown facility
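For example, continuing the configuration shown in "Figure 14.1 Cluster configuration example," if guest2 was migrated and the host name of the destination control domain is registered as primecl02 (both names are illustrative; use the names in your environment), the command would be:
# /etc/opt/FJSVcluster/bin/clovmmigrate -u guest2 primecl02
If the Migration was canceled and the guest domain was not moved, specify the host name (or IP address) of the source control domain instead.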
Checking the state of the shutdown facility (guest domain)
Execute the following commands on each node of a guest domain to check if the cluster is configured correctly after the Live Migration.
# /etc/opt/FJSVcluster/bin/clsnmpsetup -l
# /opt/SMAW/bin/sdtool -s
Note
If TestFailed or InitFailed is displayed, the settings of the shutdown facility may not have been changed.
Go back to step 1 and try again.
Example: When the Migration is performed for guest2 in a two-node cluster between guest domains (see "Figure 14.1 Cluster configuration example")
guest2 # /etc/opt/FJSVcluster/bin/clsnmpsetup -l
device-name cluster-host-name PPAR-ID domain-name IP-address1 IP-address2 user-name connection-type
-------------------------------------------------------------------------------------------------------------------
xscf        cfguest1          0       guest1      10.20.30.71 10.20.40.71 xuser     ssh
xscf        cfguest2          1       guest2      10.20.30.73 10.20.40.73 xuser     ssh
                              ^^                  ^^^^^^^^^^^^^^^^^^^^^^^
                              |                   The XSCF IP addresses of the migration destination
                              The PPAR-ID of the migration destination
guest2 #
guest2 # /opt/SMAW/bin/sdtool -s
Cluster Host   Agent               SA State   Shut State   Test State   Init State
------------   -----               --------   ----------   ----------   ---------------
cfguest1       SA_xscfsnmpg0p.so   Idle       Unknown      TestWorked   InitWorked
cfguest1       SA_xscfsnmpg1p.so   Idle       Unknown      TestWorked   InitWorked
cfguest1       SA_xscfsnmpg0r.so   Idle       Unknown      TestWorked   InitWorked
cfguest1       SA_xscfsnmpg1r.so   Idle       Unknown      TestWorked   InitWorked
cfguest2       SA_xscfsnmpg0p.so   Idle       Unknown      TestWorked   InitWorked
cfguest2       SA_xscfsnmpg1p.so   Idle       Unknown      TestWorked   InitWorked
cfguest2       SA_xscfsnmpg0r.so   Idle       Unknown      TestWorked   InitWorked
cfguest2       SA_xscfsnmpg1r.so   Idle       Unknown      TestWorked   InitWorked
guest2 #
Note
After performing the Migration, the following message, which indicates that the time is not synchronized between the cluster nodes, may be output to the switchlog or the /var/adm/messages file.
(WRP, 34) Cluster host <host> is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed. Further out-of-sync messages will appear in the syslog.
If this situation continues, the following message may be periodically printed in the /var/adm/messages file.
(WRP, 35) Cluster host <host> is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed.
These messages stop being output once the time is synchronized. For details on the messages, see "PRIMECLUSTER Messages."