PRIMECLUSTER Installation and Administration Guide 4.3

14.3.3 Performing Live Migration of the Cluster on a Guest Domain

14.3.3.1 Operation before Performing Live Migration

This section describes the operations required before performing Live Migration in an Oracle VM Server for SPARC environment.

Note

  1. Changing the cluster configuration (guest domain)

    Change the cluster configuration before performing the Live Migration.
    Execute the following command on one of the cluster nodes of the guest domain.

    # /etc/opt/FJSVcluster/bin/clovmmigrate -p

    If you execute this command, the following cluster configurations are changed on all nodes:

    • The timeout value of the CF cluster interconnect (changed from 10 seconds to 600 seconds)

    • Stopping the shutdown facility
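The pre-migration step can be wrapped in a small guard so it fails loudly on a node where PRIMECLUSTER is not installed. This is a minimal sketch, not part of the product: the command path is taken from this guide, and the script is assumed to run as root on one cluster node of the guest domain.

```shell
#!/bin/sh
# Sketch: apply the pre-Live-Migration cluster configuration.
# Run as root on one cluster node of the guest domain.
CMD=/etc/opt/FJSVcluster/bin/clovmmigrate

if [ -x "$CMD" ]; then
    # Extends the CF interconnect timeout and stops the shutdown
    # facility on all nodes, as described above.
    "$CMD" -p && echo "pre-migration configuration applied on all nodes"
else
    echo "clovmmigrate not found; run this on a PRIMECLUSTER guest-domain node" >&2
fi
```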

14.3.3.2 Operation after Performing Live Migration

This section describes the operation after performing the Live Migration.

Note

  1. Saving the logical domains configuration information on the source control domain (control domain)
    On the source control domain, save the logical domains configuration information.
    This operation must be done on the source control domain.

    For details, see "SPARC M10 Systems System Operation and Administration Guide."

  2. Saving the logical domains configuration information on the destination control domain (control domain)
    On the destination control domain, save the logical domains configuration information.
    This operation must be done on the destination control domain.

    For details, see "SPARC M10 Systems System Operation and Administration Guide."

  3. Changing the cluster configuration (guest domain)
    Change the cluster configuration after performing the Live Migration.
    Execute the following command on one of the cluster nodes.

    # /etc/opt/FJSVcluster/bin/clovmmigrate -u source-ldom target-host
    source-ldom

    The target guest domain name to be migrated

    target-host

    The host name registered for the IP address of the destination control domain, or the host name registered in the /etc/inet/hosts file
    Even if the guest domain was not actually migrated (for example, because the Live Migration was canceled), you must still specify a control domain. In that case, specify the host name registered for the IP address of the source control domain, or the host name registered in the /etc/inet/hosts file.

    If you execute this command, the following cluster configurations are changed on all nodes:

    • The timeout value of the CF cluster interconnect (600 seconds to 10 seconds)

    • Changing the settings of the shutdown facility (the IP address of XSCF-LAN#0, the IP address of XSCF-LAN#1, and the SF weight)

    • Starting the shutdown facility
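For illustration, the post-migration command can be scripted as follows. The guest domain name (guest2) and the destination control-domain host name (primecl02) are assumptions made for this sketch; substitute the names from your own configuration.

```shell
#!/bin/sh
# Sketch: restore the cluster configuration after Live Migration.
# guest2 and primecl02 are hypothetical names for this example.
LDOM=guest2        # migrated guest domain
TARGET=primecl02   # host name of the destination control domain
CMD=/etc/opt/FJSVcluster/bin/clovmmigrate

if [ -x "$CMD" ]; then
    # Restores the CF timeout, updates the XSCF addresses and the SF
    # weight, and restarts the shutdown facility on all nodes.
    "$CMD" -u "$LDOM" "$TARGET"
else
    echo "clovmmigrate not found; run this on a PRIMECLUSTER guest-domain node" >&2
fi
```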

  4. Checking the state of the shutdown facility (guest domain)
    Execute the following commands on each node of a guest domain to check if the cluster is configured correctly after the Live Migration.

    # /etc/opt/FJSVcluster/bin/clsnmpsetup -l
    # /opt/SMAW/bin/sdtool -s

    Note

    If TestFailed or InitFailed is displayed, the settings of the shutdown facility may not have been changed correctly.
    Go back to step 3 and execute the command again.

    Example: When the Migration is performed for guest2 in the two-node cluster between guest domains ("Figure 14.1 Cluster configuration example")

    guest2 # /etc/opt/FJSVcluster/bin/clsnmpsetup -l
    device-name cluster-host-name  PPAR-ID  domain-name  IP-address1     IP-address2     user-name      connection-type
    -------------------------------------------------------------------------------------------------------------------
    xscf        cfguest1           0        guest1       10.20.30.71     10.20.40.71     xuser          ssh
    xscf        cfguest2           1        guest2       10.20.30.73     10.20.40.73     xuser          ssh
    guest2 #                    ^^                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    guest2 #                                             The target XSCF IP address to be migrated
    guest2 #                       The target PPAR-ID to be migrated
    guest2 # /opt/SMAW/bin/sdtool -s
    Cluster Host    Agent                SA State      Shut State  Test State  Init State
    ------------    -----                --------      ----------  ----------  ----------
    cfguest1        SA_xscfsnmpg0p.so    Idle          Unknown     TestWorked  InitWorked
    cfguest1        SA_xscfsnmpg1p.so    Idle          Unknown     TestWorked  InitWorked
    cfguest1        SA_xscfsnmpg0r.so    Idle          Unknown     TestWorked  InitWorked
    cfguest1        SA_xscfsnmpg1r.so    Idle          Unknown     TestWorked  InitWorked
    cfguest2        SA_xscfsnmpg0p.so    Idle          Unknown     TestWorked  InitWorked
    cfguest2        SA_xscfsnmpg1p.so    Idle          Unknown     TestWorked  InitWorked
    cfguest2        SA_xscfsnmpg0r.so    Idle          Unknown     TestWorked  InitWorked
    cfguest2        SA_xscfsnmpg1r.so    Idle          Unknown     TestWorked  InitWorked
    guest2 #
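The check described in the note above can be automated by scanning the sdtool -s output for TestFailed or InitFailed entries. The sketch below runs against captured sample output so it is self-contained; on a live node you would pipe the actual output of /opt/SMAW/bin/sdtool -s instead.

```shell
#!/bin/sh
# Sample sdtool -s output (one deliberately failed entry for illustration).
sdtool_output='Cluster Host    Agent                SA State      Shut State  Test State  Init State
cfguest1        SA_xscfsnmpg0p.so    Idle          Unknown     TestWorked  InitWorked
cfguest2        SA_xscfsnmpg0p.so    Idle          Unknown     TestFailed  InitWorked'

# Count rows whose Test State or Init State reports a failure.
failures=$(printf '%s\n' "$sdtool_output" | grep -cE 'TestFailed|InitFailed')

if [ "$failures" -gt 0 ]; then
    echo "shutdown facility not set correctly ($failures entries); repeat step 3"
else
    echo "shutdown facility OK"
fi
```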

Note

After performing the Migration, the following message, which indicates that the time is not synchronized between the cluster nodes, may be output to the switchlog or the /var/adm/messages file.

(WRP, 34) Cluster host <host> is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed.
Further out-of-sync messages will appear in the syslog.

If this situation continues, the following message may be periodically printed in the /var/adm/messages file.

(WRP, 35) Cluster host <host> is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed.

This message stops once the time is synchronized. For details on the messages, see "PRIMECLUSTER Messages."
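If you want to detect these warnings mechanically, matching the (WRP, 34) and (WRP, 35) message IDs is enough. The sketch below uses a hypothetical sample log line (timestamp and host names are invented) so it is self-contained; on a node you would grep /var/adm/messages instead.

```shell
#!/bin/sh
# Sample /var/adm/messages line (hypothetical timestamp and host names).
sample='May  1 12:00:00 guest2 RMS: (WRP, 35) Cluster host cfguest1 is no longer in time sync with local node. Sane operation of RMS can no longer be guaranteed.'

# Count out-of-sync warnings (message IDs WRP 34 and WRP 35).
warnings=$(printf '%s\n' "$sample" | grep -cE '\(WRP, 3[45]\)')

if [ "$warnings" -gt 0 ]; then
    echo "time sync warnings found; check the time synchronization settings on all cluster nodes"
fi
```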