PRIMECLUSTER Installation and Administration Guide 4.3

13.2.4 Creating Non-Global Zones

This section describes the procedure for building non-global zones. Perform this section's procedure once for each non-global zone that is required. Note the following when creating non-global zones.

Unless otherwise specified in the remainder of this section, perform the procedures only on the operational system if using a configuration that shares non-global zone images between cluster nodes. If using a configuration that does not share non-global zone images between cluster nodes, perform the procedures on all nodes.

13.2.4.1 Creating the Resource Pool

Create a resource pool beforehand for each non-global zone to be created. For details on the procedure, see the Oracle Solaris documents.

If building a cluster with a Solaris Zones environment, make the number of CPU cores allocated to the global zone two or more.
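
The following is a minimal sketch of creating a dedicated resource pool with the Oracle Solaris resource pools facility. The pool name (pool-zone-a), processor set name (pset-zone-a), and CPU counts are examples only; see the Oracle Solaris documents for the authoritative procedure.

# pooladm -e     (enable the resource pools facility)
# pooladm -s     (save the current dynamic configuration to /etc/pooladm.conf)
# poolcfg -c 'create pset pset-zone-a (uint pset.min = 1; uint pset.max = 2)'
# poolcfg -c 'create pool pool-zone-a'
# poolcfg -c 'associate pool pool-zone-a (pset pset-zone-a)'
# pooladm -c     (commit the configuration)

The pool can later be bound to the non-global zone with "set pool=pool-zone-a" in zonecfg.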

See

If using ZFS in local classes, see "If using ZFS with a local" of "A.2.38 If Using ZFS" in "PRIMECLUSTER Global Disk Services Configuration and Administration Guide."

13.2.4.2 Creating the Non-Global Zone

Create the non-global zones using the zonecfg command, referring to the following example.

# zonecfg -z zone-a    *1

*1: "zone-a" is the zone name (the same applies below)

zone-a: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone-a> create (if the global zone and non-global zone types are the same)
zonecfg:zone-a> create -t SYSsolaris10 (If using Oracle Solaris 10 Zones on Oracle Solaris 11)
zonecfg:zone-a> set zonepath=/zone-a-system *2
zonecfg:zone-a> set autoboot=true (for warm-standby or single-node cluster)
zonecfg:zone-a> set autoboot=false (for cold-standby)

*2: For /zone-a-system, specify the directory to which the zone-a images are allocated.

If not sharing images, specify the file system on the local system.

If sharing images, specify the mountpoint registered as an Fsystem resource.

zonecfg:zone-a> set limitpriv="default,proc_priocntl"
zonecfg:zone-a> add fs
zonecfg:zone-a:fs> set dir=/oracle-data
zonecfg:zone-a:fs> set special=/zone-a-oracle *3
zonecfg:zone-a:fs> set type=lofs
zonecfg:zone-a:fs> end

*3: For /zone-a-oracle, specify the directory of the Fsystem resource corresponding to the volume for zone-a Oracle data allocation.

zonecfg:zone-a> remove inherit-pkg-dir dir=/lib    *4
zonecfg:zone-a> remove inherit-pkg-dir dir=/platform *4
zonecfg:zone-a> remove inherit-pkg-dir dir=/sbin *4
zonecfg:zone-a> remove inherit-pkg-dir dir=/usr *4

*4: If creating non-global zones on a Solaris 10 global zone, use "remove inherit-pkg-dir" so that system files are not inherited from the global zone, making this a whole root zone. For a Solaris 11 global zone, this procedure is not required.

[If making the non-global zone's network mode a shared IP zone configuration]
zonecfg:zone-a> set ip-type=shared
zonecfg:zone-a> remove anet *5
zonecfg:zone-a> add net (If making it a shared IP zone configuration)
zonecfg:zone-a:net> set physical=e1000g0 *6

*5: If creating a shared IP zone on a Solaris 11 global zone, the anet needs to be removed after changing the ip-type. For a Solaris 10 global zone, this procedure is not required.

*6: If specifying a network interface multiplexed with GLS, specify the Primary interface for the corresponding Gls resource.

zonecfg:zone-a:net> set address=10.20.30.40/24
zonecfg:zone-a:net> end
[If making the non-global zone's network mode an exclusive IP zone configuration]
zonecfg:zone-a> set ip-type=exclusive
zonecfg:zone-a> add net
zonecfg:zone-a:net> set physical=e1000g0 *7
zonecfg:zone-a:net> end

*7: Specify the physical interface exclusive to Zones. Perform the IP address setup and physical interface multiplexing from within Zones.

zonecfg:zone-a> add net
zonecfg:zone-a:net> set physical=e1000g1 *8
zonecfg:zone-a:net> end

*8: If multiplexing the physical interface within Zones, it is necessary to specify two or more physical interfaces.

zonecfg:zone-a> verify
zonecfg:zone-a> commit
zonecfg:zone-a> exit
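
After committing, the resulting zone configuration can be reviewed with the info subcommand, for example:

# zonecfg -z zone-a info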

See

For details, see the manual for the zonecfg command and also Oracle Solaris documents.

Note

If using a shared IP zone configuration

For the IP address to be set up in the non-global zone, use an IP address that is not being used by GLS. Set up the default gateway for the zone in the global zone. If the default gateway is set up using the zonecfg command, the routes will be disabled when NIC switching is performed with GLS.
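
For example, a persistent default route can be added in the global zone as follows (the gateway address 10.20.30.1 is an example only):

# route -p add default 10.20.30.1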

13.2.4.3 OS Installation to the Non-Global Zone

Install the OS to the non-global zone.

If newly installing Solaris 10 or Solaris 11 to the non-global zone, perform Step 1 as below. If creating the non-global zone from an archive, perform Step 2 as below. For details, see Oracle Solaris documents.

  1. Newly Installing Solaris 10 or Solaris 11

    1. Check that the IPS package repository is set (only for Solaris 11)

      When installing Solaris to the non-global zone, the IPS package repository must have been set. Below is an example of checking that the IPS package repository has been set.

      # pkg publisher
      PUBLISHER                   TYPE     STATUS P LOCATION
      solaris                     origin   online F http://localhost/
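
      If no publisher is set, one can be registered with the pkg set-publisher command (the repository URL below is an example only):

      # pkg set-publisher -g http://pkg-server.example.com/ solaris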
    2. Install Solaris to the non-global zone using the zoneadm install command. Below is an example of installation of the non-global zone.

      # zoneadm -z zone-a install
      Preparing to install zone <zone-a>.
      Creating list of files to copy from the global zone.
      Copying <155078> files to the zone.
      Initializing zone product registry.
      Determining zone package initialization order.
      Preparing to initialize <1282> packages on the zone.
      Initialized <1282> packages on zone.
      Zone <zone-a> is initialized.
      Installation of <51> packages was skipped.
      The file </zone-a-system/root/var/sadm/system/logs/install_log> contains a log of the zone installation.
  2. If creating the non-global zone from an archive

    Create the non-global zone from an archive using the zoneadm install command. Below is an example for creating the non-global zone.

    # zoneadm -z zone-a install -u -a /var/tmp/zone-a-system.flar
    Log File: /var/tmp/zone-a-system.install.1987.log
    Source: /Etude/zone-a-system.flar
    Installing: This may take several minutes...

    If PRIMECLUSTER is not installed to the migration source environment, proceed to "13.2.4.4 Non-Global Zone Startup and OS Setup."

    If PRIMECLUSTER is installed in the migration source environment, uninstall PRIMECLUSTER by performing the procedure below.

    Start up the non-global zone in single-user mode.

    # zoneadm -z zone-a boot -s

    In the global zone, mount the medium of the same version of PRIMECLUSTER as the migration source. For 4.2A00 or earlier, the mount point must be a directory that can be referenced from the non-global zone.

    Log in to the non-global zone.

    # zlogin -C zone-a

    Prepare for PRIMECLUSTER deletion in the non-global zone.

    1. Check whether any files other than class.db exist under /etc/opt/FJSVsdx/sysdb.d. Also, check that class.db is empty.

      # cd /etc/opt/FJSVsdx/sysdb.d
      # ls
      class.db ...

      If any files other than class.db exist, delete them with the rm command.

      If class.db contains any entries, delete those lines (lines which start with # are comment lines).

    2. Check whether any files other than _adm and _diag exist under /dev/sfdsk.

      # cd /dev/sfdsk
      # ls
      _adm _diag ...

      If any files other than _adm and _diag exist, delete them with the rm -rf command.

    3. If PRIMECLUSTER 4.2A00 or earlier is installed at the migration source, remove the FJSVsdx (GDS Basic Software) package.

      # pkgrm FJSVsdx
    4. Back up the preremove and postremove files of the SMAWcf package.

      # cd /var/sadm/pkg/SMAWcf/install
      # cp preremove /var/tmp/preremove.orig
      # cp postremove /var/tmp/postremove.orig
    5. Update the preremove and postremove files of the SMAWcf package with the following procedure.

      # cat /var/tmp/preremove.orig | sed -e 's,$CFCONFIG -u,echo " ",' | \
      sed -e 's,/opt/SMAW/bin/cipconfig -u,echo " ",' \
      > preremove
      # cat /var/tmp/postremove.orig | sed -e 's,^module_id.*,module_id="",' | \
      sed -e 's,out=`rem_drv.*,out=" "; echo " ",' \
      > postremove

    Uninstall PRIMECLUSTER from the non-global zone. Follow the "PRIMECLUSTER Installation Guide" to uninstall the same version of PRIMECLUSTER as the migration source.
    For 4.2A00 or earlier, execute the uninstallation script in the non-global zone, using the uninstallation script on the PRIMECLUSTER medium mounted beforehand.

    If PRIMECLUSTER Wizard for Oracle, PRIMECLUSTER Wizard for NAS, and PRIMECLUSTER Wizard for NetWorker have been installed in the migration source environment, uninstall them from the non-global zone before uninstalling PRIMECLUSTER. For the procedure for uninstalling each Wizard product, see the installation guide of the same version as each Wizard product in the migration source.

    Perform the post-processing after the PRIMECLUSTER uninstallation in the non-global zone.

    Delete the backups of the preremove and postremove files created in the above procedure.

    # rm /var/tmp/preremove.orig /var/tmp/postremove.orig

    Stop the non-global zone.

    # shutdown -y -g0 -i0

    Unmount the medium of PRIMECLUSTER in the global zone.

13.2.4.4 Non-Global Zone Startup and OS Setup

Using the zoneadm boot command, start up the zone for which installation was performed. After that, acquire the console and perform Solaris OS setup (setup of node names, time zones, etc.).

# zoneadm -z zone-a boot
# zlogin -C zone-a

See

For details, see the manuals for the zoneadm command and zlogin command and also Oracle Solaris documents.

Note

Specify the host name to be set to the non-global zone using up to 11 characters, consisting of alphabetic characters and the "-" and "_" symbols.

If monitoring applications in the non-global zone, proceed to "13.2.4.5 Installation of PRIMECLUSTER to the Non-Global Zone."

If not monitoring applications in the non-global zone, proceed to "13.2.4.13 Sharing Non-Global Zone Configuration Information."

13.2.4.5 Installation of PRIMECLUSTER to the Non-Global Zone

Perform installation of PRIMECLUSTER to the non-global zone. For details, refer to the "PRIMECLUSTER Installation Guide."

13.2.4.6 Global Zone Environment Setup (After Installation of PRIMECLUSTER to the Non-Global Zone)

  1. Revising Kernel Parameters

    This task is unnecessary if one is not installing PRIMECLUSTER to the non-global zone.

    In the /etc/system file on all of the global zones that make up the cluster system, increase each kernel parameter listed in the table below by its value multiplied by the number of non-global zones to be created. Then restart the global zones.

    # shutdown -y -g0 -i6
    Table 13.17 Kernel Parameters Requiring Revision

    Kernel Parameter        Attribute  Value  Remarks
    shmsys:shminfo_shmmni   Add        30     Used by the resource database; the value is required per zone
    semsys:seminfo_semmni   Add        20     Used by the resource database; the value is required per zone

    Note

    Do not delete the following definitions written in the non-global zones' /etc/system.

    set semsys:seminfo_semmni=30

    set shmsys:shminfo_shmmni=130

    set in_sync=1
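
    As a worked example (the current values shown are hypothetical): if two non-global zones are to be created, the additions are 2 x 30 = 60 for shmsys:shminfo_shmmni and 2 x 20 = 40 for semsys:seminfo_semmni. If the existing /etc/system entries in the global zone were 100 and 50 respectively, the revised entries would be:

    set shmsys:shminfo_shmmni=160
    set semsys:seminfo_semmni=90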

  2. Registering the GDS shared class volume

    This procedure is necessary if attempting to access the GDS shared class volume from the non-global zone.

    The procedure is different for Solaris 11 and Solaris 10.

    [For Solaris 11]

    Add the GDS shared class volume created in the global zone to the non-global zone, and then restart the non-global zone.

    Execute the following commands in the global zone.

    # zonecfg -z zone-a
    zonecfg:zone-a> add device
    zonecfg:zone-a:device> set match=/dev/sfdsk/class0001/rdsk/volume0001
    zonecfg:zone-a:device> end
    zonecfg:zone-a> add device
    zonecfg:zone-a:device> set match=/dev/sfdsk/class0001/dsk/volume0001
    zonecfg:zone-a:device> end
    zonecfg:zone-a> verify
    zonecfg:zone-a> commit
    zonecfg:zone-a> exit
    # zlogin zone-a shutdown -y -g0 -i6

    (If the zone name is zone-a, the class name is class0001, and the volume name is volume0001)

    [For Solaris 10]

    Copy the special file of the GDS shared class volume under /<zonepath>/dev.

    Execute the following commands in the global zone.

    # cd /dev
    # tar cvf /var/tmp/dsk.tar sfdsk/class0001/dsk/volume0001
    # tar cvf /var/tmp/rdsk.tar sfdsk/class0001/rdsk/volume0001
    # cd /zone-a-system/dev
    # tar xvf /var/tmp/dsk.tar
    # tar xvf /var/tmp/rdsk.tar

    (If the zonepath is /zone-a-system, the class name is class0001, and the volume name is volume0001)

    Note

    The GDS volume special file copied in the procedure above is deleted, as an OS specification, when a non-global zone is detached and then attached. In that case, perform this procedure again after attaching the non-global zone.

  3. Creating the file system

    For Solaris 11, specify the file system type to be mounted in the non-global zone, and then restart the non-global zone.

    Execute the following commands in the global zone. For Solaris 10, do not execute these commands.

    # zonecfg -z zone-a
    zonecfg:zone-a> set fs-allowed=hsfs,nfs,ufs,zfs
    zonecfg:zone-a> verify
    zonecfg:zone-a> commit
    zonecfg:zone-a> exit
    # zlogin zone-a shutdown -y -g0 -i6

    (If the zone name is zone-a, and the file system type is hsfs, nfs, ufs, or zfs)

    Regardless of the OS version, create the file system on the volume in the non-global zones.

    Execute the following command in the non-global zone.

    # newfs /dev/sfdsk/class0001/rdsk/volume0001

    (If the class name is class0001, the volume name is volume0001, and the file system is UFS)

    Note

    Create the above file system only from the node that is used first.
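
    To confirm the created file system, it can be mounted temporarily in the non-global zone (the mount point /mnt is an example only):

    # mount -F ufs /dev/sfdsk/class0001/dsk/volume0001 /mnt
    # df -k /mnt
    # umount /mnt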

  4. Setting the IP address of CIP

    When performing application monitoring, set the IP address of CIP according to the example below:

    • For shared IP zone

      Set up the following in the global zone.

      # zonecfg -z zone-a
      zonecfg:zone-a> add net
      zonecfg:zone-a:net> set address=127.0.0.2 *1
      zonecfg:zone-a:net> set physical=lo0
      zonecfg:zone-a:net> end
      zonecfg:zone-a> verify
      zonecfg:zone-a> commit
      zonecfg:zone-a> exit
    • For exclusive IP (Solaris 11)

      Execute the following command in all non-global zones.

      # ipadm create-addr -T static -a local=127.0.0.2/8 lo0/cip *1
    • For exclusive IP (Solaris 10)

      Create /etc/hostname.lo0:1(*2) and enter the following.

      127.0.0.2 *1

      *1) Specify a loopback address which is not used by the system.

      *2) Use a non-existent file name.

    After finishing the above settings, add the specified address to /etc/inet/hosts.

    127.0.0.2 xxxRMS

    "xxx" is the CF node name of non-global zone, which can be checked with cftool -l command on the non-global zone.

13.2.4.7 Setup of Web-Based Admin View for the Non-Global Zone

Perform this task in the non-global zone.

Refer to "4.2.3 Initial Setup of Web-Based Admin View," and perform the setup and startup for Web-Based Admin View. When doing so, specify the same non-global zone IP addresses as those for both the primary management server and the secondary management server specified with "4.2.3.1 Initial setup of the operation management server."

(Example: If the non-global zone IP address is 10.20.30.40)

# /etc/init.d/fjsvwvcnf stop
# /etc/init.d/fjsvwvbs stop
# /etc/opt/FJSVwvbs/etc/bin/wvSetparam primary-server 10.20.30.40
# /etc/opt/FJSVwvbs/etc/bin/wvSetparam secondary-server 10.20.30.40
# /etc/opt/FJSVwvbs/etc/bin/wvCntl start
# /etc/init.d/fjsvwvcnf start

After setup, use the procedure "4.3 Starting the Web-Based Admin View Screen" to confirm that one is able to start up the GUI screen.

13.2.4.8 Initial Setup of the Non-Global Zone Cluster Resource Management Facility

After connecting to the non-global zone set up in "13.2.4.7 Setup of Web-Based Admin View for the Non-Global Zone" and starting up the Web-Based Admin View screen, refer to "5.1.3 Initial Setup of the Cluster Resource Management Facility" and "5.1.3.1 Initial Configuration Setup," and perform the initial configuration setup for the cluster resource management facility.

It is not necessary to perform CF and CIP setup, shutdown facility setup, or automatic configuration for the non-global zone.

Note

When performing the initial configuration setup for the cluster resource management facility, the message below will be output to the non-global zone console; this is not a problem for operation.

/dev/rdsk/*: No such file or directory

Also, if the initial configuration setup fails, the non-global zone kernel parameters may be insufficient. Refer to "A.5 Kernel Parameter Worksheet" and correct the kernel parameter values. After restarting the non-global zone, initialize the resource database using the clinitreset(1M) command, and then perform the initial configuration setup again.
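
A sketch of the reinitialization, assuming the standard PRIMECLUSTER command path:

# /etc/opt/FJSVcluster/bin/clinitreset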

13.2.4.9 Setup of GLS in a Non-Global Zone

This procedure is necessary only if one is using the NIC switch mode with an exclusive IP zone configuration. If setting up GLS on a non-global zone, refer to the "PRIMECLUSTER Global Link Services Configuration and Administration Guide: Redundant Line Control Function" and perform the setup for multiplexing the physical interface.

Perform this section's tasks in all of the non-global zones which are to build the cluster system.

Figure 13.14 Example of an Environment Setup When Configuring a Warm-standby Configuration Between Non-Global Zones

  1. System settings

    1-1) Define the IP address to be used and the host name to the /etc/inet/hosts file.

    10.20.30.42 zone-a0    # zone-a virtual IP(takeover IP)
    10.20.30.41 zone-a01   # zone-a physical IP
    10.20.30.43 swhub1     # primary monitoring destination HUB IP
    10.20.30.44 swhub2     # secondary monitoring destination HUB IP

    Note

    Set up the zone-a physical IP address such that it does not overlap with the physical IP addresses of other non-global zones.

    1-2) Write the host name defined above to the /etc/hostname.e1000g0 file.

    Content of /etc/hostname.e1000g0

    zone-a01

    1-3) Define the subnet mask to the /etc/inet/netmasks file.

    10.20.30.0 255.255.255.0
  2. Reboot

    Execute the following command from the global zone to reboot the non-global zone. After the reboot, execute the ifconfig command to confirm that e1000g0 is activated.

    # /usr/sbin/zlogin zone-a shutdown -y -g0 -i6
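
    For example, the confirmation can be performed from the global zone as follows (a sketch):

    # /usr/sbin/zlogin zone-a /usr/sbin/ifconfig e1000g0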
  3. Creating the virtual interface

    # /opt/FJSVhanet/usr/sbin/hanetconfig create -n sha0 -m d -i 10.20.30.42 -e 10.20.30.41 -t e1000g0,e1000g1

    Note

    Make sure that the physical IP address specified with the -e option matches the physical IP address set up in /etc/hostname.e1000g0.

  4. Setup of standby patrol function

    # /opt/FJSVhanet/usr/sbin/hanetconfig create -n sha1 -m p -t sha0

    Information

    For GLS 4.3A10 or later, the -a option can be omitted. In that case, the settings below are applied automatically.

    • In an environment where the MAC addresses of the active NIC and the standby NIC are the same:

      a local MAC address based on the global address

    • In an environment where the MAC addresses of the active NIC and the standby NIC are different:

      0:0:0:0:0:0

  5. Setup of HUB monitoring function

    # /opt/FJSVhanet/usr/sbin/hanetpoll create -n sha0 -p 10.20.30.43,10.20.30.44 -b off
  6. Creating the takeover virtual interface

    # /opt/FJSVhanet/usr/sbin/hanethvrsc create -n sha0

    Note

    This setting is not necessary for single-node cluster operations.

  7. Starting HUB monitoring

    # /opt/FJSVhanet/usr/sbin/hanetpoll on
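
    To check the resulting GLS configuration and the monitoring status, the following display commands can be used (a sketch):

    # /opt/FJSVhanet/usr/sbin/dsphanet
    # /opt/FJSVhanet/usr/sbin/dsppoll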

13.2.4.10 Installing Middleware Products to Non-Global Zones

For the installation procedure and points of caution for each middleware product, refer to the respective middleware product manual.

13.2.4.11 Setup of Non-Global Zone RMS

Add the following lines to the "/opt/SMAW/SMAWRrms/bin/hvenv.local" file. If the "/opt/SMAW/SMAWRrms/bin/hvenv.local" file does not exist, create it (with file access permissions 644) and add the following lines.

You can check the CF node name with the cftool -n command.

Example

When the CF node name is "zone-a"

# cftool -n
Node    Number  State  Os       Cpu
zone-a  1       UP     Solaris  Sparc

13.2.4.12 Setup of Non-Global Zone Cluster Applications

This section explains the procedure for creating cluster applications on the non-global zone.

Perform the following procedure taking into account the cluster resources that are to be set up.

No.  Task Overview                                       Procedure necessary to configuration
                                                         1    2    3    4    5    6    7
1    Setup of the Cmdline resource                       A    A    A    A    B    B    A
2    Setup of the Oracle resource                        A    A    A    A    B    B    A
3    Setup of the NetWorker resource                     A    A    A    A    B    B    A
4    Setup of the Netapp resource                        A    A    A    A    B    B    A
5    Setup of the state transition procedure resources   A    A    A    A    B    B    A
6    Setup of the Gls resource                           A    B    A    B    B    B    B
7    Setup of the Fsystem resource                       A    A    B    B    B    B    A
8    Creation of the cluster applications                A    A    A    A    B    B    A

A: Perform as required, B: Not required

  1. Setup of the Cmdline resource

    For the method for setting up the Cmdline resource, refer to "6.7.1.1 Creating Cmdline Resources."

  2. Setup of the Oracle resource

    Refer to the "PRIMECLUSTER Wizard for Oracle Configuration and Administration Guide" and perform the setup of the cluster resource.

  3. Setup of the NetWorker resource

    Refer to the "PRIMECLUSTER Wizard for NetWorker 4.2 Configuration and Administration Guide" and perform the setup of the cluster resource.

  4. Setup of the Netapp resource

    Refer to the "PRIMECLUSTER Wizard for NAS Configuration and Administration Guide" and perform the setup of the cluster resource.

  5. Setup of the state transition procedure resources

    Refer to the middleware manual and set up the state transition procedure resources. As for the availability of middleware products and PRIMECLUSTER in combination in a non-global zone, contact field engineers.

  6. Setup of the Gls resource

    This procedure is necessary only if one is using an exclusive IP zone configuration.

    Refer to "6.7.1.4 Creating Gls Resources" and perform the setup of the Gls resource.

  7. Setup of the Fsystem resource

    This procedure is necessary if using a switching file system with a non-global zone.

    Refer to "6.7.1.2 Creating Fsystem Resources" and perform the setup of the Fsystem resource.

    Note that you cannot set ZFS for Fsystem resources in non-global zones.

  8. Creation of the cluster applications

    Create the cluster applications on non-global zones.

    For the method for creating the cluster applications, follow "6.7.2.1 Creating Standby Cluster Applications." However, there are the following differences in procedure:

    • Cluster application attributes

      • Set AutoStartUp, AutoSwitchOver, and HaltFlag to No. However, when the global zone is operated on a single-node cluster, set AutoStartUp to Yes.

      • For a warm-standby configuration in which the cluster applications on the standby system's non-global zone are to be put into Standby mode, set ClearFaultRequest to StandbyTransitions. Under all other circumstances, set it to No.

      • Set Shutdown Priority to NONE.

      • Set Online Priority to 0.

13.2.4.13 Sharing Non-Global Zone Configuration Information

If using cold-standby, stop the non-global zones in the operational system nodes.

# zlogin zone-a shutdown -i0 -g0 -y

If sharing non-global zone images in cold-standby operation, make the information for the non-global zones created so far available from the standby system's nodes as well.

Export the non-global zone configuration information on the operational system node.

# zonecfg -z zone-a export -f /var/tmp/zone-a.exp

Copy the output file (in the example above /var/tmp/zone-a.exp) to the standby system nodes.

Import the non-global zone into the standby system nodes.

# zonecfg -z zone-a -f /var/tmp/zone-a.exp
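
To confirm the import, check that the zone appears in the configured state on the standby system nodes, for example:

# zoneadm list -cv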

Note

Since it is not necessary to access the non-global zone's file system when performing the import, do not perform an operation that brings the cluster application Online on the standby system nodes. Also, do not perform an operation that attaches or starts up the non-global zone.