ServerView Resource Orchestrator Cloud Edition V3.0.0 Setup Guide

E.3.4 Setup

The setup procedure when using Hyper-V as server virtualization software is as follows:

  1. Register Resources

    1. Register VM Management Software

When registering VM management software, the CSV files created in advance during pre-setup preparations are automatically registered in Resource Orchestrator as virtual storage resources.

When configuring L-Server alive monitoring, specify a domain user belonging to the Administrators group of all VM hosts as the login account for the VM management software registered in Resource Orchestrator.

      For details on how to register VM management software, refer to "2.2 Registering VM Management Software" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

    2. Register managed servers

      1. Register Chassis

        Refer to "2.4.1 Registering Chassis" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

      2. Register Managed Servers (within Chassis)

        Refer to "2.4.2 Registering Blade Servers" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

      3. Register LAN Switch Blades

        Refer to "2.4.3 Registering LAN Switch Blades" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

    3. Preparations for networks (for manual network configuration)

      Preparation is necessary in the following cases.
      For details, refer to "Preparations" of "Connections with Virtual Networks Created in Advance".

      • When not performing network redundancy for L-Servers with blade servers

      • When using servers other than blade servers

    4. Network resources

      To register a network resource, specify a network pool when creating the network resource.

By creating network resources in advance, when the NIC and network resources are connected during L-Server creation, settings matching the network resource definition will be registered automatically.

      For details on automatic configuration of network resources, refer to "Automatic Network Configuration".

  2. Register Resources in Resource Pools

    1. Register VM host resources

      1. In the ROR console orchestration tree, right-click the target VM pool, and select [Register Resources] from the popup menu.

        The [Register Resources] dialog is displayed.

      2. Select the VM host to register.

      3. Click <OK>.

    2. Register virtual storage resources

      1. In the ROR console orchestration tree, right-click the target storage pool, and select [Register Resources] from the popup menu.

        The [Register Resources] dialog is displayed.

      2. Select the virtual storage resource to register.

      3. Click <OK>.

    3. Register network resources

If the NIC and network resources are connected when an L-Server is created, a VLAN ID is automatically configured for the NIC of the VM guest, and the NIC is connected to the virtual network.
      For details, refer to "Automatic Network Configuration".

      1. In the ROR console orchestration tree, right-click the target network pool, and select [Create Resource] from the popup menu.

        The [Create a network resource] dialog is displayed.

      2. Enter the items necessary for network resources.
        For details, refer to "7.3 Network Resources" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  3. Create an L-Server Template

    1. Export an L-Server template

      Refer to "8.2.1 Exporting a Template" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

    2. Edit an L-Server template

      Refer to "8.2.2 Editing a Template" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

    3. Import an L-Server template

      Refer to "8.2.3 Importing a Template" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".


Automatic Network Configuration

Network settings on Hyper-V differ depending on hardware (such as blade servers and rack mount servers), and whether network redundancy will be performed for L-Servers.

Automatic Network Configuration for Blade Servers

If the NIC and network resources are connected when an L-Server is created, the following settings will be registered automatically for the VM host that the L-Server will operate on.

In environments using the clustering function of VM management software, in order to enable the migration of VM guests and operation using the HA function, settings for LAN switch blades and virtual switches are performed automatically for all VM hosts comprising the cluster.

When not configuring the tagged VLAN automatically for the uplink port of network resources, use the ROR console to configure the VLAN settings of uplink ports. Right-click the LAN switch in the server resource tree, and select [Modify]-[Network Settings] from the popup menu.

For details, refer to "2.4.4 Configuring VLANs on LAN Switch Blades" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

Note

If VM hosts are added to the cluster after an L-Server has been created, Resource Orchestrator does not perform network settings for the added hosts automatically.

For the LAN switch blades and virtual networks of the additional VM hosts, perform the same settings as for the existing VM hosts in the cluster configuration.

Virtual network definition files for blade servers

  • When configuring the network for blade servers automatically

    It is not necessary to create a virtual network definition file.

  • When a virtual network definition file exists and no virtual network with a VLAN ID is defined

    The virtual network is automatically configured.

  • When a virtual network with a VLAN ID is defined, using a virtual network definition file

    It is necessary that a virtual network with the VLAN ID be manually configured beforehand.

    For details, refer to "Manual Network Configuration".


Default Blade Server Configuration to Support Automation of Network Configuration in Resource Orchestrator

The following tables show the default blade server configurations that support automation of network configuration in Resource Orchestrator (server blades, specification of uplink ports for network resources, and the correspondence between LAN switch blades and physical network adapter numbers). When there are no server NIC definitions, network auto-configuration creates a virtual network using the physical network adapter selected according to these tables.

Table E.11 Default Blade Server Configuration for Network Auto-Configuration (for PRIMERGY BX900 S1 Chassis)

Server Blade    Specification of Uplink Port        LAN Switch Blade to Use    Physical Network
                (Location of LAN Switch Blade)                                 Adapter Number (*4)
--------------  ----------------------------------  -------------------------  -------------------
BX920 S1        CB1 and CB2, or no specification    PG-SW111, PG-SW112         3, 4
BX920 S2        for uplink port                     PG-SW109 (*1), PG-SW201    1, 2
BX922 S2        CB3 and CB4                         PG-SW111, PG-SW112,        5, 6
                                                    PG-SW109, PG-SW201
                CB5 and CB6                         PG-SW109                   9, 10
                CB7 and CB8                         PG-SW111, PG-SW112         11, 12
                                                    PG-SW201                   9, 10
BX924 S2        CB1 and CB2, or no specification    PG-SW111, PG-SW112,        1, 2
                for uplink port                     PG-SW109, PG-SW201
                CB3 and CB4                         PG-SW111, PG-SW112,        3, 4
                                                    PG-SW109, PG-SW201
                CB5 and CB6                         PG-SW109                   7, 8
                CB7 and CB8                         PG-SW111, PG-SW112         9, 10
                                                    PG-SW201                   7, 8
BX960 S1        CB1 and CB2, or no specification    PG-SW111, PG-SW112,        11, 12
                for uplink port                     PG-SW109, PG-SW201
                CB3 and CB4 (*2)                    PG-SW111, PG-SW112,        3, 4
                                                    PG-SW109, PG-SW201
                CB5 and CB6 (*3)                    PG-SW109                   7, 8
                CB7 and CB8 (*3)                    PG-SW111, PG-SW112         9, 10
                                                    PG-SW201                   7, 8

*1: When installing a PG-SW109 on CB1 or CB2, set the transmission speed of the down link port of the PG-SW109 to 1 Gbps. For details on how to configure this setting, refer to the corresponding hardware manual.
*2: A LAN expansion card is mounted in expansion slot 1.
*3: A LAN expansion card is mounted in expansion slot 2.
*4: A virtual network is automatically configured on a virtual interface configured beforehand in a redundant configuration using the individual physical network adapters. Configure the virtual interface on the managed server beforehand, using one of the following products: Intel PROSet or PRIMECLUSTER GLS.

Table E.12 Default Blade Server Configuration for Network Auto-Configuration (for PRIMERGY BX400 S1 Chassis)

Server Blade    Specification of Uplink Port        LAN Switch Blade to Use    Physical Network
                (Location of LAN Switch Blade)                                 Adapter Number (*3)
--------------  ----------------------------------  -------------------------  -------------------
BX920 S2        CB1 and CB2 (*1), or no             PG-SW111, PG-SW112         3, 7
BX922 S2        specification for uplink port       PG-SW109 (*2), PG-SW201    2, 6
                CB3 and CB4                         PG-SW111, PG-SW112,        9, 10
                                                    PG-SW109, PG-SW201
BX924 S2        CB1 and CB2 (*1), or no             PG-SW111, PG-SW112,        2, 4
                specification for uplink port       PG-SW109, PG-SW201
                CB3 and CB4                         PG-SW111, PG-SW112,        7, 8
                                                    PG-SW109, PG-SW201

*1: The same LAN switch blade model should be mounted in CB1 and CB2.
*2: When installing a PG-SW109 on CB1 or CB2, set the transmission speed of the down link port of the PG-SW109 to 1 Gbps. For details on how to configure this setting, refer to the corresponding hardware manual.
*3: A virtual network is automatically configured on a virtual interface configured beforehand in a redundant configuration using the individual physical network adapters. Configure the virtual interface on the managed server beforehand, using one of the following products: Intel PROSet or PRIMECLUSTER GLS.

Table E.13 Default Blade Server Configuration for Network Auto-Configuration (for PRIMERGY BX600 S3 Chassis)

Server Blade          Specification of Uplink Port        LAN Switch Blade to Use    Physical Network
                      (Location of LAN Switch Blade)                                 Adapter Number (*1)
--------------------  ----------------------------------  -------------------------  -------------------
BX600 series servers  NET1 and NET2, or no specification  PG-SW107                   3, 4
                      for uplink port
                      NET3 and NET4                       PG-SW104                   7, 8

*1: A virtual network is automatically configured on a virtual interface configured beforehand in a redundant configuration using the individual physical network adapters. Configure the virtual interface on the managed server beforehand, using one of the following products: Intel PROSet or PRIMECLUSTER GLS.

The numbers of physical network adapters given above can be checked on the details window of the LAN switch blade.

The MAC address (IP address) information of managed servers can be confirmed in "Hardware Maintenance" on the [Resource Details] tab.

Configure the Intel PROSet or PRIMECLUSTER GLS settings on the managed server in advance, using this MAC address information.

When the LAN switch blade is in IBP mode, create a virtual network on the virtual interface configured beforehand in the redundant configuration using the same physical network adapter as in the case of "no specification for uplink port" in the list above.

Performing the following procedure enables automatic network configuration using an arbitrary physical NIC.

  1. Create a server NIC definition and reflect it on Resource Orchestrator.

    1. Create a Server NIC Definition

      Edit the template file and create a server NIC definition.

    2. Reflect the Server NIC Definition

      Execute the rcxadm nicdefctl commit command to reflect the physical NIC configuration specified in the server NIC definition file on Resource Orchestrator.

    3. Confirm the Reflected Server NIC Definition

      Execute the rcxadm nicdefctl show command and confirm the server NIC definition has been reflected on Resource Orchestrator.

  2. Create the XML file that defines network resources, and then create the network resources.

    1. Create the XML File that Defines Network Resources

For "PhysicalLANSegment" in the XML file that defines network resources, specify the physical LAN segment name that was specified for "PhysicalLANSegment name" in the server NIC definition file.

      In this case, specify auto="true" in the Network element.

    2. Create Network Resources

Execute the rcxadm network create command, specifying the XML file created in step 1.
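Example

The following is a minimal sketch of this procedure. The resource name "network1", the physical LAN segment name "segment1", the VLAN ID, and the file name are hypothetical, and the Vlanid element and the -file option are assumptions here (any further required command options are omitted); refer to "2.5 Network Resources", "1.7.16 rcxadm nicdefctl", and "1.3.5 rcxadm network" of the "Reference Guide (Resource Management) CE" for the authoritative XML schema and command syntax.

Reflect the server NIC definition, and confirm it:

>rcxadm nicdefctl commit
>rcxadm nicdefctl show

network1.xml, referencing the physical LAN segment from the server NIC definition:

<Network name="network1" auto="true">
  <Vlanid>10</Vlanid>
  <PhysicalLANSegment>segment1</PhysicalLANSegment>
</Network>

Create the network resource from the XML file:

>rcxadm network create -file network1.xml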

See

  • For details on the server NIC definitions, refer to "2.11 Server NIC Definition" of the "Reference Guide (Resource Management) CE".

  • For details on the rcxadm nicdefctl command, refer to "1.7.16 rcxadm nicdefctl" of the "Reference Guide (Resource Management) CE".

  • For details on how to define network resources, refer to "2.5 Network Resources" in the "Reference Guide (Resource Management) CE".

  • For details on the rcxadm network command, refer to "1.3.5 rcxadm network" of the "Reference Guide (Resource Management) CE".

Note

  • If there are VM hosts that meet both of the following conditions, automation of network configuration is not supported in the chassis:

    • Network resources with uplink ports specified are used

    • There is a virtual network that uses a NIC in a VM host configuration pattern that differs from the ones supporting automation of network configuration

  • If there are VM hosts that meet both of the following conditions, automation of network configuration is not supported:

    • Network resources with no uplink port specified are used

    • There is a virtual network that uses a NIC in a VM host configuration pattern that differs from the ones supporting automation of network configuration

The following diagrams (Figure E.14 and Figure E.15) show the default blade server configurations for the configuration examples below, which use a PRIMERGY BX900 S1 chassis.

Table E.14 Configuration Example 1

  Server blades: BX920 S2
  Specification of uplink port: CB1 and CB2
  LAN switch blade to use: PG-SW112
  Virtual interface: PRIMECLUSTER GLS

Table E.15 Configuration Example 2

  Server blades: BX920 S2
  Specification of uplink port: Both "no specification for uplink port" and "CB3 and CB4" are specified
  LAN switch blade to use: PG-SW109
  Virtual interface: PRIMECLUSTER GLS

Figure E.14 Blade Server Diagram of Configuration Example 1

Figure E.15 Blade Server Diagram of Configuration Example 2


Manual Network Configuration

In the following cases, configure the network manually.

When a virtual network has already been manually configured and server virtualization software other than Hyper-V is used with the same manager, set a name that differs from those used by the virtual switches, virtual networks, and virtual bridges on the other server virtualization software.


Configuring Network Resources when Using a Physical Network Adapter Number that Differs from Configuration Patterns of VM Hosts which Support Automation of Network Configuration

When using a physical network adapter number that is different from the one used in the configuration patterns mentioned above, create networks using the following procedure.

  1. Create a virtual network with the same name (including upper and lower case characters) for all VM hosts comprising the cluster.

    This enables migration of VM guests between VM hosts. When using System Center 2012 Virtual Machine Manager as VM management software, only virtual networks of the "External" type can be used as the connection destination for VM guests.
    For details on how to create networks, refer to the SCVMM help.

  2. Configure LAN switches to enable communication using the tagged VLAN between virtual networks using the same name.

    Right-click the target LAN switch in the server resource tree on the ROR console, and select [Modify]-[Network Settings] from the popup menu.

    The [VLAN Settings] dialog is displayed.

  3. Configure a VLAN.

  4. Define supported virtual networks and VLAN IDs in the following definition file:

    Installation_folder\Manager\etc\customize_data\vnetwork_hyperv.rcxprop

    For details on definition file format, refer to "File Format for Virtual Network Definitions".

  5. Create Network Resources

    • From the GUI:

      1. In the [Create a network resource] dialog, enter the VLAN ID that was specified in steps 2 and 4, uncheck the "Use configured virtual switches." checkbox, and then create the network resource.

    • From the Command-line:

      1. Create the XML file that defines network resources.

        Define the VLAN ID specified in steps 2 to 4 in the XML file.

        In this case, specify auto="false" in the Network element.

      2. To create the network resource, execute the rcxadm network create command, specifying the XML file created in step 1.

        The network resources are created.
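
Example

A minimal command-line sketch of steps 4 and 5, using hypothetical values: a virtual network "VNetwork1" created manually on all VM hosts with VLAN ID 21, a network resource named "network-manual", and the file name network-manual.xml. The Vlanid element and the -file option are assumptions here; refer to "2.5 Network Resources" and "1.3.5 rcxadm network" of the "Reference Guide (Resource Management) CE" for the authoritative format.

Line added to vnetwork_hyperv.rcxprop (step 4):

"VNetwork1"=21

network-manual.xml (step 5):

<Network name="network-manual" auto="false">
  <Vlanid>21</Vlanid>
</Network>

>rcxadm network create -file network-manual.xml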

See

  • For details on how to configure VLAN settings of LAN switch blade uplink ports, refer to "2.4.4 Configuring VLANs on LAN Switch Blades" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • For details on the Network element and creation of XML files that define network resources, refer to "2.5 Network Resources" of the "Reference Guide (Resource Management) CE".

  • For details on the rcxadm network command, refer to "1.3.5 rcxadm network" of the "Reference Guide (Resource Management) CE".

Figure E.16 Network Diagram


Connections with Virtual Networks Created in Advance

When not performing network redundancy for L-Servers with blade servers, and in environments where blade servers are not used, the only function provided for virtual networks created in advance is configuring IP addresses and VLAN IDs on VM guest NICs and connecting the NICs of VM guests. Perform virtual network settings manually in advance.

Additionally, the following settings must be performed in advance.

Preparations

  1. Virtual network creation

    Create a virtual network with the same name (including upper and lower case characters) for all VM hosts comprising the cluster.
    This enables migration of VM guests between VM hosts. When using System Center 2012 Virtual Machine Manager as VM management software, only virtual networks of the "External" type can be used as the connection destination for VM guests.
    For details on how to create networks, refer to the SCVMM help.

  2. Configure the Virtual Network Communication

    Configure LAN switches to enable communication using the tagged VLAN between virtual networks using the same name.

    1. Right-click the target LAN switch in the server resource tree on the ROR console, and select [Modify]-[Network Settings] from the popup menu.

      The [VLAN Settings] dialog is displayed.

    2. Configure a VLAN.

  3. Define the Supported Virtual Network and VLAN ID

    Supported virtual networks and VLAN IDs are defined in the following definition file of Resource Orchestrator:

    Installation_folder\Manager\etc\customize_data\vnetwork_hyperv.rcxprop

    For details on definition file format, refer to "File Format for Virtual Network Definitions".

  4. Create Network Resources

    • From the GUI:

      In the [Create a network resource] dialog, enter the VLAN ID that was specified in steps 2 and 3, uncheck the "Use configured virtual switches." checkbox, and then create the network resource.

    • From the Command-line:

      1. Create the XML file that defines network resources.

        Define the VLAN ID specified in steps 2 and 3 in the XML file.
        In this case, specify auto="false" in the Network element.

      2. To create the network resource, execute the rcxadm network create command, specifying the XML file created in step 1.

        The network resources are created.

See

  • For details on how to configure VLAN settings of LAN switch blade uplink ports, refer to "2.4.4 Configuring VLANs on LAN Switch Blades" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • For details on the rcxadm network command, refer to "1.3.5 rcxadm network" of the "Reference Guide (Resource Management) CE".


Virtual NIC Automatic Configuration for VM Guests

A VLAN is configured on the virtual NIC of the VM guest, and the NIC is connected to the virtual network.

If an image is specified, the IP address is automatically configured. For details on how to configure IP addresses automatically, refer to "Network (NIC)" of "10.3.1 [General] Tab" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

For rack mount or tower servers, an example of virtual NIC configuration and connection with virtual networks using network resources is given below:

Figure E.17 Virtual NIC Configuration and Virtual Network Connection Using Network Resources for Rack Mount or Tower Servers


File Format for Virtual Network Definitions

Describe each entry in the virtual network definition file on a single line, in the following format:

"Virtual Network Name"=VLAN ID[,VLAN ID...]

For the VLAN ID, a value from 1 to 4094 can be specified. When specifying a sequence of numbers, use a hyphen ("-") such as in "1-4094".

Example

"Network A"=10
"Network B"=21,22,23
"Network C"=100-200,300-400,500

Blank spaces before and after equal signs ("=") and commas (",") are ignored.

Enter the virtual network name exactly, as entries are case-sensitive.
Save the file using the UTF-8 character code.

When there are multiple lines with the same virtual network name, all specified lines are valid.

When the same VLAN ID is included in a line with a different virtual network name, the first occurrence in the file is valid and the lines after it are ignored.

Example

"Network D"=11
"Network D"=12 (*1)
"Network E"=11,15 (*2)

*1: Same as when "Network D"=11,12.
*2: 11 is ignored.

An error occurs during L-Server creation if the definition of the VLAN ID of the network resource connected to the NIC cannot be found.


Configuration for MAC Address Pools

When creating an L-Server connected with a NIC coordinated with System Center 2012 Virtual Machine Manager, use a MAC address pool of System Center 2012 Virtual Machine Manager to allocate MAC addresses to the NIC.

When the default MAC address pool on System Center 2012 Virtual Machine Manager has not been changed, or when only one MAC address pool exists for a Hyper-V environment, use that MAC address pool.

When there is no MAC address pool for a Hyper-V environment on System Center 2012 Virtual Machine Manager, create a MAC address pool to allocate MAC addresses using System Center 2012 Virtual Machine Manager.

When there are multiple MAC address pools on System Center 2012 Virtual Machine Manager, use the following procedure to define the MAC address pool to use.

  1. Settings when using tenants in Resource Orchestrator

When creating multiple host groups on System Center 2012 Virtual Machine Manager, use the following procedure to create a tenant configuration that matches the host group configuration.

    1. Create the same number of tenants as the number of host groups in Resource Orchestrator.

    2. Register the VM host located in each host group in the local pool of the corresponding tenant.

  2. Definition of a MAC address pool using an L-Server

    Define the MAC address pool to use when creating an L-Server in the MAC address pool definition file in Resource Orchestrator.

    When dividing MAC address pools for each tenant created in 1., define the MAC address pool used by each tenant in the MAC address pool definition file.

    When creating multiple host groups in System Center 2012 Virtual Machine Manager, create a definition for each tenant, and specify the MAC address pool allocated to the host group.

    For details on definition file format for MAC address pool, refer to "Definition File Format for MAC Address Pools".


Definition File Format for MAC Address Pools

Location of the Definition File

[Windows]
Installation_folder\Manager\etc\customize_data

Definition File Name

Definition files can be divided into definitions that are used for each tenant and definitions that are common to the system.

If both a definition file for each tenant and a common definition file exist on the system, priority is given to the definitions indicated in the definition file for each tenant.

  • By Tenant

    scvmm_mac_pool_tenant_name.rcxprop

  • Common on System

    scvmm_mac_pool.rcxprop

Character Code

UTF-8

Line Break Code

CR/LF

Definition Configuration File Format

Key = Value

Table E.16 Definition File Items

Key

  • When one SCVMM is registered

    all

  • When multiple SCVMMs are registered

    scvmm[SCVMM registered name]

Value

  Specify the name of a MAC address pool created in System Center 2012 Virtual Machine Manager.

  When the MAC address pool name to specify includes blank spaces, enclose the MAC address pool name in double quotes ( " ).

  For details on the character types available for MAC address pools, refer to the SCVMM manual.

Example

  • When only one SCVMM is registered, or when multiple SCVMMs are registered and the MAC address pool names used in each SCVMM are the same

    all = "MAC pool A"

  • When multiple SCVMMs are registered, and different MAC address pool names are used for each SCVMM

    scvmm[scvmm1] = "MAC pool A"
    scvmm[scvmm2] = "MAC pool B"

Note

  • When the VM management software used is System Center Virtual Machine Manager 2008 R2, the definition files are ignored even if they exist.

  • If you edit and save a UTF-8 text file using Windows Notepad, the Byte Order Mark (BOM) is stored in the first three bytes of the file, and the information specified on the first line of the file will not be analyzed correctly. When using Notepad, specify the information from the second line.

  • Blank spaces and tabs at the beginning and end of a line, and before and after equal signs ("="), are ignored.

  • When a line starts with "#", that line is regarded as a comment.

  • Lines that do not follow the format above are ignored.

  • Only lower case is valid for "all", "scvmm[registration_name_of_SCVMM]", and "scvmm" used for keys. If upper case characters are included, the string is ignored.

  • When the same key appears on multiple lines, the definition described on the last line is valid.

  • When both "all" key and "scvmm[registration_name_of_SCVMM]" key exist together, priority is given to the definitions for "scvmm[registration_name_of_SCVMM]" .

  • Changes to the definition file are reflected in Resource Orchestrator without restarting the manager.


L-Server Creation

Use the following procedure to create L-Servers:

Note

When the created L-Server is not displayed in the orchestration tree

If the following SCVMM functions are used on the VM of a created L-Server, the L-Server will no longer be recognized or displayed in the orchestration tree, and operation of the L-Server will become unavailable.

  • Saving in the library, deploying from the library

  • New template

  • Transfer the template to a cluster node that is not managed by Resource Orchestrator

When "copying" is performed, the copied VM is not recognized as an L-Server.

Information

Information created during L-Server creation

  • VM guests created as L-Servers have the following configuration:

    Disk and DVD

      First disk (system volume): connected to the primary channel (0) of the IDE device

      Second and subsequent disks: connected to a SCSI adapter as data disks (*1)

      DVD drive: connected to the secondary channel (0) of the IDE device

    *1: Cannot be used on guest OS's without the integration services. Only boot disks connected to the IDE adapter can be used.

    Virtual network adapter

    When a guest OS supported by Hyper-V is specified, a converged network adapter will be used. When a different OS is selected, an emulated network adapter will be added.

    For details on the guest OS's supported by Hyper-V, refer to the following Microsoft web site.

    Microsoft download web site

    URL: http://www.microsoft.com/windowsserver2008/en/us/hyperv-supported-guest-os.aspx (As of February 2012)

    CPU type

    "1.00GHz Pentium III Xeon" or "3.60 GHz Xeon (2 MB L2 cache)" (the SCVMM default value) is specified. The CPU type is used for internal operations of SCVMM, and it does not indicate the CPU performance.

    It is also different from the information displayed for the computer information of the guest OS.

  • For cloning images, only system volumes are collected and deployed.

    When registering a template created using SCVMM in the image pool of Resource Orchestrator, use a template created from a VM guest that has the system volume (a disk connected to primary channel (0) of the IDE device).

    In other configurations, deploying using Resource Orchestrator will create VM guests without system volumes.


Manual OS Installation

After configuring the DVD connection settings from the SCVMM management window, manually install an OS.

To use a guest OS supported by Microsoft on Hyper-V, it is necessary to install a virtual guest service on the guest OS.

For details on virtual guest service installation, refer to the Help of SCVMM.


Collecting Cloning Images

After installing an OS, stop the target L-Server, and then use the following procedure to collect a cloning image:

  1. Right-click the target L-Server in the orchestration tree, and select [Cloning]-[Collect] from the popup menu.

  2. Click <OK>.

A given cloning image (identified by its name attribute) can be managed by image version.

If a cloning image is created using VM management software, it can be used as is.

Note

  • When an L-Server is created with a Windows image specified, Sysprep, provided by Microsoft, is executed when deploying the image, to re-configure the properties unique to the server. Executing Sysprep resets the user information and OS setting information.
    For details on Sysprep, refer to the information provided by Microsoft.

  • If stopping or restarting of the manager is performed during execution of Sysprep, the operation being executed will be performed after the manager is started.
    Until the process being executed is completed, do not operate the target resource.

  • When using MAK license authentication for activation of Windows Server 2008 image OS, Sysprep can be executed a maximum of three times. Since Sysprep is executed when creating L-Server with images specified or when collecting cloning images, collection of cloning images and creation of L-Servers with images specified cannot be performed more than four times. Therefore, it is recommended not to collect cloning images from L-Servers that have had cloning images deployed, but to collect them from a dedicated master server. The number is also included in the count when Sysprep is performed when a template is created using SCVMM.

  • Creation of L-Servers by specifying cloning images on which Windows 2000 Server and Windows 2000 Advanced Server are installed is not supported.

  • When using System Center 2012 Virtual Machine Manager as VM management software, only cloning images with high-availability attributes can be used in Resource Orchestrator.

Point

Images are stored in the SCVMM library.

Specify a library that has sufficient disk space available to store the collected images.

When "Automatic selection" is specified in the [Collect a Cloning Image] dialog, selection is made from libraries registered with SCVMM, but collection of images may fail, as the available disk space of libraries is not managed by SCVMM.

Resource Orchestrator uses SCVMM templates for collection of images.

When collecting images from L-Servers, a template is created using a name with the version number added to the image name. When retrieving the template created by the user as an image, the template is treated as an image.

In order to collect images of L-Servers, a work area equal to the size of the disks of the target L-Server (the system volume, all data disks, snapshots, and the configuration definition file) is necessary. This work area is released when collection of images is complete.

When collecting images, data disks other than the system volume are deleted.

In Resource Orchestrator, the virtual hard disk of the primary channel (0) of the IDE device is managed as the system volume.

A DVD drive is added to the secondary channel (0) of the IDE device even if there is no DVD drive in the image; any DVD drives other than this drive are deleted.

Images cannot be collected while snapshots exist. Collect images after deleting snapshots. When checkpoints are created from the SCVMM management console or snapshots are created from Hyper-V Manager, collection of images will fail.

When retrieving SCVMM templates created by users with SCVMM, manage them as described below.

Access Control Configuration File of Image Storage Location

By specifying unavailable library shared path names in the access control configuration file of the image storage destination in advance, cloning image storage destinations can be controlled based on user groups.

Storage Location of the Configuration File

[Windows]
Installation_folder\Manager\etc\customize_data

Configuration File Name

Configuration files can be divided into definitions that are used for each user group and definitions that are common to the system. When both types of files exist, the limitations of both are valid.

  • For User Groups

library_share_user_group_name_deny.conf

  • Common on System

    library_share_deny.conf

Configuration File Format

In the configuration file, library shared path names are entered on each line.

Library_shared_path_name

Example

An example configuration file is indicated below:

\\rcxvmm1.rcxvmmshv.local\MSSCVMMLibrary
\\rcxvmm2.rcxvmmshv.local\lib


Deleting Cloning Images

Use the following procedure to delete cloning images:

  1. Select the target image pool on the orchestration tree.

    The [Resource List] tab is displayed.

  2. Right-click the cloning image to be deleted, and select [Delete] from the popup menu.

  3. Click <OK>.

    The cloning image is deleted.

When SCVMM template creation requirements are not met, or configuration changes are performed outside Resource Orchestrator, collection of images may fail.

When deleting cloning images, the corresponding templates in the SCVMM library are deleted.

When these templates are deleted, only the template definition files are removed, so related files (such as .vhd and .vfd) will remain in the SCVMM library.

When these related files are unnecessary, delete them individually from SCVMM.

Information

By creating the following configuration file beforehand, related files that have no dependencies on other templates can be deleted at the same time as the cloning image.

Storage Location of the Configuration File

[Windows]
Installation_folder\Manager\etc\vm

Configuration File Name

delete_image_all_files_scvmm

Configuration File Format

It is not necessary to describe content in the configuration file.
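
Example

Assuming a hypothetical installation folder of C:\Fujitsu\ROR, the empty configuration file can be created from a command prompt as follows:

>type nul > "C:\Fujitsu\ROR\Manager\etc\vm\delete_image_all_files_scvmm"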

As in the case of deletion of related files from the SCVMM management console, only the related files are deleted from the SCVMM library. The folder where the related files are stored will remain.


[OS] Tab Configuration

Enter the parameters to be set for the OS when creating the L-Server. These settings are valid only if an image is specified in the [General] tab.

The settings are applied the first time the L-Server is started. If an image name is not specified, it is not necessary to enter these items.

Table E.17 List of Settings

For each item, the necessity of entry and the value used when the entry is omitted apply to Windows.

Host name/Computer name (entry: Possible; value when omitted: L-Server Name)

  Enter the host name or computer name.
  Enter a string of between 1 and 15 alphanumeric characters or hyphens ("-"), beginning with an alphanumeric character. The string cannot be composed solely of numbers.
  If underscores ("_") or periods (".") are used in an L-Server name, they will be replaced with hyphens ("-"), because these characters cannot be used for host names or computer names.
  If the basic information is not specified, the L-Server name is converted and set as indicated above.

Domain name (entry: Possible; value when omitted: WORKGROUP (*1))

  Enter the workgroup name. Settings for participation in a domain cannot be made.
  Enter between 1 and 255 alphanumeric characters, hyphens ("-"), or periods ("."), using an alphabetic character for the first character.

DNS search path (entry: Not Required; value when omitted: -)

  Enter a list of domain names to use for DNS searching, using between 1 and 32,767 characters. The same characters as for the domain name can be specified.
  To specify multiple domain names, use a blank space as the separator character.

Full name (entry: Possible; value when omitted: WORKNAME (*1))

  Enter the Windows full name using between 1 and 50 characters.
  By default, the value defined in the OS property definition file is entered.
  When the OS type is Windows Server 2008, Windows Server 2008 R2, Windows 7, or Windows Vista, a full name cannot be set for guest OS's.

Organization name (entry: Possible; value when omitted: WORKORGANIZATION (*1))

  Enter the organization name displayed in the Windows system properties using between 1 and 50 characters.
  When the OS type is Windows Server 2008, Windows Server 2008 R2, Windows 7, or Windows Vista, an organization name cannot be set for guest OS's.

Product key (entry: Essential; value when omitted: - (*1))

  Omission is not possible. Ensure that you specify a valid product key.

License mode (entry: Not Required; value when omitted: -)

  Even if the license mode is specified, it is not configured in the guest OS.

Maximum number of connections (entry: Not Required; value when omitted: -)

  Even if the maximum number of connections is specified, it is not configured in the guest OS.

Administrator password (entry: Possible; value when omitted: - (*1))

  Enter the same password as that specified for the local administrator account during L-Server creation.
  When specifying a new password, the local administrator account will be overwritten.
  Enter the password using between 1 and 128 alphanumeric characters or symbols.

Hardware clock configuration (entry: Not Required; value when omitted: -)

  Specify one of the following:

  • UTC
  • Local (LOCAL)

Time zone (entry: Possible; value when omitted: the same time zone as the OS of the manager)

  Specify the time zone of the OS.

*1: When the OS property definition file is specified, its values are configured.

Information

OS Property Definition File

By setting the default values in an OS property definition file in advance, the default values of the information on the [OS] tab, etc. are generated when creating an L-Server. Use the UTF-8 character code for OS property definition files.

Location of the Definition File

[Windows]
Installation_folder\Manager\etc\customize_data

[Linux]
/etc/opt/FJSVrcvmr/customize_data

Definition File Name

Definition files can be divided into definitions that are used for each user group and definitions that are common to the system. If the key in the definition file common to the system is the same as that in a definition file for a user group, priority is given to the values indicated in the definition file for the user group.

  • For User Groups

    os_setting_user_group_name.rcxprop

  • Common on System

    os_setting.rcxprop

Definition File Format

In the definition file, an item to define is entered on each line. Each line is entered in the following format.

Key = Value

When adding comments, start the line with a number sign ("#").

Definition File Items

Specify the following items in a definition file.

Table E.18 List of Items

Domain name

  Key: workgroup_name (for Windows), domain_name (for Linux)
  Value: (*1)

DNS search path

  Key: dns_search_path
  Value: (*1)

Full name

  Key: full_name
  Value: (*1)

Organization name

  Key: org_name
  Value: (*1)

Product key

  Key: product_key
  Value: (*1)

License mode

  Key: license_mode
  Value: specify one of the following:

  • "seat" (number of connected clients)
  • "server" (in units of servers: number of servers used at the same time)

Maximum number of connections

  Key: license_users
  Value: (*1)

Administrator password

  Key: admin_password
  Value: (*1)

Hardware clock configuration

  Key: hwclock
  Value: specify either UTC or LOCAL

DNS server (when configuring for each NIC on Windows) (*2)

  Key: nicN_dns_addressX
  Value: specify the IP address using numeric values (between 0 and 255) and periods. When not configuring a DNS server, specify a hyphen ("-").
  Remarks: for N, specify the NIC number; for X, specify primary ("1") or secondary ("2").

DNS server (when configuring all NICs with the same settings on Windows)

  Key: dns_addressX
  Value: specify the IP address using numeric values (between 0 and 255) and periods.
  Remarks: for X, specify primary ("1") or secondary ("2"). Priority is given to nicN_dns_addressX specifications.

*1: For more information on this value, refer to "Table E.17 List of Settings".
*2: When these keys or values are omitted, the values of "dns_addressX" are used, configuring the same settings for all NIC definitions on Windows.

Example

An example definition file is indicated below.

# Windows
workgroup_name = WORKGROUP
full_name = WORKNAME
org_name = WORKORGANIZATION
product_key = AAAA-BBBB-CCCC-DDDD
license_mode = server
license_users = 5
admin_password = xxxxxxxx
nic1_dns_address1 = 192.168.0.60
nic1_dns_address2 = 192.168.0.61
nic2_dns_address1 =
nic2_dns_address2 =

# Linux
domain_name = localdomain
dns_search_path = test.domain.com
hwclock = LOCAL
dns_address1 = 192.168.0.60
dns_address2 = 192.168.0.61
dns_address3 =

Information

VM Guest Administrator Account Settings Necessary When Creating an L-Server with an Image Specified

When creating an L-Server with an image specified, it is necessary to enter the "administrator password" as a parameter.

The entered "administrator password" is the one set for the Administrator of the built-in administrator account, but on some localized editions of Windows the account name may differ. In addition, when the client OS is Windows 7 or Windows Vista, on standard installations the built-in administrator account is disabled, and the user account created during installation becomes the administrator account.

When an L-Server is created specifying a cloning image collected from a localized edition of Windows or from a client OS, it is necessary to either configure an administrator account and set its password, or change the name of the administrator account to which the "Administrator password" is applied, so that it matches the definition file described below.

Note that when using a definition file, it is not possible to define different administrator ID settings for different versions of images.

Location of the Definition File

[Windows]
Installation_folder\Manager\etc\customize_data

Definition File Name

Definition files can be divided into definitions that are used for each user group and definitions that are common to the system. The definition file of each user group is searched from the start for the administrator account name corresponding to the image. When there is no corresponding definition, the system's common definition file is searched.

Modifications to the definition file are reflected immediately, and become valid for L-Servers created from that point.

  • For User Groups

    image_admin_hyperv_user_group_name.rcxprop

  • Common on System

    image_admin_hyperv.rcxprop

Definition File Format

In the definition file, describe the image name and the administrator account name for which the administrator password is configured, on a single line.

Image_name = "Administrator_account_name"

The Administrator_account_name must be enclosed in double quotes ( " ).

Blank spaces and tabs other than those in the Administrator_account_name are ignored.

It is possible to use an asterisk ("*") as a wildcard in image names. An asterisk matches a string of any length.

When creating an L-Server from an image, the definition file is searched from the start for the corresponding image name, and the specified "Administrator password" will be set for the specified administrator account name.

It is necessary to create the definition files using the following line break code and character codes:

  • Line Break Code

    CR+LF(0x0d0a)

  • Character Code

    Shift-JIS in a Japanese environment, UTF-8 in other environments

Example

An example definition file is indicated below.

  • Image names and administrator account names are set in pairs.

    FR_WIN2003_001 = "Administrator"
    EN_WIN7_001 = "root"
    EN_WIN7_002 = "admin"

  • For image names that start with "FR_WIN", use "Administrator" as the name of the administrator account.

    FR_WIN* = "Administrator"

  • Use "Administrator" as the name of the administrator account for all images. When an image name is specified using only a wildcard, the definition after that line will be ignored.

    * = "Administrator"