This section explains how to define the storage environment settings required for a Resource Orchestrator setup, and the preparations necessary for setting up storage.
Previously, it was difficult to provide servers smoothly when creating physical servers and virtual machines, because the storage units and the storage network had to be configured first.
Using the following functions of Resource Orchestrator, servers can be provided smoothly.
Allocating Storage to a Virtual L-Server
There are two ways to allocate storage to a virtual L-Server:
Allocate disk resources (virtual disks) automatically created from virtual storage resources (datastores)
Through coordination with VM management software, virtual storage resources (such as the file systems of VM guests) that were created in advance are automatically detected by Resource Orchestrator. From the detected virtual storage resources, virtual storage resources meeting virtual L-Server specifications are automatically selected by Resource Orchestrator.
From the automatically selected virtual storage resources, disk resources (such as virtual disks) of the specified size are automatically created and allocated to the virtual L-Server.
[Xen]
GDS single disks can be used as virtual storage.
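As an illustration only, the automatic selection described above can be sketched as follows. The function and field names are hypothetical and are not part of Resource Orchestrator; the actual selection logic may differ.

```python
# Illustrative sketch of automatic virtual storage selection.
# Names and the tie-breaking rule are assumptions, not the actual
# Resource Orchestrator implementation.

def select_virtual_storage(virtual_storage_resources, requested_disk_gb):
    """Pick a detected virtual storage resource (e.g. a datastore)
    with enough free capacity for the requested virtual disk."""
    candidates = [vs for vs in virtual_storage_resources
                  if vs["free_gb"] >= requested_disk_gb]
    if not candidates:
        raise ValueError("no virtual storage meets the L-Server specification")
    # Assumption: prefer the resource with the most free space.
    return max(candidates, key=lambda vs: vs["free_gb"])
```

A disk resource of the specified size would then be created from the selected virtual storage resource and attached to the virtual L-Server.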
Allocate disk resources (raw devices or partitions) that were created in advance [KVM]
1. Create LUNs for the storage units.

LUNs are used for virtual L-Server disks. Create the same number of LUNs as the number of necessary disks. The size of each LUN must be larger than the size of each disk.

2. Make the VM host recognize the LUNs created in step 1 as raw devices.

When migrating VM guests for virtual L-Servers, configure zoning and affinity so that the LUNs can be used as shared disks.

Partitions can also be used for virtual L-Server disks. Create the same number of partitions as the number of necessary disks. The size of each partition must be larger than the size of each disk.

3. Use the rcxadm disk command to register the raw devices or the partitions with Resource Orchestrator as disk resources.

When migrating VM guests for virtual L-Servers, register the raw devices or the partitions shared between multiple VM hosts as disk resources defined to be shared.

4. From the registered disk resources, Resource Orchestrator automatically selects disk resources meeting the virtual L-Server specifications and allocates them to the L-Server.
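The matching rule for pre-created disk resources can be sketched in the same illustrative style (hypothetical names; not the actual implementation). A disk resource qualifies when its size covers the requested disk and, if VM guests will be migrated, when it was registered as shared:

```python
def select_disk_resource(disk_resources, requested_gb, need_shared=False):
    """Choose a registered disk resource (raw device or partition) whose
    size covers the requested disk; when VM guests will be migrated,
    only resources registered as shared are eligible."""
    eligible = [d for d in disk_resources
                if d["size_gb"] >= requested_gb
                and (d["shared"] or not need_shared)]
    if not eligible:
        raise ValueError("no registered disk resource meets the specification")
    # Assumption: pick the smallest disk that still fits, to reduce waste.
    return min(eligible, key=lambda d: d["size_gb"])
```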
Definition File Required when Using a Virtual L-Server
The definition file required when using a virtual L-Server is indicated below.
When configuring Thin Provisioning attributes on a storage pool
Refer to "Configuring Thin Provisioning Attributes for Storage Pools" in "12.2 Resource Pool Operations" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".
When using priority for resource selection on Thin Provisioning
Refer to "Configuration of Priority for Resource Selection on Thin Provisioning" of "E.1.1 Definition Files".
Allocating Storage to a Physical L-Server
There are two ways to allocate storage to a physical L-Server:
Allocate disk resources (LUNs) automatically created from virtual storage resources (RAID groups)
Through coordination with storage products, Resource Orchestrator automatically detects virtual storage resources that were created in advance.
From the detected virtual storage resources, Resource Orchestrator automatically selects virtual storage resources meeting physical L-Server specifications.
From the automatically selected virtual storage resources, Resource Orchestrator creates disk resources of the specified size and allocates them to the physical L-Server.
Allocate disk resources (LUNs) that were created in advance
Through coordination with storage products, Resource Orchestrator automatically detects disk resources that were created in advance.
From the detected disk resources, Resource Orchestrator automatically selects disk resources meeting physical L-Server specifications and allocates them to the L-Server.
The storage allocation method varies depending on the storage units being used.
For the storage allocation methods and storage types, refer to "Table 4.10 Storage Allocation Methods and Storage Types for Physical L-Servers" in "4.3.1.2 Storage Configuration".
For the storage units that can be connected with physical L-Servers, refer to "Table 1.61 Storage Units that can be Connected with L-Servers on Physical Servers" in "1.5 Hardware Environment".
Effective Utilization of Storage Using Thin Provisioning
Thin provisioning is technology for virtualizing storage capacities.
It enables efficient utilization of storage.
This function does not require the full storage capacity to be secured in advance; capacity is secured and extended according to how much is actually being used.
Thin provisioning can be achieved using the following two methods:
Method for using the thin provisioning of a storage unit
Resource Orchestrator can be coordinated with the thin provisioning of ETERNUS storage.
Method for using the thin provisioning of server virtualization software
Resource Orchestrator can be coordinated with VMware vStorage Thin Provisioning.
For details on linking with thin provisioning, refer to "4.3.1.2 Storage Configuration".
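The behavior described above can be illustrated with a toy model (an assumption-laden sketch, not how any storage unit actually implements thin provisioning): logical capacity is over-committed at volume creation, while physical capacity is consumed only as data is written.

```python
class ThinPool:
    """Toy model of thin provisioning: volumes are created with a logical
    size, but physical capacity is consumed only as data is written."""

    def __init__(self, physical_gb):
        self.physical_gb = physical_gb
        self.used_gb = 0
        self.volumes = {}  # name -> [logical_gb, written_gb]

    def create_volume(self, name, logical_gb):
        # Logical size may exceed remaining physical capacity (over-commit);
        # nothing is secured in advance.
        self.volumes[name] = [logical_gb, 0]

    def write(self, name, gb):
        logical, written = self.volumes[name]
        if written + gb > logical:
            raise ValueError("write exceeds the volume's logical size")
        if self.used_gb + gb > self.physical_gb:
            raise RuntimeError("physical pool exhausted; extend the pool")
        self.volumes[name][1] += gb
        self.used_gb += gb
```

In this model, two 80 GB volumes can be created on a 100 GB pool; the pool only becomes exhausted when actual writes exceed 100 GB, at which point its physical capacity must be extended.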
Effective Utilization of Storage Using Automatic Storage Layering
Automatic Storage Layering is a feature that monitors data access frequency in mixed environments that contain different storage classes and disk types. It then automatically relocates data to the most appropriate storage devices based on set data usage policies.
Resource Orchestrator can be coordinated with Automatic Storage Layering for ETERNUS storage. For details on coordination with Automatic Storage Layering, refer to "4.3.1.2 Storage Configuration".
Prerequisites when Creating a Physical L-Server
For details on the prerequisites when creating a physical L-Server, refer to "Prerequisites when Creating a Physical L-Server" in "4.3.1.2 Storage Configuration".
Storage Configuration when Using a Physical Server as an L-Server
For details on the storage configuration when using a physical server as an L-Server, refer to "Storage Configuration when Creating a Physical L-Server" in "4.3.1.2 Storage Configuration".
Storage resources are categorized into the following two types.
The resource registration method differs depending on the type of storage resource.
Virtual Storage Resources
When storage management software is registered to Resource Orchestrator, the storage information controlled by the storage management software is automatically obtained and detected as a virtual storage resource. Therefore, it is not necessary to register virtual storage resources individually.
Disk Resources
For disk resources created in advance such as LUNs, storage information is automatically obtained when storage management software is registered, and they are detected as disk resources. Therefore, it is not necessary to register disk resources individually.
Disks created in advance using storage management software can be managed as disk resources.
Detected virtual storage resources and disk resources must be registered to the storage pool.
For details on registering to a storage pool, refer to "7.5 Storage Resources" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".
Automatic Detection of Storage Resources
When addition or modification of storage is performed using storage management software or VM management software, periodic queries are made to the storage management software or VM management software to detect changes to the configuration/status of storage. The interval between regular updates varies according to the number of storage resources.
By right-clicking a storage resource on the ROR console orchestration tree and selecting [Update] on the displayed menu, the configuration/status of the storage management software and VM management software is refreshed without waiting for the regular update.
After that, perform registration in the storage pool.
Definition File Required when Using a Physical L-Server
The definition file required when using a physical L-Server is indicated below.
When using the following storage
ETERNUS Storage
EMC CLARiiON Storage
EMC Symmetrix DMX Storage
EMC Symmetrix VMAX Storage
Refer to "6.1.1 Creating Definition Files Combining Ports of SAN Storage".
When using ESC as storage management software
Refer to "Format Selection for the Names of Virtual Storage Resources and Disk Resources Managed by ESC" of "D.5.1 Definition Files".
When using EMC Navisphere Manager or EMC Solutions Enabler as storage management software
For details, refer to "Definition File for EMC Storage" of "D.5.1 Definition Files".
When configuring Thin Provisioning attributes on a storage pool
Refer to "Configuring Thin Provisioning Attributes for Storage Pools" in "12.2 Resource Pool Operations" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".
When using priority for resource selection on Thin Provisioning and Automatic Storage Layering
Refer to "Configuration of Priority for Resource Selection on Thin Provisioning and Automatic Storage Layering" of "D.5.1 Definition Files".
When using dynamic LUN mirroring
For details, refer to "Creating Mirroring Definition Files for Dynamic LUN Mirroring" of "D.5.1 Definition Files".
This section explains how to configure a storage environment and define the settings required for Resource Orchestrator installation.
Storage Allocation Methods and Storage Types
The storage allocation method varies depending on the storage units being used.
The following storage allocation methods and storage types are available for physical L-Servers.
Allocation Method | Storage Type
---|---
Allocate disk resources automatically created from virtual storage resources | ETERNUS Storage, EMC CLARiiON Storage
Allocate disk resources that were created in advance | EMC Symmetrix DMX Storage, EMC Symmetrix VMAX Storage
Storage Configuration
Decide the storage configuration necessary for the system.
The storage configurations supported by Resource Orchestrator are as follows:
Server (VM host) Type | L-Server System Disk | L-Server Data Disk
---|---|---
Physical | SAN storage | SAN storage
Physical | iSCSI storage (*1, *2) | iSCSI storage (*1, *3)
VMware | Storage configured for datastores of ESX/ESXi (VMFS Version 3 or later, or NFS mount) | Storage configured for datastores of ESX/ESXi (VMFS Version 3 or later, or NFS mount)
Hyper-V | Storage configured for Cluster Shared Volumes (CSV) of MSFC | Storage configured for Cluster Shared Volumes (CSV) of MSFC
*1: Available when ETERNUS storage and NetApp storage are used.
*2: When using Linux for a physical L-Server, and iSCSI storage for a system disk, it is not possible to create an L-Server using a cloning image.
*3: When creating an L-Server, iSCSI storage is not allocated to the L-Server as a data disk. Manually allocate the iSCSI storage to the L-Server after starting the L-Server. Attaching or detaching iSCSI storage to or from an L-Server cannot be performed using Resource Orchestrator; perform those operations manually. For details on data disk allocation for iSCSI storage, refer to "Physical L-Server Data Disk for iSCSI Boot" in "Information" below.
Information
Physical L-Server Data Disk for iSCSI Boot
When Using ETERNUS Storage
Using storage management software, the data disk can be accessed from managed servers by defining LUNs of the iSCSI boot disk and of the data disk in the same Affinity group.
When Using NetApp Storage
Using storage management software, the data disk can be accessed from managed servers by defining LUNs of the iSCSI boot disk and of the data disk in the same igroup.
Linking with Thin Provisioning
Resource Orchestrator can be linked with the thin provisioning of storage units and server virtualization software.
Linking with the thin provisioning of ETERNUS storage
With ETERNUS storage, a virtual resource pool comprised of one or more RAID groups is called a Thin Provisioning Pool (hereinafter TPP).
Also, a virtual volume that appears to have a greater capacity than the physically allocated disk capacity is called a Thin Provisioning Volume (hereinafter TPV).
Capacity is allocated to TPVs from TPPs.
With Resource Orchestrator, TPPs can be managed as virtual storage resources.
The virtual storage resource of a TPP is called a virtual storage resource with thin provisioning attributes set.
The virtual storage resource of a RAID group is called a virtual storage resource with thick provisioning attributes set.
With Resource Orchestrator, ESC can be used to create a TPV in advance and manage that TPV as a disk resource.
The disk resource of a TPV is called a disk with thin provisioning attributes set.
The disk resource of an LUN is called a disk with thick provisioning attributes set.
Coordination with VMware vStorage Thin Provisioning
In VMware, a virtual disk with a thin provisioning configuration is called a thin format virtual disk.
With Resource Orchestrator, thin format virtual disks can be managed as disk resources.
A thin format virtual disk is called a disk with thin provisioning attributes set.
A thick format disk resource is called a disk with thick provisioning attributes set.
Storage resource management
With Resource Orchestrator, storage resources (virtual storage resources and disk resources) can be managed in a storage pool. Storage pools must take into account the existence of thin provisioning attributes.
The following resources can be registered in a storage pool with thin provisioning attributes set:
Virtual storage resources with thin provisioning attributes set
Disk resources with thin provisioning attributes set
The following resources can be registered in a storage pool without thin provisioning attributes set:
Virtual storage resources with thick provisioning attributes set
Disk resources with thick provisioning attributes set
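The registration rules above amount to a simple attribute match, sketched here for illustration (the mapping table is hypothetical shorthand for the classifications given in this section, not a Resource Orchestrator data structure):

```python
# Hypothetical shorthand for the classifications in this section:
# TPP, TPV, and thin format virtual disks carry the thin provisioning
# attribute; RAID groups, LUNs, and thick format virtual disks do not.
PROVISIONING_ATTRIBUTE = {
    "TPP": "thin",                         # ETERNUS virtual storage
    "RAID group": "thick",                 # ETERNUS virtual storage
    "TPV": "thin",                         # ETERNUS disk resource
    "LUN": "thick",                        # ETERNUS disk resource
    "thin format virtual disk": "thin",    # VMware disk resource
    "thick format virtual disk": "thick",  # VMware disk resource
}

def can_register(resource_kind, pool_has_thin_attribute):
    """A resource may be registered only in a storage pool whose thin
    provisioning attribute matches the resource's own attribute."""
    is_thin = PROVISIONING_ATTRIBUTE[resource_kind] == "thin"
    return is_thin == pool_has_thin_attribute
```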
[VMware]
Thin provisioning cannot be set for VMware datastores. Therefore, the following settings must be specified in Resource Orchestrator.
When creating disk resources from virtual storage resources registered in a storage pool with thin provisioning attributes set, set the thin format and allocate the disk resources to an L-Server.
When creating disk resources from virtual storage resources registered in a storage pool without thin provisioning attributes set, set the thick format and allocate the disk resources to an L-Server.
For the method to set thin provisioning for a storage pool, refer to "Configuring Thin Provisioning Attributes for Storage Pools" in "12.2 Resource Pool Operations" of the "User's Guide for Infrastructure Administrators (Resource Management) CE".
Note
[VMware]
When creating a virtual L-Server with a cloning image specified, the provisioning attribute of the cloning image takes preference over the provisioning attribute of the storage pool.
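For illustration, the VMware format-selection rules above, including the Note about cloning images, can be condensed into one hypothetical function (not an actual Resource Orchestrator API):

```python
def choose_disk_format(pool_has_thin_attribute, image_format=None):
    """Decide the virtual disk format when creating a disk resource from
    virtual storage on VMware. The provisioning attribute of a specified
    cloning image takes precedence over the storage pool's attribute."""
    if image_format is not None:
        return image_format  # cloning image attribute wins
    return "thin" if pool_has_thin_attribute else "thick"
```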
Coordination with Automatic Storage Layering
In Resource Orchestrator, coordination with Automatic Storage Layering for storage units is available.
Coordination with Automatic Storage Layering for ETERNUS Storage
In ETERNUS storage, the physical disk pool created using Automatic Storage Layering is called a Flexible TieR Pool (hereafter FTRP). The virtual volume created using Automatic Storage Layering is called a Flexible Tier Volume (hereafter FTV). FTV is allocated from FTRP.
In Resource Orchestrator, an FTRP can be managed as a virtual storage resource. The virtual storage resource for FTRP, similar to a TPP, is called a virtual storage resource for which the Thin Provisioning attribute has been configured.
In Resource Orchestrator, after creating an FTV using ESC, that FTV can be managed as a disk resource. The disk resource for FTV, similar to a TPV, is called a disk for which the Thin Provisioning attribute has been configured.
Management of FTRP and FTV
In Resource Orchestrator, FTRP and FTV can be managed as storage resources in storage pools.
FTRP and FTV are considered the same as TPP and TPV for Thin Provisioning. For details, refer to "Linking with Thin Provisioning".
Note
Users are recommended to operate the storage pool used for registering FTRP and FTV separately from the storage pool used for registering TPP and TPV.
If both are operated in the same storage pool, the storage may not be used in a way that takes advantage of their respective properties, since the virtual storage selected will change depending on the amount of free space available when disks are allocated.
Storage Configuration when Creating a Physical L-Server
The storage configuration when creating a physical L-Server is indicated below.
When using a Fibre Channel connection, multiple storage units can be connected to a single L-Server (when VIOM connections are not supported, only one storage unit can be connected). When using an iSCSI connection, one storage unit can be connected to a single L-Server.
Sharing of storage between multiple L-Servers is supported.
Note
Local disks are not supported. Do not connect local disks.
For details on required VM management software and storage management software, refer to "1.4.2.2 Required Software".
For details on supported storage units and Fibre Channel switches, refer to "1.5 Hardware Environment".
Prerequisites when Creating a Physical L-Server
L-Servers support SAN boot and iSCSI boot configurations.
When using a physical server as an L-Server, it is necessary that connection using VIOM or HBA address rename is supported. For details on connection using VIOM or HBA address rename, refer to "4.3.1 Deciding the Storage Environment" and "4.3.2 Configuring the Storage Environment".
Usage methods of VIOM and HBA address rename differ depending on the hardware of managed servers used to configure a physical L-Server.
Blade Servers
Use VIOM.
Rack Mount Servers
Use HBA address rename.
For L-Server SAN storage paths and iSCSI storage paths, multipaths (two paths) are supported.
Configurations with two or fewer HBA ports on managed servers are supported.
When using MMB firmware for which Fibre Channel card information cannot be obtained by blade servers, only configurations where Fibre Channel cards are mounted in expansion slot 2 are supported. The MMB firmware versions from which Fibre Channel card information can be obtained are as follows:
PRIMERGY BX900 series servers: MMB firmware version 4.70 or later
PRIMERGY BX400 series servers: MMB firmware version 6.22 or later
In the case of blade servers, do not set the following parameters during setup of VIOM:
WWN Address Range
MAC Address Range
HBA and Storage Unit Configuration
When designing systems, define the relationships of physical servers and HBA WWNs on servers, and the relationships of storage volumes and HBA WWNs on storage.
Configure SAN Storage Environments
SAN storage environment configurations differ according to the L-Server type in use, "Physical" or "Virtual".
When using a physical server as an L-Server, refer to "Appendix D Design and Configuration when Creating a Physical L-Server".
When using server virtualization software, refer to the information for the software being used in "Appendix E Design and Configuration for Creating Virtual L-Servers".
Configure iSCSI Storage Environments
When using iSCSI boot on physical L-Servers, create LUNs that can be connected to L-Servers in advance.
For details, refer to "D.3.1 When Using ETERNUS Storage" and "D.3.2 When Using NetApp FAS Storage".
Dynamic LUN Mirroring Settings
If dynamic LUN mirroring is to be used on the physical L-Server, configure the environment so that copying between ETERNUS storage units is possible.
For details on the configuration method, refer to the "ETERNUS SF AdvancedCopy Manager Operator's Guide for Copy Control Module".
System configuration requires that the relationship between physical servers and HBA WWNs from the perspective of the server, and the relationship between storage volumes and HBA WWNs from the perspective of storage devices be defined clearly.
An example where blades connect to storage devices via multiple paths using two HBA ports is shown below.
Refer to the storage device manual of each storage device for details.
Note
Resource Orchestrator does not support configurations where managed servers are mounted with three or more HBA ports.
Choosing WWNs
Choose the WWNs to use with the HBA address rename or VIOM function.
After WWNs have been chosen, associate them with their corresponding operating systems (applications) and physical servers (on the server side), and with corresponding volume(s) (on the storage side).
Using HBA address rename or VIOM, storage-side settings can be defined without prior knowledge of the actual WWN values of a server's HBAs. This makes it possible to design a server and storage system without having the involved physical servers on hand.
When HBA address rename is used, the value provided by the "I/O virtualization option" is used as the WWN.
When VIOM is used, set the WWN value with either one of the following values:
The value provided by the "I/O virtualization option"
The value selected automatically from the address range at VIOM installation
To prevent data damage caused by WWN conflicts, it is recommended to use the value provided by the "I/O virtualization option".
Information
Specify the unique WWN value provided by the "I/O virtualization option". This can prevent unpredictable conflicts of WWNs.
Note
Do not use the same WWN for both HBA address rename and VIOM. If the same WWN is used, there is a chance data will be damaged.
The WWN format used by the HBA address rename and VIOM functions is as follows:
The "2x" part at the start of the provided WWN can define either a WWNN or a WWPN. Define and use each of them as follows.
20: Use as a WWNN
2x: Use as a WWPN
With HBA address rename, x will be allocated to the I/O addresses of HBA adapters in descending order.
I/O addresses of HBA adapters can be confirmed using the HBA BIOS or other tools provided by HBA vendors.
Note
With HBA address rename, as WWNs are allocated to the I/O addresses of HBAs in descending order, the order may not match the port order listed in the HBA.
For details, refer to "C.2 WWN Allocation Order during HBA address rename Configuration".
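As a hypothetical sketch of this numbering scheme (the base WWN value and HBA I/O addresses below are invented for illustration), replacing the leading "x" yields the WWNN and the per-port WWPNs:

```python
def derive_wwns(base_wwn, hba_io_addresses):
    """Derive the WWNN (x = 0) and per-port WWPNs (x = 1, 2, ...) from a
    provided WWN of the form "2x:...". With HBA address rename, port
    numbers are assigned in descending order of HBA I/O address."""
    assert base_wwn.startswith("2x:"), "expected a WWN starting with 2x:"
    tail = base_wwn[3:]
    wwnn = "20:" + tail
    wwpns = {}
    # Descending I/O address order, as noted for HBA address rename.
    for i, addr in enumerate(sorted(hba_io_addresses, reverse=True), start=1):
        wwpns[addr] = "2{}:".format(i) + tail
    return wwnn, wwpns
```

Because allocation follows I/O address order rather than HBA port order, the resulting WWPN numbering may not match the port order listed in the HBA, as the Note above warns.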
The WWN chosen here would be used for the system design of the servers and storage.
Server-side Design
WWNs are used in server-side design by assigning one unique WWN to each server.
Storage-side Design
One or more volumes are chosen for each server, and the corresponding WWN assigned to each server in the server-side design is configured on the storage-side for those volumes.
Defining WWN settings for VIOM
VIOM should be configured first. Then, storage devices should also be configured in accordance with the WWN settings that were defined within VIOM.
System configuration requires that the relationship between physical servers and the IQN of the iSCSI adapter from the perspective of the server, and the relationship between storage volumes and the IQN of iSCSI from the perspective of storage devices, be defined clearly.
An example where blades connect to storage devices via multiple paths using two iSCSI interface ports is shown below.
Refer to the storage device manual of each storage device for details.
Choosing IQNs
Choose the IQNs to use with the iSCSI.
After IQNs have been chosen, associate them with their corresponding operating systems (applications) and physical servers (on the server side), and with corresponding volume(s) (on the storage side).
IQNs are made up of the following:
Type identifier "iqn."
Domain acquisition date
Domain name
Character string assigned by domain acquirer
IQNs must be unique.
Create a unique IQN by using the server name, or the MAC address provided by the "I/O virtualization option" that is to be allocated to the network interface of the server, as part of the IQN.
If IQNs overlap, there is a chance that data will be damaged when accessed simultaneously.
An example of using the virtual MAC address allocated by the "I/O virtualization option" is given below.
Example
When the MAC address is 00:00:00:00:00:FF
IQN iqn.2010-04.com.fujitsu:0000000000ff
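The construction of this example can be sketched as follows; the date and domain defaults simply mirror the example above and are not prescribed values:

```python
def iqn_from_mac(mac, date="2010-04", domain="com.fujitsu"):
    """Build an IQN from the virtual MAC address provided by the
    "I/O virtualization option": type identifier "iqn.", domain
    acquisition date, domain name, then a unique string derived
    from the MAC address."""
    suffix = mac.replace(":", "").lower()
    return "iqn.{}.{}:{}".format(date, domain, suffix)
```

Deriving the suffix from a MAC address that is itself unique keeps the resulting IQNs unique, avoiding the simultaneous-access data damage described above.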
The IQN chosen here would be used for the system design of the servers and storage.
Server-side Design
IQNs are used in server-side design by assigning one unique IQN to each server.
Storage-side Design
One or more volumes are chosen for each server, and the corresponding IQN assigned to each server in the server-side design is configured on the storage-side for those volumes.