ServerView Resource Orchestrator Cloud Edition V3.0.0 Setup Guide

4.3.1 Deciding the Storage Environment

This section explains how to define the storage environment settings required for a Resource Orchestrator setup.


4.3.1.1 Storage Environment Preparation

This section explains the preparations for setting up storage.

When creating physical servers and virtual machines, configuring storage units and the storage network used to be necessary, which made it difficult to provide servers smoothly.

The following functions of Resource Orchestrator make it possible to provide servers smoothly.


Allocating Storage to a Virtual L-Server

There are two ways to allocate storage to a virtual L-Server:

  • Allocate disk resources (virtual disks) automatically created from virtual storage resources (datastores)

  • Allocate disk resources (raw devices or partitions) that were created in advance

Definition File Required when Using a Virtual L-Server

The definition file required when using a virtual L-Server is indicated below.

Allocating Storage to a Physical L-Server

There are two ways to allocate storage to a physical L-Server:

  • Allocate disk resources automatically created from virtual storage resources

  • Allocate disk resources that were created in advance

Storage Allocation Methods and Storage Types

The storage allocation method varies depending on the storage units being used.
For the storage allocation methods and storage types, refer to "Table 4.10 Storage Allocation Methods and Storage Types for Physical L-Servers" in "4.3.1.2 Storage Configuration".

Storage Units that can be Connected with Physical L-Servers

For the storage units that can be connected with physical L-Servers, refer to "Table 1.61 Storage Units that can be Connected with L-Servers on Physical Servers" in "1.5 Hardware Environment".


Effective Utilization of Storage Using Thin Provisioning

Thin provisioning is a technology for virtualizing storage capacity.

It enables efficient utilization of storage.

Rather than requiring the full storage capacity to be secured in advance, it secures and extends capacity according to how much is actually being used.

Thin provisioning can be achieved using the following two methods:

  • Thin provisioning of storage units

  • Thin provisioning of server virtualization software
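As a conceptual illustration only (not Resource Orchestrator code; the class, names, and values below are hypothetical), the following Python sketch simulates a thin-provisioned volume that secures physical capacity only as data is actually written:

    # Conceptual sketch: a thin-provisioned volume advertises its full
    # virtual capacity but secures physical capacity only on demand.
    class ThinVolume:
        def __init__(self, virtual_capacity_gb):
            self.virtual_capacity_gb = virtual_capacity_gb  # advertised size
            self.allocated_gb = 0                           # physically secured

        def write(self, used_gb):
            if used_gb > self.virtual_capacity_gb:
                raise ValueError("exceeds virtual capacity")
            # Physical capacity is secured and extended according to
            # how much is actually being used.
            self.allocated_gb = max(self.allocated_gb, used_gb)

    vol = ThinVolume(virtual_capacity_gb=1000)
    vol.write(50)
    print(vol.allocated_gb)  # 50 -- only the capacity in use is secured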

For details on linking with thin provisioning, refer to "4.3.1.2 Storage Configuration".


Effective Utilization of Storage Using Automatic Storage Layering

Automatic Storage Layering is a feature that monitors data access frequency in mixed environments that contain different storage classes and disk types. It then automatically relocates data to the most appropriate storage devices based on set data usage policies.
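As a conceptual illustration only (not Resource Orchestrator or ETERNUS code; the threshold, block names, and tier names are hypothetical), the following sketch shows the idea of relocating data between tiers according to monitored access frequency:

    # Conceptual sketch: place frequently accessed data on a fast tier and
    # infrequently accessed data on a capacity tier, per a simple policy.
    ACCESS_THRESHOLD = 100  # hypothetical policy: accesses per monitoring period

    access_counts = {"block-a": 500, "block-b": 3}  # block -> observed accesses

    placement = {}
    for block, count in access_counts.items():
        placement[block] = "fast-tier" if count >= ACCESS_THRESHOLD else "capacity-tier"

    print(placement)  # {'block-a': 'fast-tier', 'block-b': 'capacity-tier'}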

Resource Orchestrator can be coordinated with Automatic Storage Layering for ETERNUS storage. For details on coordination with Automatic Storage Layering, refer to "4.3.1.2 Storage Configuration".


Prerequisites when Creating a Physical L-Server

For details on the prerequisites when creating a physical L-Server, refer to "Prerequisites when Creating a Physical L-Server" in "4.3.1.2 Storage Configuration".


Storage Configuration when Using a Physical Server as an L-Server

For details on the storage configuration when using a physical server as an L-Server, refer to "Storage Configuration when Creating a Physical L-Server" in "4.3.1.2 Storage Configuration".

Storage resources are categorized into the following two types:

  • Virtual storage resources

  • Disk resources

The resource registration method differs depending on the type of storage resource.

Disks created in advance using storage management software can be managed as disk resources.
Detected virtual storage resources and disk resources must be registered to the storage pool.
For details on registering to a storage pool, refer to "7.5 Storage Resources" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".


Automatic Detection of Storage Resources

When storage is added or modified using storage management software or VM management software, Resource Orchestrator periodically queries that software to detect changes to the configuration and status of storage. The interval between these regular updates varies according to the number of storage resources.
To refresh the configuration and status information from the storage management software and VM management software without waiting for the regular update, right-click the storage resource in the ROR console orchestration tree and select [Update] from the displayed menu.
After that, perform registration in the storage pool.


Definition File Required when Using a Physical L-Server

The definition file required when using a physical L-Server is indicated below.

4.3.1.2 Storage Configuration

This section explains how to configure a storage environment and define the settings required for Resource Orchestrator installation.


Storage Allocation Methods and Storage Types

The storage allocation method varies depending on the storage units being used.
The following storage allocation methods and storage types are available for physical L-Servers.

Table 4.10 Storage Allocation Methods and Storage Types for Physical L-Servers

Allocation Method: Allocate disk resources automatically created from virtual storage resources

Storage Type:

  • ETERNUS Storage

  • NetApp FAS Storage

Allocation Method: Allocate disk resources that were created in advance

Storage Type:

  • ETERNUS Storage

  • NetApp FAS Storage

  • EMC CLARiiON Storage

  • EMC Symmetrix DMX Storage

  • EMC Symmetrix VMAX Storage


Storage Configuration

Decide the storage configuration necessary for the system.
The storage configurations supported by Resource Orchestrator are as follows:

Table 4.11 Supported Storage Configurations

Server (VM host) Type: Physical

  • L-Server System Disk: SAN storage, or iSCSI storage (*1, *2)

  • L-Server Data Disk: SAN storage, or iSCSI storage (*1, *3)

Server (VM host) Type: VMware

  • L-Server System Disk and Data Disk: Storage configured for datastores of ESX/ESXi (VMFS Version 3 or later, or NFS mount)

Server (VM host) Type: Hyper-V

  • L-Server System Disk and Data Disk: Storage configured for Cluster Shared Volumes (CSV) of MSFC

*1: Available when ETERNUS storage and NetApp storage are used.
*2: When using Linux for a physical L-Server, and iSCSI storage for a system disk, it is not possible to create an L-Server using a cloning image.
*3: When creating an L-Server, iSCSI storage is not allocated to the L-Server as a data disk. Allocate the iSCSI storage to the L-Server manually after starting the L-Server. Attaching and detaching iSCSI storage to and from an L-Server cannot be performed using Resource Orchestrator; perform those operations manually. For details on data disk allocation for iSCSI storage, refer to "Physical L-Server Data Disk for iSCSI Boot" in the following Information.

Information

Physical L-Server Data Disk for iSCSI Boot

  • When Using ETERNUS Storage

    Using storage management software, the data disk can be accessed from managed servers by defining LUNs of the iSCSI boot disk and of the data disk in the same Affinity group.

  • When Using NetApp Storage

    Using storage management software, the data disk can be accessed from managed servers by defining LUNs of the iSCSI boot disk and of the data disk in the same igroup.


Linking with Thin Provisioning

Resource Orchestrator can be linked with the thin provisioning of storage units and server virtualization software.

Coordination with Automatic Storage Layering

In Resource Orchestrator, coordination with Automatic Storage Layering for storage units is available.

Storage Configuration when Creating a Physical L-Server

The storage configuration when creating a physical L-Server is indicated below.

Note

Local disks are not supported. Do not connect local disks.

For details on required VM management software and storage management software, refer to "1.4.2.2 Required Software".
For details on supported storage units and Fibre Channel switches, refer to "1.5 Hardware Environment".


Prerequisites when Creating a Physical L-Server

HBA and Storage Unit Configuration

When designing systems, define the relationships of physical servers and HBA WWNs on servers, and the relationships of storage volumes and HBA WWNs on storage.
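The following sketch (all server names, volume names, and WWN values are hypothetical placeholders, and the code is illustrative rather than part of Resource Orchestrator) shows one way to record both relationships at design time so that the server side and the storage side can be cross-checked:

    # Hypothetical design-time mapping: which HBA WWNs belong to each
    # physical server, and which WWNs each storage volume accepts.
    server_hbas = {
        "blade-01": ["20:00:00:17:42:00:00:01", "20:00:00:17:42:00:00:02"],
    }
    volume_access = {
        "volume-001": ["20:00:00:17:42:00:00:01", "20:00:00:17:42:00:00:02"],
    }

    # Cross-check: every WWN granted access to a volume must belong to a server.
    known_wwns = {w for wwns in server_hbas.values() for w in wwns}
    for volume, wwns in volume_access.items():
        assert set(wwns) <= known_wwns, "unknown WWN mapped to " + volume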


Configure SAN Storage Environments

SAN storage environment configurations differ according to the L-Server type in use, "Physical" or "Virtual".
When using a physical server as an L-Server, refer to "Appendix D Design and Configuration when Creating a Physical L-Server".
When using server virtualization software, refer to the information for the software being used in "Appendix E Design and Configuration for Creating Virtual L-Servers".


Configure iSCSI Storage Environments

When using iSCSI boot on physical L-Servers, create LUNs that can be connected to L-Servers in advance.
For details, refer to "D.3.1 When Using ETERNUS Storage" and "D.3.2 When Using NetApp FAS Storage".


Dynamic LUN Mirroring Settings

If dynamic LUN mirroring is to be used on the physical L-Server, configure the environment so that copying between ETERNUS storage machines is possible.
For details on the configuration method, refer to the "ETERNUS SF AdvancedCopy Manager Operator's Guide for Copy Control Module".


4.3.1.3 HBA and Storage Device Settings

System configuration requires that the relationship between physical servers and HBA WWNs from the perspective of the server, and the relationship between storage volumes and HBA WWNs from the perspective of storage devices, be clearly defined.
An example where blades connect to storage devices via multiple paths using two HBA ports is shown below.
Refer to the storage device manual of each storage device for details.

Note

Resource Orchestrator does not support configurations where managed servers are mounted with three or more HBA ports.

Figure 4.21 WWN System Design


Choosing WWNs

Choose the WWNs to use with the HBA address rename or VIOM function.
After WWNs have been chosen, associate them with their corresponding operating systems (applications) and physical servers (on the server side), and with corresponding volume(s) (on the storage side).
Using HBA address rename or VIOM, storage-side settings can be defined without prior knowledge of the actual WWN values of a server's HBAs. This makes it possible to design a server and storage system without having the involved physical servers on hand.

When HBA address rename is used, the value provided by the "I/O virtualization option" is used as the WWN.
When VIOM is used, set the WWN value to either of the following:

  • The WWN value provided by the "I/O virtualization option"

  • A WWN value defined by the user

To prevent data damage caused by WWN conflicts, you are advised to use the value provided by the "I/O virtualization option".

Information

Specify the unique WWN value provided by the "I/O virtualization option". This can prevent unpredictable conflicts of WWNs.

Note

Do not use the same WWN for both HBA address rename and VIOM. If the same WWN is used, there is a chance data will be damaged.

The WWN format used by the HBA address rename and VIOM functions is as follows:

The "2x" part at the start of the provided WWN can define either a WWNN or a WWPN. Define and use each of them as follows.

With HBA address rename, values of x are allocated to HBA adapters in descending order of their I/O addresses.
I/O addresses of HBA adapters can be confirmed using the HBA BIOS or other tools provided by HBA vendors.

Note

With HBA address rename, because WWNs are allocated to the I/O addresses of HBAs in descending order, the allocation order may not match the port order listed for the HBA.

For details, refer to "C.2 WWN Allocation Order during HBA address rename Configuration".
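The following sketch (the port names and I/O address values are hypothetical) illustrates only the ordering rule: ports are ranked by I/O address in descending order, which may differ from the listed port order:

    # Illustrative only: rank HBA ports by I/O address, highest first.
    hba_io_addresses = {"port-0": 0xE800, "port-1": 0xEC00}  # hypothetical values

    # WWN values are allocated following this descending I/O address order.
    order = sorted(hba_io_addresses, key=hba_io_addresses.get, reverse=True)
    print(order)  # ['port-1', 'port-0'] -- allocation order, not port order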

The WWN chosen here will be used for the system design of the servers and storage.

Defining WWN settings for VIOM

Configure VIOM first. Then, configure the storage devices in accordance with the WWN settings defined within VIOM.


4.3.1.4 iSCSI Interface and Storage Device Settings (iSCSI)

System configuration requires that the relationship between physical servers and the IQN of the iSCSI adapter from the perspective of the server, and the relationship between storage volumes and the IQN of iSCSI from the perspective of storage devices, be defined clearly.
An example where blades connect to storage devices via multiple paths using two iSCSI interface ports is shown below.
Refer to the storage device manual of each storage device for details.

Figure 4.22 IQN System Design


Choosing IQNs

Choose the IQNs to use for iSCSI.
After IQNs have been chosen, associate them with their corresponding operating systems (applications) and physical servers (on the server side), and with corresponding volume(s) (on the storage side).
IQNs are made up of the following:

  • The type identifier, "iqn."

  • The date (year and month) the naming authority acquired the domain name

  • The reversed domain name of the naming authority

  • An optional string, prefixed with a colon, that makes the name unique (such as a host name or MAC address)

IQNs must be unique.
Create a unique IQN by using the server name, or the MAC address provided by the "I/O virtualization option" that is to be allocated to the network interface of the server, as part of the IQN.
If IQNs overlap, there is a chance that data will be damaged when accessed simultaneously.
An example of using the virtual MAC address allocated by the "I/O virtualization option" is given below.

Example

When the MAC address is 00:00:00:00:00:FF

IQN: iqn.2010-04.com.fujitsu:0000000000ff
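A minimal sketch of this conversion (the IQN prefix is taken from the example above; substitute your own naming authority):

    # Build the IQN by removing the colons from the MAC address and
    # converting the hexadecimal digits to lower case.
    def mac_to_iqn(mac, prefix="iqn.2010-04.com.fujitsu"):
        return prefix + ":" + mac.replace(":", "").lower()

    print(mac_to_iqn("00:00:00:00:00:FF"))
    # iqn.2010-04.com.fujitsu:0000000000ff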

The IQN chosen here will be used for the system design of the servers and storage.