FUJITSU Software ServerView Resource Orchestrator Cloud Edition V3.1.2 Design Guide

2.5 Hardware Environment

The hardware conditions described in the tables below must be met when using Resource Orchestrator.

Required Hardware Conditions for Managers and Agents

Table 2.71 Required Hardware

Software: Manager

Hardware:

PRIMERGY BX series servers
PRIMERGY RX series servers
PRIMERGY TX series servers

Remarks:

The CPU must be a multi-core CPU.
For details on the amount of memory necessary for Resource Orchestrator, refer to "2.4.2.8 Memory Size". Consider the amount of memory necessary for the required software as well as the amount necessary for Resource Orchestrator itself (see the sizing sketch after Table 2.71).

Software: Agent

Hardware:

PRIMERGY BX620 S4
PRIMERGY BX620 S5
PRIMERGY BX620 S6
PRIMERGY BX920 S1
PRIMERGY BX920 S2
PRIMERGY BX920 S3
PRIMERGY BX920 S4
PRIMERGY BX922 S2
PRIMERGY BX924 S2
PRIMERGY BX924 S3
PRIMERGY BX924 S4
PRIMERGY BX960 S1
PRIMERGY BX2560 M1
PRIMERGY RX100 S5
PRIMERGY RX100 S6
PRIMERGY RX200 S4
PRIMERGY RX200 S5
PRIMERGY RX200 S6
PRIMERGY RX200 S7
PRIMERGY RX200 S8
PRIMERGY RX300 S4
PRIMERGY RX300 S5
PRIMERGY RX300 S6
PRIMERGY RX300 S7
PRIMERGY RX300 S8
PRIMERGY RX600 S4
PRIMERGY RX600 S5
PRIMERGY RX900 S1
PRIMERGY RX2520 M1
PRIMERGY RX2530 M1
PRIMERGY RX2540 M1
PRIMERGY RX4770 M1
PRIMERGY TX150 S6
PRIMERGY TX150 S7
PRIMERGY TX200 S5
PRIMERGY TX200 S6
PRIMERGY TX300 S4
PRIMERGY TX300 S5
PRIMERGY TX300 S6
PRIMEQUEST 1000 series servers
PRIMEQUEST 2000 series servers
Other PC Servers

Remarks:

  • When using servers other than PRIMERGY BX servers

    It is necessary to mount an IPMI-compatible (*1) server management unit (*2).

  • For Physical L-Servers

    The following servers cannot be used:

    • PRIMERGY TX series servers

    • PRIMERGY RX100 series servers

    • PRIMEQUEST 1000 series servers

    • PRIMEQUEST 2000 series servers

    • Other PC Servers

  • When using RHEL5-Xen as the server virtualization software

    Only PRIMEQUEST 1000 series servers are supported for managed servers.

  • When using the PRIMEQUEST 2000 series, the following server virtualization software is not supported:

    • VMware vSphere 4.1 or earlier

    • Citrix XenServer

    • OVM for x86 2.2

  • When using physical L-Servers for iSCSI boot

    • VIOM is required.

    • iSCSI boot using a CNA is not supported. Use a NIC other than a CNA.

  • When the destination of a physical L-Server is a PRIMERGY BX920 series or BX922 series server and LAN switch blades (PY-SWB104(PG-SW109) or PY-SWB101(PG-SW201)) are mounted in CB1 and CB2, only NIC1 and NIC2 can be used.

  • When a PRIMERGY BX920 S3, BX920 S4, BX2560 M1, BX924 S3, or BX924 S4 is used, UMC mode is not available on the CNA.

  • Rack mount servers supported by VIOM are the following:

    • PRIMERGY RX200 S7 or later

    • PRIMERGY RX300 S7 or later

    • PRIMERGY RX2520 M1 or later

    • PRIMERGY RX2530 M1 or later

    • PRIMERGY RX2540 M1 or later

Hardware:

SPARC Enterprise M series

Remarks:

  • Virtual L-Servers can be deployed.
    For details, refer to "E.6 Solaris Zones" and "C.7 Solaris Zones" in the "Setup Guide CE".

  • Configured virtual machines can be used by associating them with virtual L-Servers.

  • Configured physical servers can be used by associating them with physical L-Servers.
    For details, refer to "Chapter 18 Linking L-Servers with Configured Physical Servers or Virtual Machines" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • Servers can be managed.
    For details, refer to the "Design Guide VE".

  • To use power consumption monitoring, the XCP version must be 1090 or later.

Hardware:

SPARC Enterprise T5120
SPARC Enterprise T5140
SPARC Enterprise T5220
SPARC Enterprise T5240
SPARC Enterprise T5440

Remarks:

  • Configured physical servers can be used by associating them with physical L-Servers.
    For details, refer to "Chapter 18 Linking L-Servers with Configured Physical Servers or Virtual Machines" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • Servers can be managed.
    For details, refer to the "Design Guide VE".

  • The ILOM version must be 3.0 or later.

Hardware:

SPARC M10

Remarks:

  • Virtual L-Servers can be deployed.

    For details, refer to "E.7 OVM for SPARC" and "C.8 OVM for SPARC" in the "Setup Guide CE".

  • Configured virtual machines can be used by associating them with L-Servers.

  • Configured physical servers can be used by associating them with L-Servers.

    For details, refer to "Chapter 18 Linking L-Servers with Configured Physical Servers or Virtual Machines" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • Servers can be managed.

    For details, refer to the "Design Guide VE".

Software: HBA address rename setup service

Hardware:

Personal computers (*3)
PRIMERGY RX series servers
PRIMERGY BX series servers
PRIMERGY TX series servers
PRIMEQUEST
Other PC Servers

Remarks: -

*1: Supports IPMI 2.0.
*2: This usually indicates a Baseboard Management Controller (hereinafter BMC). For PRIMERGY, it is called an integrated Remote Management Controller (hereinafter iRMC).
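
For servers other than PRIMERGY BX, the IPMI-compatible management unit noted in *1 and *2 can be checked for reachability before the server is registered. The following is a minimal sketch, not part of Resource Orchestrator: it assumes the open-source ipmitool command is installed on the checking host, and the address and credentials are placeholders. Because the lanplus interface uses the IPMI v2.0 RMCP+ protocol, a successful session is a practical indication of IPMI 2.0 support.

  # Pre-check that a BMC/iRMC answers over IPMI 2.0 (lanplus = RMCP+).
  # Requires the ipmitool CLI; host, user, and password are placeholders.
  import subprocess

  def bmc_supports_ipmi2(host: str, user: str, password: str) -> bool:
      """Return True if 'ipmitool mc info' succeeds over the lanplus interface."""
      try:
          result = subprocess.run(
              ["ipmitool", "-I", "lanplus", "-H", host,
               "-U", user, "-P", password, "mc", "info"],
              capture_output=True, text=True, timeout=30,
          )
      except (FileNotFoundError, subprocess.TimeoutExpired):
          return False
      return result.returncode == 0

  if __name__ == "__main__":
      ok = bmc_supports_ipmi2("192.0.2.10", "admin", "password")
      print("IPMI 2.0 session OK" if ok else "IPMI 2.0 session failed")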

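The manager memory consideration noted in Table 2.71 is simple addition, but it is easy to undercount. A minimal sizing sketch follows; every figure in it is a placeholder for illustration, not a value from this guide. Take the Resource Orchestrator figure from "2.4.2.8 Memory Size" and the figures for the required software from each product's documentation.

  # Admin-server memory sizing: a worked-arithmetic sketch.
  # All figures are PLACEHOLDERS; substitute the documented values.
  requirements_mb = {
      "Resource Orchestrator manager": 6144,                # placeholder
      "required software (e.g. management agents)": 2048,   # placeholder
      "operating system and headroom": 2048,                # placeholder
  }

  total_mb = sum(requirements_mb.values())
  for component, mb in requirements_mb.items():
      print(f"{component:<45} {mb:>6} MB")
  print(f"{'Plan for at least':<45} {total_mb:>6} MB ({total_mb / 1024:.1f} GB)")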

Functions Available for Agents

The functions that agents can use differ depending on the hardware being used. For the list of functions by OS, refer to "Table 2.2 Functions Available for Each Target Operating System" in "2.2 Function Overview" in the "Design Guide VE".

Table 2.72 Function Availability List

                                             | PRIMERGY Series Servers  |            | SPARC M10/       |
  Function                                   | Blade     | Rack Mount/  | PRIMEQUEST | SPARC Enterprise | Other PC
                                             | Models    | Tower Models |            |                  | Servers
  -------------------------------------------+-----------+--------------+------------+------------------+-----------
  Status monitoring                          | Yes       | Yes          | Yes        | Yes              | Yes (*1)
  Power operations                           | Yes       | Yes          | Yes        | Yes              | Yes
  Backup and restore (*2, *3)                | Yes       | Yes          | Yes (*14)  | No               | Yes (*15)
  Hardware maintenance                       | Yes       | Yes (*4)     | Yes (*4)   | No               | Yes (*15)
  Maintenance LED                            | Yes       | No           | No         | No               | No
  External management software               | Yes       | Yes          | Yes        | Yes              | No
  Server switchover:                         |           |              |            |                  |
    Backup and restore method (*3)           | Yes       | Yes          | No         | No               | Yes (*15)
    HBA address rename method (*3, *5)       | Yes       | Yes          | No         | No               | No
    VIOM server profile exchange method (*6) | Yes       | Yes (*7)     | No         | No               | No
    Storage affinity switchover method       | No        | No           | No         | Yes (*8)         | No
  Cloning (*2, *3, *9)                       | Yes       | Yes          | Yes (*10)  | No               | Yes (*15)
  HBA address rename (*3, *5)                | Yes       | Yes          | No         | No               | No
  VIOM coordination (*6)                     | Yes       | Yes (*7)     | No         | No               | No
  VLAN settings                              | Yes       | No           | No         | No               | No
  Pre-configuration                          | Yes       | Yes          | Yes        | Yes              | Yes
  Power consumption monitoring               | Yes (*11) | Yes (*12)    | No         | Yes (*13)        | No
Yes: Use possible.
No: Use not possible.
*1: Server monitoring in coordination with server management software is not possible.
*2: When agents are operating on iSCSI disks, image operations are not possible for configurations in which the iSCSI disk is combined with any other disk.
Perform image operations using a single iSCSI disk configuration.

*3: When using backup and restore, cloning, or HBA address rename, the NIC (or LAN expansion board) must support PXE boot.
*4: Maintenance LEDs cannot be operated.
*5: When using HBA address rename, the mounted HBA must be compatible with HBA address rename. Only configurations in which up to two HBA ports are mounted on a single managed server are supported.
*6: ServerView Virtual-IO Manager is required.
*7: VIOM coordination is available only when using rack mount servers that are supported by VIOM.
*8: In the following cases, only configurations in which up to eight HBA ports are mounted on a single managed server are supported.

*9: Cloning of Linux agents operating on iSCSI disks is not possible.
*10: Only PRIMEQUEST 1000 series servers are supported. Cloning is only available when Legacy boot is specified for the boot option. When UEFI is specified, cloning is unavailable.
*11: BX900 S1 chassis and BX920 S1, BX920 S2, BX920 S3, BX920 S4, BX2560 M1, BX922 S2, BX924 S2, BX924 S3, BX924 S4, and BX960 S1 servers are supported.
*12: Only rack mount models (RX200/300/600/2530/2540) are supported.
*13: Only SPARC Enterprise M3000 and SPARC M10-1/M10-4/M10-4S are supported.
*14: For the PRIMEQUEST 2000 series, backup and restore is only possible when using Windows managers. PXE boot is only supported by on-board LAN NICs.
*15: When using this function, contact Fujitsu technical staff.
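
When scripting environment pre-checks, the availability matrix above can be encoded as a plain lookup table. The sketch below transcribes a few rows of Table 2.72 for illustration (footnote markers are kept as text); the function and hardware keys are ad-hoc names, not an interface of Resource Orchestrator.

  # A few rows of Table 2.72 as a lookup table (illustration only).
  HARDWARE = ("blade", "rack_tower", "primequest", "sparc", "other_pc")

  AVAILABILITY = {
      #                                blade        rack/tower   PRIMEQUEST   SPARC        other PC
      "status monitoring":            ("Yes",       "Yes",       "Yes",       "Yes",       "Yes (*1)"),
      "backup and restore":           ("Yes",       "Yes",       "Yes (*14)", "No",        "Yes (*15)"),
      "cloning":                      ("Yes",       "Yes",       "Yes (*10)", "No",        "Yes (*15)"),
      "power consumption monitoring": ("Yes (*11)", "Yes (*12)", "No",        "Yes (*13)", "No"),
  }

  def availability(function: str, hardware: str) -> str:
      """Look up one cell of the matrix by function name and hardware class."""
      return AVAILABILITY[function][HARDWARE.index(hardware)]

  print(availability("cloning", "primequest"))  # -> "Yes (*10)": Legacy boot only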


Required Hardware for Admin Clients

The following hardware is required for admin clients:

Table 2.73 Required Hardware for Admin Clients

  Software | Hardware                    | Remarks
  ---------+-----------------------------+---------
  Client   | Personal computers          | -
           | PRIMERGY RX series servers  |
           | PRIMERGY BX series servers  |
           | PRIMERGY TX series servers  |
           | Other PC Servers            |


Hardware Conditions of Storage that can be Connected to Physical L-Servers

When connecting storage units to the physical servers of L-Servers, the following storage units can be used:

Table 2.74 Storage Units that can Be Connected with L-Servers on Physical Servers

Hardware:

ETERNUS DX8000 series
ETERNUS DX8000 S2 series
ETERNUS DX600 S3
ETERNUS DX500 S3
ETERNUS DX400 series
ETERNUS DX400 S2 series
ETERNUS DX200 S3
ETERNUS DX200F
ETERNUS DX100 S3
ETERNUS DX90 S2
ETERNUS DX90
ETERNUS DX80 S2
ETERNUS DX80
ETERNUS DX60 S2
ETERNUS DX60
ETERNUS8000 series

Remarks:

Thin provisioning is available for the following storage units:

  • ETERNUS DX8000 series

  • ETERNUS DX8000 S2 series

  • ETERNUS DX600 S3

  • ETERNUS DX500 S3

  • ETERNUS DX400 series

  • ETERNUS DX400 S2 series

  • ETERNUS DX200 S3

  • ETERNUS DX200F

  • ETERNUS DX100 S3

  • ETERNUS DX90 S2

  • ETERNUS DX80 S2

For the following storage units, when disk resources are created with Resource Orchestrator, an alias based on the disk resource name is set for the LUN (where the model supports aliases).

  • ETERNUS DX8000 S2 series

  • ETERNUS DX600 S3

  • ETERNUS DX500 S3

  • ETERNUS DX400 S2 series

  • ETERNUS DX200 S3

  • ETERNUS DX200F

  • ETERNUS DX100 S3

  • ETERNUS DX90 S2

  • ETERNUS DX80 S2

  • ETERNUS DX60 S2

On ETERNUS storage units other than the above, the alias is left unchanged, that is, it keeps the default value set on the ETERNUS.

For the following storage units, if an alias has been set for a LUN, the alias name is displayed.

  • ETERNUS DX8000 series

  • ETERNUS DX8000 S2 series

  • ETERNUS DX600 S3

  • ETERNUS DX500 S3

  • ETERNUS DX400 series

  • ETERNUS DX400 S2 series

  • ETERNUS DX200 S3

  • ETERNUS DX200F

  • ETERNUS DX100 S3

  • ETERNUS DX90 S2

  • ETERNUS DX90

  • ETERNUS DX80 S2

  • ETERNUS DX80

  • ETERNUS DX60 S2

  • ETERNUS DX60

Dynamic LUN mirroring can be used with Resource Orchestrator on the following storage units.

  • ETERNUS DX8000 S2 series

  • ETERNUS DX600 S3

  • ETERNUS DX500 S3

  • ETERNUS DX410 S2

  • ETERNUS DX440 S2

  • ETERNUS DX200 S3

  • ETERNUS DX200F

  • ETERNUS DX90 S2

Automatic Storage Layering can be used with Resource Orchestrator on storage units covered by the following options:

  • ETERNUS SF Storage Cruiser V16 Optimization Option

  • ETERNUS SF Storage Cruiser V15 Optimization Option

Register the following devices with the ETERNUS SF Storage Cruiser manager using an IPv4 address (a pre-check is sketched after Table 2.74):

  • ETERNUS DX8000 S2 series

  • ETERNUS DX600 S3

  • ETERNUS DX500 S3

  • ETERNUS DX400 S2 series

  • ETERNUS DX200 S3

  • ETERNUS DX200F

  • ETERNUS DX100 S3

  • ETERNUS DX90 S2

  • ETERNUS DX80 S2

Hardware:

ETERNUS4000 series

Remarks:

Model 80 and Model 100 are not supported.
Thin provisioning is not available for this series.

Hardware:

ETERNUS2000 series

Remarks:

When an alias name is configured for a LUN, the alias name is displayed.

Hardware:

NetApp FAS6000 series
NetApp FAS3100 series
NetApp FAS2000 series
NetApp V6000 series
NetApp V3100 series

Remarks:

Data ONTAP 7.3.3 or later, or Data ONTAP 8.0.1 7-Mode, is required.

Hardware:

EMC CLARiiON CX4-120
EMC CLARiiON CX4-240
EMC CLARiiON CX4-480
EMC CLARiiON CX4-960
EMC CLARiiON CX3-10
EMC CLARiiON CX3-20
EMC CLARiiON CX3-40
EMC CLARiiON CX3-80
EMC VNX5100
EMC VNX5300
EMC VNX5500
EMC VNX5700
EMC VNX7500

Remarks:

Navisphere Manager and Access Logix must be installed on the SP (storage processor).

Hardware:

EMC Symmetrix DMX-3
EMC Symmetrix DMX-4
EMC Symmetrix VMAX

Remarks:

VolumeLogix must be installed on the SP.

Hardware:

Storage Server on which FalconStor NSS operates

Remarks:

The model must be one on which FalconStor guarantees the operation of FalconStor NSS.
Install FalconStor NSS on the Storage Server.
The following versions of FalconStor NSS are supported:

  • V7.00

Hardware:

Storage units connected to a Storage Server on which FalconStor NSS operates

Remarks:

The model must be one on which FalconStor guarantees the operation of FalconStor NSS.

Hardware:

Fibre Channel switches connected to a Storage Server on which FalconStor NSS operates

Remarks:

The model must be one on which FalconStor guarantees the operation of FalconStor NSS.

When using storage management software, do not change or delete the content set for storage units by Resource Orchestrator.
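
As noted in the remarks above, several ETERNUS models must be registered with the ETERNUS SF Storage Cruiser manager using an IPv4 address. The following is a minimal pre-check with the Python standard library, usable before registering a device; the sample addresses are placeholders.

  # Reject IPv6 literals and hostnames before registering a device that
  # ETERNUS SF Storage Cruiser requires to be registered by IPv4 address.
  import ipaddress

  def is_ipv4(address: str) -> bool:
      """True only for a literal IPv4 address (not IPv6, not a hostname)."""
      try:
          return isinstance(ipaddress.ip_address(address), ipaddress.IPv4Address)
      except ValueError:
          return False

  for candidate in ("192.0.2.50", "2001:db8::50", "dx90-s2.example.com"):
      verdict = "usable for registration" if is_ipv4(candidate) else "use an IPv4 address instead"
      print(f"{candidate}: {verdict}")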

When connecting storage units to the physical servers of L-Servers, the following Fibre Channel switches can be used:

Table 2.75 Fibre Channel Switches which can be Used when Connecting Storage Units with L-Servers on Physical Servers

  Hardware                                    | Remarks
  --------------------------------------------+-------------------------------------------------
  Brocade series                              | -
  ETERNUS SN200 series                        |
  --------------------------------------------+-------------------------------------------------
  PRIMERGY BX600 Fibre Channel switch blades  | Connect the switch blades to connection blades
                                              | NET3 and NET4.
  --------------------------------------------+-------------------------------------------------
  PRIMERGY BX900 Fibre Channel switch blades  | Connect the switch blades to connection blades
                                              | CB5 and CB6.
  --------------------------------------------+-------------------------------------------------
  PRIMERGY BX400 Fibre Channel switch blades  | Connect the switch blades to connection blades
                                              | CB3 and CB4.

Hardware Conditions of Storage that can be Connected to Virtual L-Servers

When connecting storage units to virtual L-Servers, the following storage units can be used:

[VMware]
Refer to "Supported Storage Configurations" in "E.1.3 Storage Preparations".

[Hyper-V]
Refer to "Supported Storage Configurations" in "E.2.3 Storage Preparations".

[Xen]
Refer to "Supported Storage Configurations" in "E.3.3 Storage Preparations".

[OVM for x86 2.2]
Refer to "Supported Storage Configurations" in "E.4.3 Storage Preparations".

[KVM]
Refer to "Supported Storage Configurations" in "E.5.3 Storage Preparations".

[Solaris Zones]
Refer to "Supported Storage Configurations" in "E.6.3 Storage Preparations".

[OVM for SPARC]
Refer to "Supported Storage Configurations" in "E.7.3 Storage Preparations".

[Citrix Xen]
Refer to "Supported Storage Configurations" in "E.8.3 Preparations for Storage".

[OVM for x86 3.2]
Refer to "Supported Storage Configurations" in "E.9.3 Preparations for Storage".


Network Hardware Conditions When Simplifying Network Settings

Refer to the relevant sections of this guide for the LAN switch blades that are available when simplifying network settings. The network devices supported when simplifying network settings are listed below:

Table 2.76 Supported Network Devices

  Type              | Hardware                                 | Version          | Status     | Automatic     | Config. File
                    |                                          |                  | Monitoring | Configuration | Management
  ------------------+------------------------------------------+------------------+------------+---------------+-------------
  L2 switches (*1)  | Fujitsu SR-X 300 series                  | V01 or later     | Yes        | Yes           | Yes
                    | Fujitsu SR-X 500 series                  |                  |            |               |
                    |                                          |                  |            |               |
                    | Cisco Catalyst 2900 series               | IOS 12.2         | Yes        | Yes           | Yes
                    | Cisco Catalyst 2918 series               | or later         |            |               |
                    | Cisco Catalyst 2928 series               |                  |            |               |
                    | Cisco Catalyst 2940 series               |                  |            |               |
                    | Cisco Catalyst 2950 series               |                  |            |               |
                    | Cisco Catalyst 2955 series               |                  |            |               |
                    | Cisco Catalyst 2960 series               |                  |            |               |
                    | Cisco Catalyst 2970 series               |                  |            |               |
                    | Cisco Catalyst 2975 series               |                  |            |               |
                    | Cisco Catalyst 3500 series               |                  |            |               |
                    | Cisco Catalyst 3550 series               |                  |            |               |
                    | Cisco Catalyst 3560 series               |                  |            |               |
                    | Cisco Catalyst 3750 series               |                  |            |               |
                    |                                          |                  |            |               |
                    | Cisco Nexus 5000 series (*3)             | NX-OS V5.2       | Yes        | Yes           | Yes
                    |                                          |                  |            |               |
                    | Brocade VDX 6710 series                  | NOS 2.0 or later | Yes        | Yes (*4)      | No
                    | Brocade VDX 6720 series                  |                  |            |               |
                    | Brocade VDX 6730 series                  |                  |            |               |
                    | Brocade VDX 6740                         |                  |            |               |
                    | Brocade VDX 6740T                        |                  |            |               |
  ------------------+------------------------------------------+------------------+------------+---------------+-------------
  Ethernet Fabric   | Fujitsu PRIMERGY Converged Fabric        | V01.00 or later  | Yes        | Yes           | No
                    | Switch Blade (10 Gbps 18/8+2) (*5)       |                  |            |               |
                    |                                          |                  |            |               |
                    | Fujitsu Converged Fabric Switch (*5)     | V01.00 or later  | Yes        | Yes           | No
  ------------------+------------------------------------------+------------------+------------+---------------+-------------
  Firewall (*2)     | Fujitsu IPCOM EX IN series               | E20L10 or later  | Yes        | Yes           | Yes
                    | Fujitsu IPCOM EX SC series               |                  |            |               |
                    |                                          |                  |            |               |
                    | Fujitsu IPCOM VA LS series               | E20L21NF0301     | Yes        | Yes (*6)      | Yes
                    | Fujitsu IPCOM VA SC series               | or later         |            |               |
                    |                                          |                  |            |               |
                    | Fujitsu NS Appliance (*7)                | -                | Yes        | Yes           | Yes
                    |                                          |                  |            |               |
                    | Cisco ASA 5500 series (*8)               | ASA Software 8.3 | Yes        | Yes           | Yes
                    |                                          | or later         |            |               |
  ------------------+------------------------------------------+------------------+------------+---------------+-------------
  Server load       | Fujitsu IPCOM EX IN series               | E20L10 or later  | Yes        | Yes (*9)      | Yes
  balancer (*2)     | Fujitsu IPCOM EX LB series               |                  |            |               |
                    |                                          |                  |            |               |
                    | Fujitsu IPCOM VA LS series               | E20L21NF0301     | Yes        | Yes (*10)     | Yes
                    | Fujitsu IPCOM VA LB series               | or later         |            |               |
                    |                                          |                  |            |               |
                    | Fujitsu NS Appliance (*7)                | -                | Yes        | Yes           | Yes
                    |                                          |                  |            |               |
                    | F5 Networks BIG-IP Local Traffic         | BIG-IP V11.2     | Yes        | Yes           | Yes
                    | Manager series                           |                  |            |               |
  ------------------+------------------------------------------+------------------+------------+---------------+-------------
  Management host   | Fujitsu IPCOM VX series                  | E10L11 or later  | Yes        | No            | No
Yes: Use possible.
No: Use not possible.

*1: L2 switches are essential in the following cases.

*2: Necessary when placing a firewall or a server load balancer on an L-Platform.
*3: Nexus 2000 series switches (except the Nexus B22 Blade Fabric Extender) that are connected to a Nexus 5000 series switch through a fabric interface are managed as part of the Nexus 5000 series (as modules).
*4: Sample scripts for automatic configuration and operation are not provided. It is necessary to create the rulesets for configuring definitions.
*5: Network mode and host mode are supported as the operation mode.
*6: Sample scripts are supported by E20L30NF0201 or later. Sample scripts for automatic configuration are not available for the IPCOM VA SC series. It is necessary to create the rulesets for configuring definitions.
*7: This is not a hardware product but a software appliance.
For details on the hardware environment on which this software appliance can operate, refer to "1.4 Hardware Environment" in the "NS Option Instruction".
*8: Cisco ASA5505 is not supported.
*9: Sample scripts for automatic configuration and operation are not provided for the IPCOM EX LB series. It is necessary to create the rulesets for configuring definitions.
*10: Sample scripts are supported by E20L30NF0201 or later. Sample scripts for automatic configuration and operation are not available for the IPCOM VA LB series. It is necessary to create the rulesets for configuring definitions.


In addition, an L3 switch is necessary when using a separate admin LAN network for each tenant.


Hardware Conditions of Power Monitoring Devices

Table 2.77 Supported Power Monitoring Devices

  Hardware                | Remarks
  ------------------------+-------------------------------------------------------------
  Symmetra RM 4000VA      | The firmware version of the network management card must be
  (PG-R1SY4K/PG-R1SY4K2)  | v2.5.4, or v3.0 or later.
  ------------------------+-------------------------------------------------------------
  Smart-UPS RT 10000      | -
  (PY-UPAR0K/PG-R1SR10K)  |
  ------------------------+-------------------------------------------------------------
  Smart-UPS RT 5000       | -
  (PY-UPAC5K)             |