ServerView Resource Orchestrator Cloud Edition V3.1.0 Design Guide

2.5 Hardware Environment

The hardware conditions described in the tables below must be met when using Resource Orchestrator.

Required Hardware Conditions for Managers and Agents

Table 2.64 Required Hardware

Software: Manager

Hardware:
PRIMERGY BX series servers
PRIMERGY RX series servers
PRIMERGY TX series servers

Remarks:
The CPU must be a multi-core CPU.
For details on the amount of memory necessary for Resource Orchestrator, refer to "2.4.2.6 Memory Size". Consider the amount of memory necessary for the required software as well as the amount necessary for Resource Orchestrator.

Software: Agent

Hardware:
PRIMERGY BX620 S4
PRIMERGY BX620 S5
PRIMERGY BX620 S6
PRIMERGY BX920 S1
PRIMERGY BX920 S2
PRIMERGY BX920 S3
PRIMERGY BX922 S2
PRIMERGY BX924 S2
PRIMERGY BX924 S3
PRIMERGY BX960 S1
PRIMERGY RX100 S5
PRIMERGY RX100 S6
PRIMERGY RX200 S4
PRIMERGY RX200 S5
PRIMERGY RX200 S6
PRIMERGY RX200 S7
PRIMERGY RX300 S4
PRIMERGY RX300 S5
PRIMERGY RX300 S6
PRIMERGY RX300 S7
PRIMERGY RX600 S4
PRIMERGY RX600 S5
PRIMERGY RX900 S1
PRIMERGY TX150 S6
PRIMERGY TX150 S7
PRIMERGY TX200 S5
PRIMERGY TX200 S6
PRIMERGY TX300 S4
PRIMERGY TX300 S5
PRIMERGY TX300 S6
PRIMEQUEST 1000 series servers
Other PC servers

Remarks:

  • When using servers other than PRIMERGY BX servers, an IPMI-compatible (*1) server management unit (*2) must be mounted (see the connectivity sketch after this table).

  • The following servers cannot be used for physical L-Servers:

    • PRIMERGY TX series servers

    • PRIMERGY RX100 series servers

    • PRIMEQUEST 1000 series servers

    • Other PC servers

  • When using RHEL5-Xen as the server virtualization software, only PRIMEQUEST 1000 series servers are supported as managed servers.

  • When using physical L-Servers for iSCSI boot, PRIMERGY BX900 and VIOM are required.

  • When the destination of a physical L-Server is a PRIMERGY BX920 series or BX922 series server and LAN switch blades (PY-SWB104 (PG-SW109) or PY-SWB101 (PG-SW201)) are mounted in CB1 and CB2, only NIC1 and NIC2 can be used.

  • When PRIMERGY BX920 S3 or BX924 S3 servers are used, Resource Orchestrator can only use Function 0 of each port.

Software: Agent

Hardware:
SPARC Enterprise M series

Remarks:

  • Virtual L-Servers can be deployed.
    For details, refer to "E.6 Solaris Containers" and "C.7 Solaris Containers" in the "Setup Guide CE".

  • Configured virtual machines can be used by associating them with virtual L-Servers.

  • Configured physical servers can be used by associating them with physical L-Servers.
    For details, refer to "Chapter 18 Linking L-Servers with Configured Physical Servers or Virtual Machines" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • Servers can be managed.
    For details, refer to the manuals for Virtual Edition.

  • To use power consumption monitoring, the XCP version must be 1090 or later.

Software: Agent

Hardware:
SPARC Enterprise T5120
SPARC Enterprise T5140
SPARC Enterprise T5220
SPARC Enterprise T5240
SPARC Enterprise T5440

Remarks:

  • Configured physical servers can be used by associating them with physical L-Servers.
    For details, refer to "Chapter 18 Linking L-Servers with Configured Physical Servers or Virtual Machines" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

  • Servers can be managed.
    For details, refer to the manuals for Virtual Edition.

  • The ILOM version must be 3.0 or later.

Software: HBA address rename setup service

Hardware:
Personal computers (*3)
PRIMERGY RX series servers
PRIMERGY BX series servers
PRIMERGY TX series servers
PRIMEQUEST
Other PC servers

Remarks: -

*1: Supports IPMI 2.0.
*2: This usually indicates a Baseboard Management Controller (hereinafter BMC). For PRIMERGY, it is called an integrated Remote Management Controller (hereinafter iRMC).
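
The remarks above require an IPMI 2.0-capable management unit on non-blade servers. As a quick pre-registration check, the following minimal Python sketch (not part of Resource Orchestrator) shells out to the standard ipmitool utility, which is assumed to be installed; the address and credentials are placeholders. Because ipmitool's "lanplus" interface uses the IPMI v2.0 RMCP+ protocol, a successful query implies that the BMC or iRMC supports IPMI 2.0.

  import subprocess

  def supports_ipmi_v2(host: str, user: str, password: str) -> bool:
      """Return True if the management controller answers over IPMI v2.0.

      The 'lanplus' interface forces the RMCP+ (IPMI 2.0) protocol, so a
      successful 'mc info' call implies IPMI 2.0 support.
      """
      try:
          result = subprocess.run(
              ["ipmitool", "-I", "lanplus", "-H", host,
               "-U", user, "-P", password, "mc", "info"],
              capture_output=True, text=True, timeout=30,
          )
      except (OSError, subprocess.TimeoutExpired):
          return False
      return result.returncode == 0

  # Placeholder address and credentials; replace with your iRMC/BMC values.
  print(supports_ipmi_v2("192.0.2.10", "admin", "admin"))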

Note

The functions that agents can use differ depending on the hardware being used.

Table 2.65 Function Availability List

(Blade and Rack/Tower denote PRIMERGY blade models and PRIMERGY rack mount/tower models; SPARC Ent. denotes SPARC Enterprise; Other PC denotes other PC servers.)

Function                              | Blade    | Rack/Tower | PRIMEQUEST | SPARC Ent. | Other PC
--------------------------------------+----------+------------+------------+------------+----------
Status monitoring                     | Yes      | Yes        | Yes        | Yes        | Yes (*1)
Power operations                      | Yes      | Yes        | Yes        | Yes        | Yes
Backup and restore (*2)               | Yes      | Yes        | Yes        | No         | Yes
Hardware maintenance                  | Yes      | Yes (*3)   | Yes (*3)   | No         | Yes (*3)
Maintenance LED                       | Yes      | No         | No         | No         | No
External management software          | Yes      | Yes        | Yes        | Yes        | No
Server switchover:                    |          |            |            |            |
  Backup and restore method           | Yes      | Yes        | No         | No         | Yes
  HBA address rename method (*4)      | Yes      | Yes        | No         | No         | No
  VIOM server profile exchange method | Yes (*5) | No         | No         | No         | No
  Storage affinity switchover method  | No       | No         | No         | Yes (*6)   | No
Cloning (*2, *7)                      | Yes      | Yes        | Yes (*11)  | No         | Yes
HBA address rename (*4)               | Yes      | Yes        | No         | No         | No
VIOM coordination                     | Yes (*5) | No         | No         | No         | No
VLAN settings                         | Yes      | No         | No         | No         | No
Pre-configuration                     | Yes      | Yes        | Yes        | Yes        | Yes
Power consumption monitoring          | Yes (*8) | Yes (*9)   | No         | Yes (*10)  | No

Yes: Use possible.
No: Use not possible.
*1: Server monitoring in coordination with server management software is not possible.
*2: When agents are operating on iSCSI disks, image operations are not possible for the following disk configurations.
Perform operation using a single iSCSI disk configuration.

  • iSCSI disk + internal disk

  • iSCSI disk + SAN disk

*3: Maintenance LEDs cannot be operated.
*4: When using HBA address rename, the mounted HBA must be compatible with HBA address rename.
*5: ServerView Virtual-IO Manager is required.
*6: Only M3000 servers, SPARC Enterprise Partition Models and T5120/T5140/T5220/T5240/T5440 servers with undivided areas are supported. SPARC Enterprise Partition Models with divided areas are not supported.
*7: Cloning of Linux agents operating on iSCSI disks is not possible.
*8: Only BX900 S1 chassis and BX920 S1, BX920 S2, BX922 S2, BX924 S2, and BX960 S1 servers are supported.
*9: Only rack mount models (RX200/300/600) are supported.
*10: Only M3000 servers are supported.
*11: Cloning is available only when Legacy boot is specified for the boot option. When UEFI is specified, cloning is unavailable.
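
Footnote *11 ties cloning availability on PRIMEQUEST to the boot option: possible under Legacy boot, not under UEFI. As a hedged illustration only (not a Resource Orchestrator interface), the following Python sketch shows how a Linux agent can report which mode it booted in, relying on the fact that the Linux kernel exposes /sys/firmware/efi only when it was started by UEFI firmware.

  from pathlib import Path

  def boot_mode() -> str:
      """Report the firmware boot mode of this Linux host.

      /sys/firmware/efi exists only when the kernel was booted via
      UEFI; its absence implies a legacy BIOS boot.
      """
      return "UEFI" if Path("/sys/firmware/efi").exists() else "Legacy"

  print(boot_mode())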


Required Hardware for Admin Clients

The following hardware is required for admin clients:

Table 2.66 Required Hardware for Admin Clients

Software: Client

Hardware:
Personal computers
PRIMERGY RX series servers
PRIMERGY BX series servers
PRIMERGY TX series servers
Other PC servers

Remarks: -


Hardware Conditions of Storage that can be Connected to Physical L-Servers

When connecting storage units to the physical servers of L-Servers, the following storage units can be used:

Table 2.67 Storage Units that can be Connected with L-Servers on Physical Servers

Hardware:
ETERNUS DX8000 series
ETERNUS DX8000 S2 series
ETERNUS DX400 series
ETERNUS DX400 S2 series
ETERNUS DX90 S2
ETERNUS DX90
ETERNUS DX80 S2
ETERNUS DX80
ETERNUS DX60 S2
ETERNUS DX60
ETERNUS8000 series

Remarks:
Thin provisioning is available for the following storage units:

  • ETERNUS DX8000 series

  • ETERNUS DX8000 S2 series

  • ETERNUS DX400 series

  • ETERNUS DX400 S2 series

  • ETERNUS DX90 S2

  • ETERNUS DX80 S2

For the following storage units, when disk resources are created with Resource Orchestrator, the LUN alias is set (where possible) based on the disk resource name (a purely illustrative naming sketch follows this table):

  • ETERNUS DX8000 S2 series

  • ETERNUS DX400 S2 series

  • ETERNUS DX90 S2

  • ETERNUS DX80 S2

  • ETERNUS DX60 S2

On ETERNUS units other than the above, the alias is set as before, i.e. to the default value defined on the ETERNUS unit.

For the following storage units, if an alias has been set for a LUN, the alias name is displayed:

  • ETERNUS DX8000 series

  • ETERNUS DX8000 S2 series

  • ETERNUS DX400 series

  • ETERNUS DX400 S2 series

  • ETERNUS DX90 S2

  • ETERNUS DX90

  • ETERNUS DX80 S2

  • ETERNUS DX80

  • ETERNUS DX60 S2

  • ETERNUS DX60

Dynamic LUN mirroring can be used with Resource Orchestrator on the following storage units:

  • ETERNUS DX8000 S2 series

  • ETERNUS DX410 S2

  • ETERNUS DX440 S2

  • ETERNUS DX90 S2

Automatic Storage Layering can be used with Resource Orchestrator on storage units that are targets of the following option:

  • ETERNUS SF Storage Cruiser V15 Optimization Option

Hardware:
ETERNUS4000 series

Remarks:
Model 80 and Model 100 are not supported.
Thin provisioning is not available for this series.

Hardware:
ETERNUS2000 series

Remarks:
When an alias has been configured for a LUN, the alias name is displayed.

Hardware:
NetApp FAS6000 series
NetApp FAS3100 series
NetApp FAS2000 series
NetApp V6000 series
NetApp V3100 series

Remarks:
Data ONTAP 7.3.3 or later
Data ONTAP 8.0.1 7-Mode

Hardware:
EMC CLARiiON CX4-120
EMC CLARiiON CX4-240
EMC CLARiiON CX4-480
EMC CLARiiON CX4-960
EMC CLARiiON CX3-10
EMC CLARiiON CX3-20
EMC CLARiiON CX3-40
EMC CLARiiON CX3-80

Remarks:
Navisphere Manager and Access Logix must be installed on the SP (storage processor).

Hardware:
EMC Symmetrix DMX-3
EMC Symmetrix DMX-4
EMC Symmetrix VMAX

Remarks:
VolumeLogix must be installed on the SP.

When using storage management software, do not change or delete the settings configured for storage units by Resource Orchestrator.
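
To make the alias-naming behavior described in Table 2.67 concrete, here is a purely illustrative Python sketch of deriving a LUN alias from a disk resource name. The 16-character limit and the permitted character set are assumptions chosen for the example, not values taken from this guide; consult the ETERNUS documentation for the actual alias constraints.

  import re

  # Assumed constraints, for illustration only; the real ETERNUS alias
  # length limit and character rules may differ.
  ALIAS_MAX_LEN = 16
  DISALLOWED = re.compile(r"[^A-Za-z0-9_-]")

  def lun_alias(disk_resource_name: str) -> str:
      """Derive a LUN alias from a disk resource name (hypothetical helper).

      Replaces characters outside the assumed allowed set with '_' and
      truncates to the assumed maximum alias length.
      """
      return DISALLOWED.sub("_", disk_resource_name)[:ALIAS_MAX_LEN]

  print(lun_alias("tenantA-disk-0001"))  # -> "tenantA-disk-000"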

When connecting storage units to the physical servers of L-Servers, the following Fibre Channel switches can be used:

Table 2.68 Fibre Channel Switches which can be used when Connecting ETERNUS Storage, NetApp Storage, EMC CLARiiON Storage, and EMC Symmetrix DMX Storage with L-Servers on Physical Servers

Hardware:
Brocade series
ETERNUS SN200 series

Remarks: -

Hardware:
PRIMERGY BX600 Fibre Channel switch blades

Remarks:
Connect Fibre Channel switch blades to connection blades NET3 and NET4.

Hardware:
PRIMERGY BX900 Fibre Channel switch blades

Remarks:
Connect Fibre Channel switch blades to connection blades CB5 and CB6.

Hardware:
PRIMERGY BX400 Fibre Channel switch blades

Remarks:
Connect Fibre Channel switch blades to connection blades CB3 and CB4.

Hardware Conditions of Storage that can be Connected to Virtual L-Servers

When connecting storage units to virtual L-Servers, the following storage units can be used:

[VMware]
Refer to "Supported Storage Configurations" in "E.1.3 Storage Preparations".

[Hyper-V]
Refer to "Supported Storage Configurations" in "E.2.3 Storage Preparations".

[Xen]
Refer to "Supported Storage Configurations" in "E.3.3 Storage Preparations".

[Oracle VM]
Refer to "Supported Storage Configurations" in "E.4.3 Storage Preparations".

[KVM]
Refer to "Supported Storage Configurations" in "E.5.3 Storage Preparations".

[Solaris Containers]
Refer to "Supported Storage Configurations" in "E.6.3 Storage Preparations".


Network Hardware Conditions When Using Simplifying of Network Settings

Refer to the relevant sections for the LAN switch blades that are available when using simplifying of network settings. The supported network devices are listed below.

Table 2.69 Supported Network Devices

L2 switches (*1)

Hardware:
Fujitsu SR-X 300 series
Fujitsu SR-X 500 series

Version: V01 or later

Hardware:
Cisco Catalyst 2900 series
Cisco Catalyst 2918 series
Cisco Catalyst 2928 series
Cisco Catalyst 2940 series
Cisco Catalyst 2950 series
Cisco Catalyst 2955 series
Cisco Catalyst 2960 series
Cisco Catalyst 2970 series
Cisco Catalyst 2975 series
Cisco Catalyst 3500 series
Cisco Catalyst 3550 series
Cisco Catalyst 3560 series
Cisco Catalyst 3750 series

Version: IOS 12.2 or later

Hardware:
Cisco Nexus 2000 series (*3)
Cisco Nexus 5000 series (*3)

Version: NX-OS V4.1 or later

Hardware:
Brocade VDX 6710 series (*3)
Brocade VDX 6720 series (*3)
Brocade VDX 6730 series (*3)

Version: NOS 2.0 or later

Firewall (*2)

Hardware:
Fujitsu IPCOM EX IN series
Fujitsu IPCOM EX SC series

Version: E20L10 or later

Hardware:
Fujitsu NS Appliance (*4)

Version: -

Hardware:
Cisco ASA 5500 series (*5)

Version: ASA Software 8.3 or later

Server load balancer (*2)

Hardware:
Fujitsu IPCOM EX IN series
Fujitsu IPCOM EX LB series (*3)

Version: E20L10 or later

Hardware:
F5 Networks BIG-IP Local Traffic Manager series

Version: BIG-IP V11.2

*1: L2 switches are essential in the following cases.

*2: Necessary when placing a firewall or a server load balancer on an L-Platform.

*3: Network device monitoring is supported, but sample scripts for automatic configuration are not provided.

*4: This is not a hardware product but a virtual appliance.

*5: The Cisco ASA 5505 is not supported.


In addition, an L3 switch is necessary when using a separate admin LAN network for each tenant.


Hardware Conditions of Power Monitoring Devices

Table 2.70 Supported Power Monitoring Devices

Hardware:
Symmetra RM 4000VA (PG-R1SY4K/PG-R1SY4K2)

Remarks:
The firmware version of the network management card must be v2.5.4, or v3.0 or higher.

Hardware:
Smart-UPS RT 10000 (PY-UPAR0K/PG-R1SR10K)

Remarks: -

Hardware:
Smart-UPS RT 5000 (PY-UPAC5K)

Remarks: -