
E.5.4 Storage Preparations (NAS Configurations)

In NAS configurations, libvirt storage pools are used.

This section explains the preparations for such environments.

Supported libvirt Storage Pool Configurations

The supported configuration is a libvirt storage pool of the netfs type, with a shared directory on an NFS server specified as the source path.

Supported Format

qcow2 image files are supported.


Effective Utilization of Storage Using Thin Provisioning

qcow2 image files use sparse allocation.

Use this feature for thin provisioning.

Thin provisioning is technology for virtualizing storage capacities. It enables efficient utilization of storage.

Thin provisioning does not require the necessary storage capacity to be secured in advance; capacity is secured and extended according to how much is actually being used.
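
As an illustration of this sparse allocation (not a required configuration step), a qcow2 image file can be created and inspected as shown below. The file name is hypothetical, and the directory follows the source path used in the configuration example later in this section.

Example

# Create a 10 GB qcow2 image file; blocks are only allocated when data is written
qemu-img create -f qcow2 /root/rcx_nfs/sample_disk.qcow2 10G

# The virtual size is 10 GB, but the actual allocation starts out much smaller
qemu-img info /root/rcx_nfs/sample_disk.qcow2
du -h /root/rcx_nfs/sample_disk.qcow2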

In order to use NAS configurations in Resource Orchestrator, it is necessary to configure thin provisioning.

Register virtual storage resources in a storage pool that has thin provisioning attributes set, so that they are allocated to L-Servers as thin format disk resources.

libvirt storage pools cannot be registered as virtual storage resources in a storage pool that does not have thin provisioning attributes set.

Thick format disk resources cannot be created from virtual storage resources.

For how to set thin provisioning attributes for a storage pool, refer to "20.2 Creating" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".

For details on the display of storage capacity and the calculation method for the free space of storage pools, refer to "20.6 Viewing" in the "User's Guide for Infrastructure Administrators (Resource Management) CE".


Preparations for Storage Environments

In NAS configurations where libvirt storage pools are used, the directory specified as the target path of a libvirt storage pool is recognized as a virtual storage resource.

qcow2 image files are stored in that directory area. They are treated as disk resources generated automatically from the virtual storage resource.

In a storage environment using libvirt storage pools, the following pre-configuration is necessary:

Configuration on the NFS Server

The shared directory is a directory specified as a source path in libvirt storage pools.

Image files used by VM guests are placed there.

Configure the shared directory so that it can be accessed from the VM hosts that are NFS clients.

Configuration conditions are as follows:

In addition, it is recommended to configure the environment so that NFS I/O can be performed at the same time.
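
As a sketch of exporting the shared directory to the VM hosts, an illustrative /etc/exports entry is shown below. The export options and the client network are assumptions for illustration only; configure them so that they satisfy the configuration conditions and the restrictions on shared directories described in this section.

Example

# /etc/exports on the NFS server (the options and client network shown are assumptions)
/root/rcx_nfs  192.168.1.0/24(rw,no_root_squash)

# Reflect the change in the export list
exportfs -ra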


Restrictions on Shared Directories

Note

  • If the above restrictions are not followed, space management of virtual storage resources may not be possible and operations of Resource Orchestrator may fail.

  • Procedure to change the NFS version to version 3 on Red Hat Enterprise Linux 6

    Enable the RPCNFSDARGS="-N 4" line in /etc/sysconfig/nfs on the NFS server. Then, restart the NFS service on the NFS server to reflect the change, as shown in the example below.

  • For the disks mounted on the shared directories on the NFS server, ensure that those disks are not unmounted when the NFS server is restarted.
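
The following is a sketch of the NFS version change described above, assuming the standard nfs service name on Red Hat Enterprise Linux 6:

Example

# In /etc/sysconfig/nfs on the NFS server, enable the following line
# (stops serving NFS version 4 so that version 3 is used)
RPCNFSDARGS="-N 4"

# Then restart the NFS service on the NFS server
service nfs restart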


libvirt Storage Pool Configuration on KVM Hosts that are NFS Clients

On each VM host, create a libvirt storage pool with a shared directory specified as the source path.

When doing so, ensure that the settings for the following items in the configuration definition for the libvirt storage pool are the same on each VM host:

  • Name

  • uuid

  • NFS Server IP

  • Source Path

  • Target Path

Note

Do not include the following characters in the settings of the above items. If any of the following characters is included, that storage pool cannot be detected as a virtual storage resource.

  • Blank spaces (" ")

  • Double-byte characters

  • Yen marks ("\")

  • Double quotes (""")

  • Single quotes ("'")

  • Semicolons (";")

  • Parentheses ("()")

Example

An example of the configuration definition for a libvirt storage pool is given below:

<pool type='netfs'>
  <name>rcxnfs</name>(1)
  <uuid>bd0a2edc-66fb-e630-d2c9-cafc85a9cd29</uuid>(2)
  <capacity>52844822528</capacity>
  <allocation>44229459968</allocation>
  <available>8615362560</available>
  <source>
    <host name='192.168.1.1'/>(3)
    <dir path='/root/rcx_nfs'/>(4)
    <format type='nfs'/>
  </source>
  <target>
    <path>/root/rcx_lib_pool</path>(5)
    <permissions>
      <mode>0700</mode>
      <owner>-1</owner>
      <group>-1</group>
    </permissions>
  </target>
</pool>

(1) Name
(2) uuid
(3) NFS Server IP
(4) Source Path
(5) Target Path

Figure E.22 Example of Configuration Definitions for a libvirt Storage Pool

NFS_1, NFS_2: Shared directories on the NFS server

libvirt Storage Pool Configuration Shared between Multiple VM Hosts

It is recommended to define the libvirt storage pools in the same way on each VM host that has access to a given shared directory.

In the example below, the libvirt storage pools are defined in the same way on each VM host within each of the access scopes NFS_1 - NFS_4.

Figure E.23 libvirt Storage Pool Configuration Shared between Multiple VM Hosts

NFS_1 - NFS_4: Shared directories on the NFS server
L_SP1 - L_SP4: libvirt storage pools


Configuration Procedure

This section explains the recommended procedure to perform the above configuration.

  1. Create libvirt Storage Pools

    a. Create libvirt storage pools on a VM host.

      libvirt storage pools can be created using virt-manager or the virsh pool-define command.

      For details on how to use the virsh command and libvirt storage pools, refer to the following chapters in the "Virtualization Administration Guide".

      • Chapter 14. Managing guest virtual machines with virsh

      • Chapter 11. Storage concepts

      • Chapter 12. Storage pools

        URL: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/index.html

      Point

      When creating a libvirt storage pool using virt-manager, the configuration definition file is created automatically.

      When creating libvirt storage pools using the virsh pool-define command, prepare the libvirt storage pool configuration definition file beforehand.

      Store the libvirt storage pool configuration definition files in /etc/libvirt/storage/ on the VM host.
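
      A minimal command sketch for this sub-step is shown below; the configuration definition file name and pool name follow the example used later in this section and may differ in your environment.

      Example

      # Define the libvirt storage pool from the prepared configuration definition file
      virsh pool-define /etc/libvirt/storage/rcxnfs.xml

      # Start the libvirt storage pool so that its State becomes "active"
      virsh pool-start rcxnfs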

    b. Check the definitions for the following items in the configuration definition file ("<libvirt storage pool name>.xml") for the libvirt storage pool on each VM host:

      • Name

      • uuid

      • NFS Server IP

      • Source Path

      • Target Path

      Note

      Confirm that the definition for uuid exists.

      Confirm that <pool type='netfs'> is defined.

      Do not modify the configuration definition file name.

      When the file name is something other than "<libvirt storage pool name>.xml", the configuration definition file will not be loaded when libvirtd is restarted, and the definitions for the libvirt storage pool will disappear.
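
      One way to display these items on a VM host is sketched below; the configuration definition file name follows the example used in this section.

      Example

      # Display the items that must match on every VM host
      grep -E '<name>|<uuid>|host name|dir path|<path>' /etc/libvirt/storage/rcxnfs.xml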

    c. Confirm that the libvirt storage pool definitions have been created.

      The following is an example of libvirt storage pool definitions displayed using a command:

      Example

      # virsh pool-list --all <RETURN>
          Name                 State      Autostart
          ------------------------------------------------------------------
          default              active     yes
          rcxnfs               active     no
          nfstest              active     yes
    4. Confirm that "yes" is displayed as the information for Autostart.

      If "no" is displayed for Autostart, configure auto-start of the libvirt storage pool.

      The following is an example of the command to do so:

      Example

      virsh pool-autostart rcxnfs

      Note

      If the above configuration is not performed, when libvirtd is restarted, the libvirt storage pool may remain inactive and it may not be available for use as a virtual storage resource.

  2. Create the Configuration Definition on Each VM Host

    a. Create the same configuration definition as the one created in a. of step 1 on each VM host by executing the following virsh command:

      virsh pool-define Full_path_of_the_configuration_definition_of_the_libvirt_storage_pool

      Example

      virsh pool-define /etc/libvirt/storage/rcxnfs.xml

      Store the libvirt storage pool configuration definition files in /etc/libvirt/storage/ on the VM host.

      Note

      • If the libvirt storage pool configuration definition file is placed somewhere other than in /etc/libvirt/storage/ on the VM host, the configuration definition file is not loaded when libvirtd is restarted and definitions for the libvirt storage pool will disappear.

      • Create a libvirt storage pool on each VM host by executing the virsh pool-define command using the configuration definition file created in b. of step 1. When virt-manager is used, the uuid cannot be specified, so it is not possible to match the "uuid" setting in the configuration definition file of the libvirt storage pool between KVM hosts.

    b. In the same way as in c. of step 1, confirm that the libvirt storage pool definitions have been created on each VM host.

    c. In the same way as in d. of step 1, confirm that "yes" is displayed as the information for Autostart.

      If "no" is displayed for Autostart, configure auto-start of the libvirt storage pool.

    4. Confirm that "active" is displayed as the information for State of the libvirt storage pool on each VM host. If "inactive" is displayed, start the libvirt storage pool.

Information

When a VM host is registered as storage management software, the directory area specified as the target path in the libvirt storage pool is recognized as a virtual storage resource.

The libvirt storage pool name is used for the virtual storage resource name.

  • However, when characters other than the following are included in a libvirt storage pool name, they are replaced with hyphens ("-").

    • Numerals (0 to 9)

    • Alphabetical characters: upper case (A to Z), lower case (a to z)

    • Hyphens ("-") and underscores ("_")

  • libvirt storage pools with names containing multibyte characters are not detected.

  • When Resource Orchestrator detects multiple libvirt storage pools with the same name, the libvirt storage pool name followed by "_<serial number starting from 1>" (Example: "_1") is used as the virtual storage resource name.