Product: Volume Manager Guides   
Manual: Volume Manager 4.1 Administrator's Guide   

How VxVM Handles Storage Management

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

  • Physical Objects---Physical Disks or other hardware with block and raw operating system device interfaces that are used to store data.
  • Virtual Objects---When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks. Each volume records and retrieves data from one or more physical disks. Volumes are accessed by file systems, databases, or other applications in the same way that physical disks are accessed. Volumes are also composed of other virtual objects (plexes and subdisks) that are used in changing the volume configuration. Volumes and their virtual components are called virtual objects or VxVM objects.

Physical Objects---Physical Disks

A physical disk is the basic storage device (media) where the data is ultimately stored. You can access the data on a physical disk by using a device name to locate the disk. The physical disk device name varies with the computer system you use. Not all parameters are used on all systems. Typical device names are of the form c#t#d#, where:

  • c# specifies the controller
  • t# specifies the target ID
  • d# specifies the disk

The figure, Physical Disk Example, shows how a physical disk and device name (devname) are illustrated in this document. For example, device name c0t0d0 is the entire hard disk connected to controller number 0 in the system, with a target ID of 0, and physical disk number 0.
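The c#t#d# convention can be illustrated with a short parser. This is not part of VxVM; it is a sketch that assumes the bare c#t#d# form and ignores any platform-specific slice or partition suffix:

```python
import re

def parse_device_name(devname):
    """Split a c#t#d# device name into controller, target, and disk
    numbers.  Illustrative only -- real platforms may append a slice
    or partition suffix that this sketch does not handle."""
    m = re.fullmatch(r"c(\d+)t(\d+)d(\d+)", devname)
    if m is None:
        raise ValueError("not a c#t#d# name: %s" % devname)
    controller, target, disk = (int(x) for x in m.groups())
    return {"controller": controller, "target": target, "disk": disk}

print(parse_device_name("c0t0d0"))
# {'controller': 0, 'target': 0, 'disk': 0}
```

For example, c2t99d0 parses to controller 2, target ID 99, and disk 0.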

Physical Disk Example

VxVM writes identification information on physical disks that it controls (VM disks). VM disks can be identified even after physical disk disconnection or system outages. VxVM can then re-form disk groups and logical objects, which provides failure detection and speeds system recovery.

For HP-UX 11.x, all the disks are treated and accessed by VxVM as entire physical disks using a device name such as c#t#d#.

Disk Arrays

Performing I/O to disks is a relatively slow process because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read or write operations are done to individual disks, one at a time, the read-write time can become unmanageable. Performing these operations on multiple disks can help to reduce this problem.

A disk array is a collection of physical disks that VxVM can represent to the operating system as one or more virtual disks or volumes. The volumes created by VxVM look and act to the operating system like physical disks. Applications that interact with volumes should work in the same way as with physical disks.

How VxVM Presents the Disks in a Disk Array as Volumes to the Operating System illustrates how VxVM represents the disks in a disk array as several volumes to the operating system.

Data can be spread across several disks within an array to distribute or balance I/O operations across the disks. Using parallel I/O across multiple disks in this way improves I/O performance by increasing data transfer speed and overall throughput for the array.
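The address arithmetic behind spreading data across disks can be sketched as follows. This is a simplified model of a plain RAID-0 stripe, not VxVM's implementation; the disk count and stripe-unit size here are arbitrary example values:

```python
def stripe_map(block, ndisks, stripe_unit):
    """Map a logical block number to (disk index, block offset on that
    disk) for a simple RAID-0 stripe.  Consecutive stripe units rotate
    across the disks, which is what allows parallel I/O."""
    stripe_num, within = divmod(block, stripe_unit)
    disk = stripe_num % ndisks
    offset = (stripe_num // ndisks) * stripe_unit + within
    return disk, offset

# Logical blocks 0..5 with 3 disks and a stripe unit of 2 blocks:
for b in range(6):
    print(b, stripe_map(b, ndisks=3, stripe_unit=2))
```

Blocks 0 and 1 land on disk 0, blocks 2 and 3 on disk 1, and blocks 4 and 5 on disk 2, so a large sequential read can be serviced by all three disks at once.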

How VxVM Presents the Disks in a Disk Array as Volumes to the Operating System

Multipathed Disk Arrays

Some disk arrays provide multiple ports to access their disk devices. These ports, coupled with the host bus adapter (HBA) controller and any data bus or I/O processor local to the array, make up multiple hardware paths to access the disk devices. Such disk arrays are called multipathed disk arrays. This type of disk array can be connected to host systems in many different configurations, such as multiple ports connected to different controllers on a single host, chaining of the ports through a single controller on a host, or ports connected to different hosts simultaneously. For more detailed information, see Administering Dynamic Multipathing (DMP).

Device Discovery

Device Discovery is the process of discovering the disks that are attached to a host. This feature is important for DMP because it needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support dynamically for new disk arrays. This operation, which uses a facility called the Device Discovery Layer (DDL), is achieved without the need for a reboot.

This means that you can dynamically add a new disk array to a host, and run a command which scans the operating system's device tree for all the attached disk devices, and reconfigures DMP with the new device database. For more information, see Administering the Device Discovery Layer.

Enclosure-Based Naming

Enclosure-based naming provides an alternative to the disk device naming described in Physical Objects---Physical Disks. This allows disk devices to be named for enclosures rather than for the controllers through which they are accessed. In a Storage Area Network (SAN) that uses Fibre Channel hubs or fabric switches, information about disk location provided by the operating system may not correctly indicate the physical location of the disks. For example, c#t#d# naming assigns controller-based device names to disks in separate enclosures that are connected to the same host controller. Enclosure-based naming allows VxVM to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures.

In a typical SAN environment, host controllers are connected to multiple enclosures in a daisy chain or through a Fibre Channel hub or fabric switch as illustrated in Example Configuration for Disk Enclosures Connected via a Fibre Channel Hub/Switch.

Example Configuration for Disk Enclosures Connected via a Fibre Channel Hub/Switch

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on. The main benefit of this scheme is that it allows you to quickly determine where a disk is physically located in a large SAN configuration.


Note    In many advanced disk arrays, you can use hardware-based storage management to represent several physical disks as one logical disk device to the operating system. In such cases, VxVM also sees a single logical disk device rather than its component disks. For this reason, when reference is made to a disk within an enclosure, this disk may be either a physical or a logical device.

Another important benefit of enclosure-based naming is that it enables VxVM to avoid placing redundant copies of data in the same enclosure. This matters because each enclosure can be considered a separate fault domain. For example, if a mirrored volume were configured only on the disks in enclosure enc1, the failure of the cable between the hub and the enclosure would make the entire volume unavailable.

If required, you can replace the default name that VxVM assigns to an enclosure with one that is more meaningful to your configuration. See Renaming an Enclosure for details.

In High Availability (HA) configurations, redundant-loop access to storage can be implemented by connecting independent controllers on the host to separate hubs with independent paths to the enclosures as shown in Example HA Configuration Using Multiple Hubs/Switches to Provide Redundant-Loop Access. Such a configuration protects against the failure of one of the host controllers (c1 and c2), or of the cable between the host and one of the hubs. In this example, each disk is known by the same name to VxVM for all of the paths over which it can be accessed. For example, the disk device enc0_0 represents a single disk for which two different paths are known to the operating system, such as c1t99d0 and c2t99d0.
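The relationship between OS device paths and a single enclosure-based name can be sketched as a simple lookup. The path table below is hypothetical example data, not output from any VxVM command:

```python
# Hypothetical path data: two OS paths reach the same disk, which is
# disk 0 in enclosure enc0.
os_paths = {
    "c1t99d0": ("enc0", 0),   # (enclosure, disk index within enclosure)
    "c2t99d0": ("enc0", 0),
}

def enclosure_name(path):
    """Return the enclosure-based name for an OS device path.  Every
    path to the same physical disk resolves to one name, e.g. enc0_0."""
    enclosure, index = os_paths[path]
    return "%s_%d" % (enclosure, index)

# Both paths resolve to the same enclosure-based name:
print(enclosure_name("c1t99d0"))  # enc0_0
print(enclosure_name("c2t99d0"))  # enc0_0
```

This is the property the text describes: the disk is known by one name, enc0_0, regardless of which controller path (c1t99d0 or c2t99d0) carries the I/O.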

To take account of fault domains when configuring data redundancy, you can control how mirrored volumes are laid out across enclosures as described in Mirroring across Targets, Controllers or Enclosures.

Example HA Configuration Using Multiple Hubs/Switches to Provide Redundant-Loop Access

See Disk Device Naming in VxVM and Changing the Disk-Naming Scheme for details of the standard and the enclosure-based naming schemes, and how to switch between them.

Virtual Objects

Virtual objects in VxVM include the following:

  • Disk groups
  • VM disks
  • Subdisks
  • Plexes
  • Volumes

The connection between physical objects and VxVM objects is made when you place a physical disk under VxVM control.

After installing VxVM on a host system, you must bring the contents of physical disks under VxVM control by collecting the VM disks into disk groups and allocating the disk group space to create logical volumes.


Note    To bring a physical disk under VxVM control, the disk must not be under LVM control. For more information on how LVM and VM disks co-exist, or how to convert LVM disks to VM disks, see the VERITAS Volume Manager Migration Guide.

VxVM can take control of a physical disk only if the disk is not already under the control of another storage manager, such as LVM.

VxVM creates virtual objects and makes logical connections between the objects. The virtual objects are then used by VxVM to do storage management tasks.


Note    The vxprint command displays detailed information on existing VxVM objects. For additional information on the vxprint command, see Displaying Volume Information and the vxprint(1M) manual page.

Combining Virtual Objects in VxVM

VxVM virtual objects are combined to build volumes. The virtual objects contained in volumes are VM disks, disk groups, subdisks, and plexes. VERITAS Volume Manager objects are organized as follows:

  • VM disks are grouped into disk groups
  • Subdisks (each representing a specific region of a disk) are combined to form plexes
  • Volumes are composed of one or more plexes
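The containment relationships above can be sketched as a toy object model. The class and field names here are illustrative only and do not reflect VxVM's internal data structures:

```python
from dataclasses import dataclass, field

@dataclass
class Subdisk:
    name: str          # e.g. disk02-01
    vm_disk: str       # VM disk the subdisk is carved from
    offset: int        # start block within the VM disk
    length: int        # size in blocks

@dataclass
class Plex:
    name: str
    subdisks: list = field(default_factory=list)

@dataclass
class Volume:
    name: str
    plexes: list = field(default_factory=list)

# Modeled on vol02 from the figure: a mirrored volume with two plexes,
# each built from one subdisk on a different VM disk.
vol02 = Volume("vol02", [
    Plex("vol02-01", [Subdisk("disk02-01", "disk02", 0, 1024)]),
    Plex("vol02-02", [Subdisk("disk03-01", "disk03", 0, 1024)]),
])
print(len(vol02.plexes))  # 2
```

The sizes and offsets are made up for the example; the point is the shape of the hierarchy: volumes contain plexes, plexes contain subdisks, and each subdisk maps to a region of one VM disk.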

The figure, Connection Between Objects in VxVM, shows the connections between VERITAS Volume Manager virtual objects and how they relate to physical disks. The disk group contains three VM disks which are used to create two volumes. Volume vol01 is simple and has a single plex. Volume vol02 is a mirrored volume with two plexes.

Connection Between Objects in VxVM

The various types of virtual objects (disk groups, VM disks, subdisks, plexes and volumes) are described in the following sections. Other types of objects exist in VERITAS Volume Manager, such as data change objects (DCOs), and cache objects, to provide extended functionality. These objects are discussed later in this chapter.

Disk Groups

A disk group is a collection of disks that share a common configuration, and which are managed by VxVM (see VM Disks). A disk group configuration is a set of records with detailed information about related VxVM objects, their attributes, and their connections. A disk group name can be up to 31 characters long.

In releases prior to VxVM 4.0, the default disk group was rootdg (the root disk group). For VxVM to function, the rootdg disk group had to exist and it had to contain at least one disk. This requirement no longer exists, and VxVM can work without any disk groups configured (although you must set up at least one disk group before you can create any volumes or other VxVM objects). For more information about changes to disk group configuration, see Creating and Administering Disk Groups.

You can create additional disk groups when you need them. Disk groups allow you to group disks into logical collections. A disk group and its components can be moved as a unit from one host machine to another. The ability to move whole volumes and disks between disk groups, to split whole volumes and disks between disk groups, and to join disk groups is described in Reorganizing the Contents of Disk Groups.

Volumes are created within a disk group. A given volume and its plexes and subdisks must be configured from disks in the same disk group.

VM Disks

When you place a physical disk under VxVM control, a VM disk is assigned to the physical disk. A VM disk is under VxVM control and is usually in a disk group. Each VM disk corresponds to one physical disk. VxVM allocates storage from a contiguous area of VxVM disk space.

A VM disk typically includes a public region (allocated storage) and a private region where VxVM internal configuration information is stored.

Each VM disk has a unique disk media name (a virtual disk name). You can either define a disk name of up to 31 characters, or allow VxVM to assign a default name that takes the form diskgroup##, where diskgroup is the name of the disk group to which the disk belongs (see Disk Groups).

VM Disk Example shows a VM disk with a media name of disk01 that is assigned to the physical disk devname.

VM Disk Example

Subdisks

A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk.

The default name for a VM disk is diskgroup## and the default name for a subdisk is diskgroup##-##, where diskgroup is the name of the disk group to which the disk belongs (see Disk Groups).
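The default naming convention can be written out as a small helper. This is a sketch that assumes two-digit, 1-based numbering as in the examples in this chapter; the diskgroup name used below is hypothetical:

```python
def default_names(diskgroup, disk_index, subdisk_index):
    """Build the default VM disk and subdisk names in the documented
    diskgroup## and diskgroup##-## forms, e.g. mydg01 and mydg01-01."""
    disk = "%s%02d" % (diskgroup, disk_index)
    subdisk = "%s-%02d" % (disk, subdisk_index)
    return disk, subdisk

print(default_names("mydg", 1, 1))  # ('mydg01', 'mydg01-01')
```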

In the figure, Subdisk Example, disk01-01 is the name of the first subdisk on the VM disk named disk01.

Subdisk Example

A VM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VM disk. Example of Three Subdisks Assigned to One VM Disk shows a VM disk with three subdisks. (The VM disk is assigned to one physical disk.)

Example of Three Subdisks Assigned to One VM Disk

Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks.
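Because subdisks cannot overlap, the free space on a VM disk is simply the set of gaps between subdisks. The following sketch computes those gaps; it is an illustration of the concept, not a VxVM interface, and assumes the non-overlap rule is already enforced:

```python
def free_extents(disk_blocks, subdisks):
    """Given a VM disk's usable size (in blocks) and a list of
    (offset, length) subdisks, return the free extents as
    (offset, length) pairs.  Assumes subdisks do not overlap."""
    free, cursor = [], 0
    for offset, length in sorted(subdisks):
        if offset > cursor:
            free.append((cursor, offset - cursor))
        cursor = offset + length
    if cursor < disk_blocks:
        free.append((cursor, disk_blocks - cursor))
    return free

# A hypothetical 1000-block disk with three subdisks leaves two free
# extents that could be used for new subdisks:
print(free_extents(1000, [(0, 200), (200, 300), (600, 100)]))
# [(500, 100), (700, 300)]
```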

VxVM release 3.0 or higher supports the concept of layered volumes in which subdisks can contain volumes. For more information, see Layered Volumes.

Plexes

VxVM uses subdisks to build virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks. For example, see the plex vol01-01 shown in Example of a Plex with Two Subdisks.

Example of a Plex with Two Subdisks

You can organize data on subdisks to form a plex by using the following methods:

  • concatenation
  • striping (RAID-0)
  • mirroring (RAID-1)
  • striping with parity (RAID-5)

Concatenation, striping (RAID-0), mirroring (RAID-1) and RAID-5 are described in Volume Layouts in VxVM.
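The simplest of these layouts, concatenation, lays the subdisks end to end so that the plex's address space runs through each subdisk in turn. A minimal sketch of that mapping (not VxVM code; the subdisk sizes are example values):

```python
def concat_map(block, subdisk_lengths):
    """Map a plex-relative block number to (subdisk index, offset
    within that subdisk) for a concatenated layout."""
    for i, length in enumerate(subdisk_lengths):
        if block < length:
            return i, block
        block -= length
    raise IndexError("block beyond end of plex")

# A plex concatenating two subdisks of 100 and 50 blocks: block 120
# falls 20 blocks into the second subdisk.
print(concat_map(120, [100, 50]))  # (1, 20)
```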

Volumes

A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device. A volume consists of one or more plexes, each holding a copy of the selected data in the volume. Due to its virtual nature, a volume is not restricted to a particular disk or a specific area of a disk. The configuration of a volume can be changed by using VxVM user interfaces. Configuration changes can be accomplished without causing disruption to applications or file systems that are using the volume. For example, a volume can be mirrored on separate disks or moved to use different disk storage.


Note    VxVM uses the default naming conventions of vol## for volumes and vol##-## for plexes in a volume. For ease of administration, you can choose to select more meaningful names for the volumes that you create.

A volume may be created under the following constraints:

  • Its name can contain up to 31 characters.
  • It can consist of up to 32 plexes, each of which contains one or more subdisks.
  • It must have at least one associated plex that has a complete copy of the data in the volume with at least one associated subdisk.
  • All subdisks within a volume must belong to the same disk group.
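The first three constraints above can be expressed as a simple check. This sketch is illustrative only; VxVM itself enforces these rules (and others, such as the same-disk-group rule) when a volume is created:

```python
def check_volume(name, plexes):
    """Check a candidate volume against the constraints listed above.
    plexes is a list of subdisk counts, one entry per plex.  Returns a
    list of violation messages (empty if the volume is acceptable)."""
    errors = []
    if not (0 < len(name) <= 31):
        errors.append("name must be 1 to 31 characters")
    if not (1 <= len(plexes) <= 32):
        errors.append("volume must have 1 to 32 plexes")
    if any(n < 1 for n in plexes):
        errors.append("every plex needs at least one subdisk")
    return errors

print(check_volume("vol01", [1]))   # [] -- a valid single-plex volume
print(check_volume("x" * 40, []))   # two violations
```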

Note    You can use the VERITAS Intelligent Storage Provisioning (ISP) feature to create and administer application volumes. These volumes are very similar to the traditional VxVM volumes that are described in this chapter. However, there are significant differences between the functionality of the two types of volume that prevent them from being used interchangeably. Refer to the VERITAS Storage Foundation Intelligent Storage Provisioning Administrator's Guide for more information about creating and administering ISP application volumes.

In Example of a Volume with One Plex, volume vol01 has the following characteristics:

  • It contains one plex named vol01-01.
  • The plex contains one subdisk named disk01-01.
  • The subdisk disk01-01 is allocated from VM disk disk01.
Example of a Volume with One Plex

In Example of a Volume with Two Plexes, volume vol06 is mirrored with two data plexes. Each plex of the mirror contains a complete copy of the volume data.

Example of a Volume with Two Plexes

Volume vol06 has the following characteristics:

  • It contains two plexes named vol06-01 and vol06-02.
  • Each plex contains one subdisk.
  • Each subdisk is allocated from a different VM disk (disk01 and disk02).

For more information, see Mirroring (RAID-1).

VERITAS Software Corporation
www.veritas.com