PowerVM QuickStart III-Virtual Disk Setup & Management
1.Virtual Disk Setup
2.Disk Redundancy in VIOS
3.Virtual Optical Media
4.Storage Pools
1.Virtual Disk Setup
• Disks are presented to VIOC by creating a mapping between a physical disk or storage pool volume and the vhost adapter that is associated with the VIOC.
• Best practices configuration suggests that the connecting VIOS vhost adapter and the VIOC vscsi adapter should use the same slot number. This makes the typically complex array of virtual SCSI connections in the system much easier to comprehend.
• The mkvdev command is used to create a mapping between a physical disk and the vhost adapter.
Create a mapping of hdisk3 to the virtual host adapter vhost2.
mkvdev -vdev hdisk3 \
-vadapter vhost2 \
-dev wd_c3_hd3
››› The virtual target device is named wd_c3_hd3 for "WholeDisk_Client3_HDisk3". The intent of this naming convention is to convey the type of disk, where it came from, and which client it goes to.
Delete the virtual target device wd_c3_hd3
rmvdev -vtd wd_c3_hd3
Delete the above mapping by specifying the backing device hdisk3
rmvdev -vdev hdisk3
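The mappings created above can be verified from the VIOS with the lsmap command. A quick sketch (vhost2 follows the earlier example; substitute your own adapter name):

```
$ lsmap -vadapter vhost2    # show the backing devices mapped to one vhost adapter
$ lsmap -all                # show all vhost-to-backing-device mappings on this VIOS
```

The Physloc column in lsmap output (e.g. ...-V1-C3) shows the virtual slot number, which is how the matching-slot-number convention above is confirmed in practice.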
2.Disk Redundancy in VIOS
• LVM mirroring can be used to provide disk redundancy in a VIOS configuration. One disk should be provided through each VIOS in a redundant VIOS config to eliminate a VIOS as a single point of failure.
• LVM mirroring is a client configuration that mirrors data to two different disks presented by two different VIOS.
• Disk path redundancy (assuming an external storage device is providing disk redundancy) can be provided by a dual VIOS configuration with MPIO at the client layer.
• Newer NPIV (N Port ID Virtualization) capable cards can be used to provide direct connectivity of the client to a virtualized FC adapter. Storage specific multipathing drivers such as PowerPath or HDLM can be used in the client LPAR. NPIV adapters are virtualized using VIOS, and should be used in a dual-VIOS configuration.
• MPIO is automatically enabled in AIX if the same disk is presented to a VIOC by two different VIOS.
• LVM mirroring (for client LUNs) is not recommended within VIOS (i.e., mirroring your storage pool in the VIOS). This configuration provides no additional protection over LVM mirroring in the VIOC.
• Storage specific multipathing drivers can be used in the VIOS connections to the storage. In this case these drivers should not also be used on the client. In figure 1, a storage vendor supplied multipathing driver (such as PowerPath) would be used on VIOS 1 and VIOS 2, and native AIX MPIO would be used in the client. This configuration gives access to all four paths to the disk and eliminates any one VIOS or path as a single point of failure.
Figure 1: A standard disk multipathing scenario.
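For a dual-VIOS MPIO configuration like figure 1, two attribute changes are commonly made. A sketch, with assumed device names (hdisk4 on each VIOS, hdisk0 on the client - substitute your own):

```
# On each VIOS: disable SCSI reservations so both VIOS can present the same LUN
chdev -dev hdisk4 -attr reserve_policy=no_reserve

# On the AIX client: enable periodic path health checking so failed paths recover
chdev -l hdisk0 -a hcheck_interval=60 -P

# On the AIX client: verify that both paths are present and Enabled
lspath -l hdisk0
```

Note the syntax difference: the VIOS restricted shell uses -dev/-attr, while the AIX client uses the standard -l/-a flags.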
3.Virtual Optical Media
• Optical media can be assigned to an LPAR directly (by assigning the controller to the LPAR profile), through VIOS (by presenting the physical CD/DVD on a virtual SCSI controller), or through file backed virtual optical devices.
• The problem with assigning an optical device directly to an LPAR is that it can be difficult to manage in a multi-LPAR system: explicit DLPAR operations are required to move it between LPARs.
• Assigning an optical device to a VIOS partition and re-sharing it is much easier as DLPAR operations are not required to move the device from one partition to another. cfgmgr is simply used to recognize the device and rmdev is used to "remove" it from the LPAR. When used in this manner, a physical optical device can only be accessed by one LPAR at a time.
• Virtual media is a file backed CD/DVD image that can be "loaded" into a virtual optical device without disruption to the LPAR configuration. CD/DVD images must be created in a repository before they can be loaded as virtual media.
• A virtual optical device will show up as a "File-backed Optical" device in lsdev output.
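Moving a VIOS-presented physical optical device between clients, as described above, is handled entirely from the client side. A minimal sketch (cd0 is an assumed device name):

```
# On the client currently holding the drive: release it
rmdev -dl cd0

# On the client that needs the drive next: pick it up
cfgmgr
```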
Create a 15 Gig virtual media repository on the clienthd storage pool
mkrep -sp clienthd -size 15G
Extend the virtual repository by an additional 5 Gig to a total of 20 Gig
chrep -size 5G
Find the size of the repository
lsrep
Create an ISO image in repository using .iso file
mkvopt -name fedora10 -file /mnt/Fedora-10-ppc-DVD.iso -ro
Create a virtual media file directly from a DVD in the physical optical drive
mkvopt -name AIX61TL3 -dev cd0 -ro
Create a virtual DVD on vhost4 adapter
mkvdev -fbo -vadapter vhost4 -dev shiva_dvd
››› The LPAR connected to vhost4 is called shiva. shiva_dvd is simply a convenient naming convention.
Load the virtual optical media fedora10 into the virtual DVD shiva_dvd
loadopt -vtd shiva_dvd -disk fedora10
Unload the previously loaded virtual DVD (-release is a "force" option if the client OS has a SCSI reserve on the device.)
unloadopt -vtd shiva_dvd -release
List virtual media in repository with usage information
lsrep
Show the file backing the virtual media currently in shiva_dvd
lsdev -dev shiva_dvd -attr aix_tdev
Remove (delete) a virtual DVD image called AIX61TL3
rmvopt -name AIX61TL3
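Once media is loaded from the VIOS side, the client LPAR sees an ordinary SCSI CD/DVD device. A typical client-side sequence (cd0 and /mnt are assumed names):

```
# On the AIX client: discover the new virtual optical device
cfgmgr

# Mount the loaded image read-only as a CD filesystem
mount -v cdrfs -o ro /dev/cd0 /mnt
```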
4.Storage Pools
• Storage pools work much like AIX VGs (Volume Groups) in that they reside on one or more PVs (Physical Volumes). One key difference is the concept of a default storage pool. The default storage pool is the target of storage pool commands where the storage pool is not explicitly specified.
• The default storage pool is rootvg. If storage pools are used in a configuration then the default storage pool should be changed to something other than rootvg.
List the default storage pool
lssp -default
List all storage pools
lssp
List all disks in the rootvg storage pool
lssp -detail -sp rootvg
Create a storage pool called client_boot on hdisk22
mksp client_boot hdisk22
Make the client_boot storage pool the default storage pool
chsp -default client_boot
Add hdisk23 to the client_boot storage pool
chsp -add -sp client_boot hdisk23
List all the physical disks in the client_boot storage pool
lssp -detail -sp client_boot
List all the physical disks in the default storage pool
lssp -detail
List all the backing devices (LVs) in the default storage pool
lssp -bd
››› Note: This command does NOT show virtual media repositories. Use the lssp command (with no options) to list free space in all storage pools.
Create a client disk on adapter vhost1 from the client_boot storage pool
mkbdsp -sp client_boot 20G \
-bd lv_c1_boot \
-vadapter vhost1
Remove the mapping for the device just created, but save the backing device
rmbdsp -vtd vtscsi0 -savebd
Assign the lv_c1_boot backing device to another vhost adapter
mkbdsp -bd lv_c1_boot -vadapter vhost2
Completely remove the virtual target device lv_c1_boot
rmbdsp -vtd lv_c1_boot
Remove last disk from the sp to delete the sp
chsp -rm -sp client_boot hdisk22
Create a client disk on adapter vhost2 from rootvg storage pool
mkbdsp -sp rootvg 1g \
-bd murugan_hd1 \
-vadapter vhost2 \
-tn murugan_hd1
››› Both the LV name (-bd) and the mapping (virtual target device) name (-tn) are specified in this command, unlike the previous mkbdsp example. The -tn option is not compatible with all versions of the command and may be ignored by earlier versions. (This command was run on VIOS 2.1.) Also note the use of a consistent naming convention for the LV and the mapping - this makes understanding LV usage a bit easier. Finally, rootvg was used in this example only because of the limited disk available on the small example system; putting client disks in rootvg is not an ideal configuration.
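After a backing device is mapped to a vhost adapter, the client LPAR discovers it as a standard virtual SCSI disk. A quick client-side check (device names will vary):

```
# On the AIX client: scan for the new disk, then list disks
cfgmgr
lsdev -Cc disk    # the new disk appears as a "Virtual SCSI Disk Drive"
lspv              # confirm the new hdisk is present (no VG assigned yet)
```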
PowerVM QuickStart Series:
PowerVM QuickStart I-Overview
PowerVM QuickStart II-VIOS Setup & Management
PowerVM QuickStart III-Virtual Disk Setup & Management
PowerVM QuickStart IV-Virtual Network Setup & Management
PowerVM QuickStart V-VIOS Device Management
PowerVM QuickStart VI-Key VIOS Commands
PowerVM QuickStart VII-Advanced VIOS Management