Tuesday, 23 April 2013

Assign path priorities to virtualized disks

Automation method for load balancing SAN traffic

Summary:  This article describes a method for assigning physical path priorities to virtual SCSI paths on VIO servers, based on the even/odd numbers associated with each disk and each path to disk. The accompanying script is useful in a virtualized environment that uses VIO servers with MPIO on the client LPARs. It also gives a system administrator the ability to manually load balance SAN traffic from a client LPAR between dual VIO servers and across all physical adapters on each VIO server.


This article discusses standardized procedures for prioritizing the storage communication on an AIX client LPAR utilizing dual VIO servers. Prioritizing the communication paths allows an administrator to maximize the utilization of all available SAN fabric bandwidth and to distribute the SAN traffic across all available hardware paths.
The methodology described can be utilized on a standalone AIX system or on AIX LPARs that obtain their storage from one or more VIO servers. In a VIO environment, the communication path priorities should be set on all VIO servers as well as the client AIX LPARs.
The methodology of maximizing the utilization of SAN fabric bandwidth is dependent upon the implementation of a naming and numbering scheme that will be fully described in this article. The automated procedure to assign priorities to each SAN fabric path is based upon the implementation of this naming and numbering scheme.
The first task is to perform an AIX installation; this can be from distribution media, mksysb media, or from a NIM server (the preferred method). It is suggested that a standard build be created and a mksysb image saved from it. Then, using the NIM server, use this standard build mksysb to perform all AIX installations. Once installed, use a configuration script to customize the standard build according to the individual system's needs and requirements.
A recommended configuration for a virtualized environment includes redundant VIO servers, which provide redundant access to storage and networking devices. This means that each VIO server will be configured with multiple physical adapters for access to storage and multiple physical devices for access to networking. Typically, each VIO server is configured with two or three Fibre Channel devices providing access to storage.
Configuration of the virtual SCSI adapters requires knowledge of disk layouts as well as networking configuration. The virtual SCSI adapters require server and client side adapters to be configured on the HMC. The server side portion of the SCSI adapter, configured on the VIO server, requires the definition of a frame-wide unique "slot number". For high availability, the server side portion of the SCSI adapter must be configured on both VIO servers in a dual VIO configuration.
For each client LPAR that uses virtual disks or logical volumes, a client side virtual SCSI adapter must be configured on the HMC, one for each VIO server. The client side of the virtual SCSI adapter requires additional information, and its settings must correspond with the server side of the SCSI adapter. Coordinating the slot numbers defined here makes debugging and tracking of problems much easier and is highly desirable.
To define a slot numbering standard for VIO client/server environments, the slot numbers should be divided on an even/odd basis. Even numbered slots shall only be used on even numbered VIO servers; odd numbered slots shall only be used on odd numbered VIO servers. This provides an easy mechanism to determine which slot is served by which VIO server.
To keep track of each slot and to provide a recognizable pattern to slot numbering, a range of slot numbers has been arbitrarily selected for assignment with virtual SCSI adapters. This standard specifies the range of slot numbers to be those between 10 and 499.
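
The slot range and the even/odd rule can be sketched as a small Korn shell helper. The host names are the hypothetical examples used throughout this article; this is an illustration of the standard, not part of the downloadable script:

```shell
# Map a virtual SCSI slot number to the VIO server that should host it.
# Even slots belong on the even numbered VIO server, odd slots on the odd one.
vio_for_slot() {
  slot=$1
  # The standard arbitrarily restricts virtual SCSI slots to the range 10-499
  if [ "$slot" -lt 10 ] || [ "$slot" -gt 499 ]; then
    echo "slot $slot is outside the standard range 10-499" >&2
    return 1
  fi
  if [ $(( slot % 2 )) -eq 0 ]; then
    echo mtxapvio00   # even numbered VIO server (example host name)
  else
    echo mtxapvio01   # odd numbered VIO server (example host name)
  fi
}
```

For example, `vio_for_slot 102` reports the even numbered VIO server, while `vio_for_slot 103` reports the odd numbered one.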
Using the following example VIO server host names (in Table 1 below), virtual SCSI slot numbers can be assigned based on the ultimate purpose of the storage attached to the virtual SCSI adapter (in Table 2 below):

Table 1: Example LPAR names for VIO servers
VIO server name | Description | Managed frame name
mtxapvio00 | First VIO server node on the frame | Server-9119-590-SN12A345B
mtxapvio01 | Second VIO server node on the frame | Server-9119-590-SN12A345B

Table 2: VIO Server Slot Numbering
Virtual SCSI adapter slot number | VIO server numbering (even/odd) | VIO server (example) | Purpose
100 | even | mtxapvio00 | Operating system storage
101 | odd | mtxapvio01 | Operating system storage
102 | even | mtxapvio00 | Database/Application storage
103 | odd | mtxapvio01 | Database/Application storage
104 | even | mtxapvio00 | Database/Application storage
105 | odd | mtxapvio01 | Database/Application storage
106 | even | mtxapvio00 | Database/Application storage
107 | odd | mtxapvio01 | Database/Application storage
110 | even | mtxapvio00 | Operating system storage
111 | odd | mtxapvio01 | Operating system storage
112 | even | mtxapvio00 | Database/Application storage
113 | odd | mtxapvio01 | Database/Application storage
114 | even | mtxapvio00 | Database/Application storage
115 | odd | mtxapvio01 | Database/Application storage
116 | even | mtxapvio00 | Database/Application storage
117 | odd | mtxapvio01 | Database/Application storage
120 | even | mtxapvio00 | Operating system storage
121 | odd | mtxapvio01 | Operating system storage
122 | even | mtxapvio00 | Database/Application storage
123 | odd | mtxapvio01 | Database/Application storage
124 | even | mtxapvio00 | Database/Application storage
125 | odd | mtxapvio01 | Database/Application storage
126 | even | mtxapvio00 | Database/Application storage
127 | odd | mtxapvio01 | Database/Application storage

Under this standard, virtual SCSI adapters that are assigned slot numbers ending in 0 or 1 are used for communication with operating system storage. Virtual SCSI adapters that are assigned slot numbers ending in 2, 3, 4, 5, 6, or 7 are used for communication with application or database storage. Finally, virtual SCSI adapters that are assigned slot numbers ending in 8 or 9 are used for communication with miscellaneous other storage.
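
The last-digit convention can be expressed as a small shell helper. This is a sketch of the standard just described, not part of the downloadable script:

```shell
# Classify a virtual SCSI slot by the last digit of its slot number:
# 0/1 = operating system, 2-7 = application/database, 8/9 = miscellaneous.
slot_purpose() {
  last=${1#${1%?}}          # extract the last digit of the slot number
  case "$last" in
    [01])  echo "operating system storage" ;;
    [2-7]) echo "application/database storage" ;;
    [89])  echo "miscellaneous storage" ;;
  esac
}
```

For example, `slot_purpose 100` classifies slot 100 as operating system storage, while `slot_purpose 123` classifies slot 123 as application/database storage.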
Create the virtual SCSI adapters according to the standardized slot numbering scheme described above: one adapter for each slot numbered between 10 and 499 that ends in 0, 1, 2, 3, 8, or 9. The even numbered slots should be created on the even numbered VIO server and the odd numbered slots on the odd numbered VIO server. The purpose of segmenting the SCSI adapters on an even/odd basis is to give the administrator a mechanism to easily identify and maintain these resources. Using this standard, up to 49 client LPARs can be configured on any single frame; slight modifications to the standard would allow as many client LPARs as desired. Table 3 below shows an example slot numbering sequence for dual VIO servers providing virtual SCSI adapters for three client LPARs:

Table 3: Client LPAR Virtual SCSI Adapter Slot Numbering
Client LPAR name | Client LPAR slots (from VIO) | VIO server mtxapvio00 slot number (even) | VIO server mtxapvio01 slot number (odd) | Purpose of storage
mtxapora00 | 100/101 | 100 | 101 | Operating system storage
mtxapora00 | 102/103 | 102 | 103 | Database storage
mtxapora00 | 104/105 | 104 | 105 | Database storage
mtxapora00 | 106/107 | 106 | 107 | Database storage
mtxapora01 | 110/111 | 110 | 111 | Operating system storage
mtxapora01 | 112/113 | 112 | 113 | Database storage
mtxapora01 | 114/115 | 114 | 115 | Database storage
mtxapora01 | 116/117 | 116 | 117 | Database storage
mtxapora02 | 120/121 | 120 | 121 | Operating system storage
mtxapora02 | 122/123 | 122 | 123 | Application storage
mtxapora02 | 124/125 | 124 | 125 | Database storage
mtxapora02 | 126/127 | 126 | 127 | Database storage

This standardized method of assigning slot numbers to virtual SCSI adapters greatly enhances the administrator's ability to build, modify, and maintain the client LPARs and VIO servers. The administrator can immediately recognize which VIO server is providing active (or inactive) paths to storage from the client LPAR and where problems exist. The administrator is also able to take specific virtual SCSI adapters offline for maintenance or reconfiguration without affecting storage attached to other virtual adapters. Create the virtual SCSI adapters on each VIO server based on the even/odd numbering scheme; however, it is up to the administrator to assign these adapters to each client LPAR according to the standard described here.
Normally, LUNs from a storage array are assigned to the physical adapters associated with the dual VIO servers. An individual LUN would be assigned to both VIO servers through the virtual SCSI adapters and presented to a client LPAR. The Multi-Path I/O (MPIO) drivers on the client LPAR then recognize that a single LUN is presented through multiple virtual paths (one from each VIO Server), and a single "hdisk" is discovered by the "cfgmgr" command.
The LPAR to be configured will have multiple resources provided by the dual VIO servers, including one or more "vscsi" adapters. Additional physical devices can be configured manually in the LPAR configuration; however, this procedure describes only the configuration associated with the virtual devices.
Table 4: Virtual SCSI adapters on client LPAR
Adapter type | Example slot | Example device name | Purpose
Virtual SCSI adapter | 100 | vscsi0 | Operating system disks
Virtual SCSI adapter | 101 | vscsi1 | Operating system disks
Virtual SCSI adapter | 102 | vscsi2 | Data/Application disks
Virtual SCSI adapter | 103 | vscsi3 | Data/Application disks
Virtual SCSI adapter | 104 | vscsi4 | Data/Application disks
Virtual SCSI adapter | 105 | vscsi5 | Data/Application disks
Virtual SCSI adapter | 106 | vscsi6 | Data/Application disks
Virtual SCSI adapter | 107 | vscsi7 | Data/Application disks
Virtual SCSI adapter | 108 | vscsi8 | Miscellaneous disks
Virtual SCSI adapter | 109 | vscsi9 | Miscellaneous disks

The disk configuration described in this article assumes that each virtualized disk has multiple paths to the SAN storage, one path from each VIO server. The default configuration of Multi-Path I/O (MPIO) gives all paths the same priority, which has the effect of directing all SAN traffic across the first VIO server. To distribute the storage communication traffic evenly across both VIO servers, each path in an MPIO configuration must be assigned a path priority. The goal is to distribute the load evenly across the VIO servers; however, it is undesirable for every "hdisk0" to have its highest priority path always point to the even numbered VIO server. Therefore, the following logic should be implemented when assigning path priorities:

  • Even numbered disk + even numbered client LPAR host name = highest priority path is the even numbered VIO server
  • Odd numbered disk + odd numbered client LPAR host name = highest priority path is the even numbered VIO server
  • Even numbered disk + odd numbered client LPAR host name = highest priority path is the odd numbered VIO server
  • Odd numbered disk + even numbered client LPAR host name = highest priority path is the odd numbered VIO server
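
These four rules reduce to a parity comparison: when the disk number and the client LPAR host name number have the same parity, the even numbered VIO server carries the highest priority path; otherwise the odd numbered one does. The following is a minimal sketch of that selection logic (not the downloadable script itself); the commented chpath invocation shows how the chosen path would typically be promoted on AIX:

```shell
# Decide which VIO server should carry the highest priority path for a disk.
# Pass the disk number (e.g. 0 for hdisk0) and the trailing number of the
# client LPAR host name (e.g. 1 for mtxapora01) as plain decimal integers.
priority_vio() {
  disknum=$1
  hostnum=$2
  # Same parity (sum is even) -> even numbered VIO server; otherwise odd
  if [ $(( (disknum + hostnum) % 2 )) -eq 0 ]; then
    echo even
  else
    echo odd
  fi
}
# On AIX, the selected path would then be promoted with something like:
#   chpath -l hdisk0 -p vscsi0 -a priority=1
# (verify the priority attribute name for your disk driver before use)
```

Running `priority_vio 0 0` or `priority_vio 1 1` selects the even numbered VIO server, matching the first two rules above; mixed parities select the odd numbered one.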

This path priority logic is captured in a shell script called "vscsiPriority.ksh", which automatically prioritizes all disks on a VIO server and/or client LPAR. The script can be downloaded at the following URL: http://www.mtxia.com/js/Downloads/Scripts/Korn/Functions/vscsiPriority.txt.
The next task is to configure the virtual storage assigned to the client LPAR (illustrating the VSCSI slot numbering standards and why they are useful). The VSCSI slot numbers associated with each client LPAR are known entities on the VIO server; on the client LPAR, however, the only important part of a slot number is its last digit. Slots that end with a zero (0) or one (1) are used for operating system disks. Slots that end with a two (2) or three (3) are used for application and data disks. A slot ending in a two (2) is served by the VIO server whose host name is even numbered; a slot ending in a three (3) is served by the VIO server whose host name is odd numbered. Therefore, on the client LPAR, all disks whose parent VSCSI adapter has a slot number ending in a two (2) or three (3) can be automatically detected and assembled into a volume group. The disks within a volume group can then be divided into logical volumes and file systems. The following Korn shell statement utilizes the output from the "lscfg" command to obtain a list of disks associated with VSCSI adapters at slot numbers ending in two (2) or three (3):

VGDISKS=$( lscfg -l 'hdisk*' | egrep -- '-C[0-9]*[23]-' | awk '{ print $1 }' | sort -n )
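
As a quick sanity check, the filter can be exercised against simulated lscfg output. The location codes below are fabricated for illustration only:

```shell
# Simulated 'lscfg -l hdisk*' output; only disks whose parent vscsi adapter
# slot ends in 2 or 3 (location codes -C102-, -C103-) should survive the filter.
sample_lscfg() {
  printf '%s\n' \
    '  hdisk0  U9119.590.12A345B-V4-C100-T1-L810000000000  Virtual SCSI Disk Drive' \
    '  hdisk1  U9119.590.12A345B-V4-C102-T1-L810000000000  Virtual SCSI Disk Drive' \
    '  hdisk2  U9119.590.12A345B-V4-C103-T1-L810000000000  Virtual SCSI Disk Drive'
}
# Same filter as in the statement above
VGDISKS=$( sample_lscfg | egrep -- '-C[0-9]*[23]-' | awk '{ print $1 }' | sort -n )
# VGDISKS now contains hdisk1 and hdisk2; hdisk0 (slot 100) is excluded
```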

The following Korn shell code defines several values that will be used during the automated virtual storage configuration.

# Determine the length of the host name
HN=$( hostname )
HNLEN=${#HN}
# Extract the last two characters of the host name (assumed to be a two-digit number)
NODENUM=${HN#${HN%??}}
# Assign a resource group name, used to define the VG, LVs, and file systems (pattern illustrative)
RG="rg${NODENUM}"
# Assign the volume group major number (example value; free numbers can be listed with "lvlstmajor")
VGMJ=100
# Assign a unique identifier to use during the creation of the volume group
VGID="vg"
# Assign a unique identifier to use during the creation of the log logical volume
LGID="log"
# Assign a unique identifier for the application/data logical volume (used later by mklv)
LVID="01"
# Assign a directory mount point for the file system using the resource group name
MTPT="/${RG}"
# Create the volume group using the previously defined values
mkvg -f -y ${RG}${VGID} -V ${VGMJ} ${VGDISKS}
# Create the log logical volume using the previously defined values
/usr/sbin/mklv -y ${RG}${LGID}lv -t jfs2log -a e "${RG}${VGID}" 1
# Determine the number of free physical partitions associated with the volume group
FREEPPS=$( print "a=0; $( lsvg -p ${RG}${VGID} | sed -e '1,2 d' | awk '{ print $4 }' | sed -e 's/^/a=a+/' ); a" | bc )
# Assign the number of physical partitions to use for the application/data logical volume
LVPPS=${FREEPPS}
# Create the application/data logical volume using the previously defined values
/usr/sbin/mklv -y ${RG}${LVID}lv \
-t jfs2 \
-x 5000 \
-a e \
"${RG}${VGID}" \
${LVPPS}
# Create the application/data file system
/usr/sbin/crfs -v jfs2 \
-d "${RG}${LVID}lv" \
-m "${MTPT}" \
-A y \
-p rw \
-a agblksize=4096 \
-a logname="${RG}${LGID}lv"
# Mount the newly created file system
mount /${RG}

The result of the previous commands is a fully configured file system mounted on a directory identified by the resource group name. The point of all the shell script commands shown in this article is to reinforce the business continuity mentality associated with standardized procedures for all aspects of system administration, including the build-out of networking and storage. Standardized procedures such as these lead quickly into process automation and data center automation.
With the automated build-out of networking and storage comes the ability to consider other components for process automation (such as application deployment, database deployment, workload manager configuration, high availability (HACMP) implementation, disaster recovery implementation, automated documentation, audit compliance, and audit response).
Prioritizing the SAN fabric communication paths provides multiple benefits:
  • Evenly distributes SAN traffic load across multiple VIO servers.
  • Evenly distributes SAN traffic load across multiple physical adapters.
  • Requires the establishment and implementation of naming standards, which enhances business continuity efforts.
  • Requires the establishment and implementation of vSCSI slot numbering standards, which enhances business continuity efforts.
  • Reduces hardware requirements by fully utilizing existing infrastructure.
  • Increases return on investment (ROI) by fully utilizing existing infrastructure.
Most importantly, establishing path priorities provides a standardized, repeatable, teachable methodology that can be maintained across multiple platforms and generations of administrators. A consistent approach to managing and distributing SAN traffic can be documented, tested, and tracked. IT management can use this method to encourage optimization of physical resources and obtain the highest return on investment from those resources.

Get products and technologies
  • Download the vscsiPriority.ksh shell script.
