
Wednesday, 7 January 2015

How to mirror VIOS Boot Disk?

Here is the procedure to mirror the VIOS boot disk.
# lspv
NAME             PVID                 VG               STATUS
hdisk0           00c122d4341c6e62     rootvg           active
hdisk1           00cd55a4fg6b676f     None
hdisk2           00c5524409a99b77     None
Here hdisk0 is the rootvg disk; next we need to find a free disk.
You can use the lspv -free command to list the unmapped free disks.
$ lspv -free
NAME            PVID                                SIZE(megabytes)
hdisk1         00cd55a4fg6b676f                     256000
hdisk2         00c5524409a99b77                     256000
In this case, hdisk1 is free and unmapped, so we are going to use hdisk1 to mirror hdisk0.

Add hdisk1 into rootvg:
# extendvg rootvg hdisk1
0516-1254 extendvg: Changing the PVID in the ODM.
Now mirror the disk but defer the automatic reboot:
$ mirrorios -defer hdisk1
Now check the boot list:
$ bootlist -mode normal -ls
hdisk0 blv=hd5 pathid=0
We only have hdisk0 at the moment, so we need to add hdisk1 to the list:
$ bootlist -mode normal hdisk0 hdisk1
Check that worked:
$ bootlist -mode normal -ls
hdisk0 blv=hd5 pathid=0
hdisk1 blv=hd5 pathid=0
You now have a mirrored rootvg. 
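Before relying on the mirror, it is worth confirming that the copies have finished synchronizing. A quick check (a sketch; assumes the padmin lsvg command on a current VIOS level):
$ lsvg rootvg (the STALE PPs count should be 0 once the sync is complete)
$ lsvg -lv rootvg (each logical volume should show two PPs per LP, i.e. two copies)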

Saturday, 11 October 2014

Run VIOS commands from the HMC using "viosvrcmd" without VIOS passwords

Recently we ran into a situation where we did not know the padmin or root password of a VIOS but still needed to run commands on it.

There is an interesting HMC command called "viosvrcmd" which enables us to run commands on the VIOS through the HMC.
viosvrcmd -m managed-system {-p partition-name | --id partition-ID} -c "command" [--help]
Description: viosvrcmd issues an I/O server command line interface (ioscli) command to a virtual I/O server partition.

The ioscli commands are passed from the Hardware Management Console (HMC) to the virtual I/O server partition over an RMC session.

RMC does not allow interactive execution of ioscli commands.
-m    The name of the managed system on which the VIOS partition is running

-p    The VIOS partition name

--id  The partition ID of the VIOS

Note: You must either use this option to specify the ID of the partition, or use the -p option to specify the partition's name. The --id and -p options are mutually exclusive.

-c    The I/O server command line interface (ioscli) command to issue to the Virtual I/O Server partition.

Note: The command must be enclosed in double quotes and cannot contain the semicolon (;), greater than (>), or vertical bar (|) characters.

--help  Display the help text for this command and exit.
Here is an example:
hscroot@umhmc:~> viosvrcmd -m umfrm570 -p umvio1 -c "ioslevel"
2.2.0.0
Since we cannot use ;, > or | inside the command itself, if you need to process the output with filters, place them after the closing double quote (the filter then runs on the HMC):
hscroot@umhmc:~> viosvrcmd -m umfrm570 -p umvio1 -c "lsdev -virtual" | grep vfchost0
vfchost0         Available   Virtual FC Server Adapter

What if you want to run a command as root (via oem_setup_env)? Here is a method found on the internet:
hscroot@umhmc:~> viosvrcmd -m umfrm570 -p umvio1 -c "oem_setup_env
> whoami"
root

You can also run it in one shot like below:

hscroot@umhmc:~> viosvrcmd -m umfrm570 -p umvio1 -c "oem_setup_env\n whoami"
root
If you need to run multiple commands, you can assign them to a variable and pass the variable in place of the command parameter.
hscroot@umhmc:~>command=`printf  "oem_setup_env\nchsec -f /etc/security/lastlog -a unsuccessful_login_count=0 -s padmin"`

hscroot@umhmc:~>viosvrcmd -m umfrm570 -p umvio1 -c "$command"
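To confirm that the change took effect, the same one-shot pattern can be used for a quick query (a sketch; the lsuser attribute shown is just one way to check the counter that chsec reset, and it should now report unsuccessful_login_count=0 for padmin):
hscroot@umhmc:~> viosvrcmd -m umfrm570 -p umvio1 -c "oem_setup_env\n lsuser -a unsuccessful_login_count padmin"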

Sunday, 8 June 2014

How to Remove a Virtual SCSI Disk

This document describes the procedure to remove a virtual disk in a volume group on a Virtual I/O Client, to map the virtual SCSI disk to its corresponding backing device, and to remove the backing device from the Virtual I/O Server.  Please read the entire document before proceeding.

This document applies to AIX version 5.3 and above.

In a Virtual I/O environment, the physical devices are allocated to the VIO server.  When there is a hardware failure (disk or adapter may go bad) on the VIO server, unless the VIO server has some type of redundancy, that will have an impact on the VIO client whose virtual disks are being served by the failing device.  The impact may be loss of connectivity to the virtual scsi disks, unless there is some type of redundancy (MPIO or LVM mirroring) on the client partition. 

This document does NOT apply to any of the following environments:
1. If the virtual disk is in a shared volume group (e.g. HACMP)
2. If the virtual disk is part of rootvg volume group.

 Removing a Physical Volume from a Volume Group

 The following steps are needed to remove a virtual disk from the VIO client, and they are later discussed in more detail:

1. Deallocate all the physical partitions associated with the physical volume in the volume group.
2. Remove the physical volume from the volume group
3. Map the virtual scsi disk on the VIO client partition to the backing device on the VIO server.
4. Remove the virtual scsi disk definition from the device configuration database.
5. Remove the backing device.

At this point, if this procedure was performed because of a hardware failure on the VIO Server partition, a new virtual SCSI disk can be added to the VIO client in place of the one that was removed.

 1. Deallocating the physical partitions

In the following procedure, we will use hdisk4 as the example virtual SCSI disk to be removed from the VIO client.

First, we need to determine the logical volumes defined on the physical volume we want to remove. This can be done by running:

# lspv -l hdisk#            
where hdisk# is the virtual scsi disk to be removed.

Example:

# lspv -l hdisk4
hdisk4:
LV NAME     LPs   PPs   DISTRIBUTION          MOUNT POINT
fslv00      2     2     00..02..00..00..00    /test
loglv00     1     1     00..01..00..00..00    N/A
rawlv       30    30    00..30..00..00..00    N/A

If the hdisk name no longer exists, and the disk is identifiable only by its 16-digit PVID (you might see this from the output of lsvg -p <VGname>), substitute the PVID for the disk name. For example:

# lspv -l 00c2b06ef8a9f98a

You may receive the following error:
     0516-320 : Physical volume 00c2b06ef8a9f98a is not assigned to
     a volume group.
If so, run the following command:
# putlvodm -p `getlvodm -v <VGname>` <PVID>
VGname refers to your volume group, PVID refers to the 16-digit physical volume identifier, and the characters around the getlvodm command are backquotes (grave accents). The lspv -l <PVID> command should now run successfully.  To determine the VG name associated with that physical volume, use lspv hdisk#.
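For example, with the PVID shown above and a hypothetical volume group name of testvg, the command would look like this:
# putlvodm -p `getlvodm -v testvg` 00c2b06ef8a9f98a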
If another disk in the volume group has enough space to contain the partitions on this disk, and the virtual SCSI disk to be replaced has not completely failed, the migratepv command may be used to move the used PPs off this disk; see the migratepv man page for the exact steps.
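A minimal migratepv sketch (hdisk5 is a hypothetical target disk in the same volume group with enough free PPs):
# lsvg -p <VGname> (confirm the target disk has enough free PPs)
# migratepv hdisk4 hdisk5 (move all used PPs from hdisk4 to hdisk5)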
If the partitions cannot be migrated, they must be removed. The output of the lspv -l <hdisk#>, or lspv -l <PVID>, command indicates what logical volumes will be affected. Run the following command on each LV:
# lslv <LVname>
The COPIES field shows if the LV is mirrored. If so, remove the failed copy with:

# rmlvcopy <LVname> 1 <hdisk#>
hdisk# refers to the disk(s) in the copy that contain the failing disk. A list of drives can be specified, separated by spaces. Use the lslv -m <LVname> command to see what other disks may need to be listed in the rmlvcopy command. If the disk PVID was previously used with the lspv command, specify that PVID in the list of disks given to rmlvcopy.  The unmirrorvg command may be used in lieu of rmlvcopy; see the man pages for rmlvcopy and unmirrorvg for additional information.
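As an alternative, a short unmirrorvg sketch (testvg and testlv are hypothetical names; hdisk4 is the failing disk from our example):
# lslv -m testlv (shows which disk holds each copy of the logical volume)
# unmirrorvg testvg hdisk4 (removes the copies residing on hdisk4 for all LVs in testvg)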
If the logical volume is not mirrored, the entire logical volume must be removed, even if just one physical partition resides on the drive to be replaced and cannot be migrated to another disk. If the unmirrored logical volume is a JFS or JFS2 file system, unmount the file system and remove it. Enter:
# umount /<FSname>
# rmfs /<FSname>

If the unmirrored logical volume is a paging space, see if it is active. Enter:
# lsps -a

If it is active, set it to be inactive on the next reboot.  Enter:
# chps -a n <LVname>

Then deactivate and remove it by entering:
# swapoff /dev/<LVname>
# rmps <LVname>

Remove any other unmirrored logical volume with the following command:
# rmlv <LVname> 

2. Remove the physical volume from the volume group.

If the virtual SCSI disk to be replaced is the only physical volume in the volume group, remove the volume group itself:

# exportvg <VGname>

This will deallocate the physical partitions and will free up the virtual disk.  Then, remove the disk definition, as noted on step 3.

If the volume group contains more than one physical volume, use either the PVID or the hdisk name, depending on which was used when running lspv -l above, and run one of the following:

# reducevg <VGname> <hdisk#>
# reducevg <VGname> <PVID>

If you used the PVID value and if the reducevg command complains that the PVID is not in the device configuration database, run the following command to see if the disk was indeed successfully removed:

# lsvg -p <VGname>

If the PVID or disk is not listed at this point, then ignore the errors from the reducevg command.

3. How to map the virtual scsi disk (on the client partition) to the physical disk (on the server partition)

In the following example, we are going to determine the mapping of the virtual SCSI disk hdisk4.

On the VIO client:

The following command shows the location of hdisk4:

# lscfg -vl hdisk4
  hdisk4           U9117.570.102B06E-V1-C7-T1-L810000000000  Virtual SCSI Disk Drive

where V1 is the LPAR ID (in this case 1), C7 is the slot# (in this case 7), and L81 is the LUN ID. 
Take note of these values.

Next, determine the client SCSI adapter name by grepping for the location code of hdisk4's parent adapter, in this case V1-C7-T1:

# lscfg -v|grep V1-C7-T1
  vscsi4           U9117.570.102B06E-V1-C7-T1                Virtual SCSI Client Adapter
        Device Specific.(YL)........U9117.570.102B06E-V1-C7-T1
  hdisk4           U9117.570.102B06E-V1-C7-T1-L810000000000  Virtual SCSI Disk Drive

where vscsi4 is the client SCSI adapter.

On the HMC:

Run the following command to obtain the LPAR name associated with the LPAR ID

# lshwres -r virtualio --rsubtype scsi -m <Managed System Name> --level lpar

To get the managed system name, run
# lssyscfg -r sys -F name

Then, look for the "lpar_id" and "slot_num" noted earlier.  In our case, the VIO client lpar id is 1 and the slot # is 7.

In the following example, the managed system name is Ops-Kern-570.  The VIO client partition name is kern1.
The VIO Server partition name is reg33_test_vios.

# lshwres -r virtualio --rsubtype scsi -m Ops-Kern-570 --level lpar
...
lpar_name=kern1,lpar_id=1,slot_num=7,state=1,is_required=0,adapter_type=client,
remote_lpar_id=11,remote_lpar_name=reg33_test_vios,remote_slot_num=23,backing_devices=none
...
Take note of the remote_lpar_id (11) and the remote_slot_num (23).  Then, in the same output, look for the line that corresponds to lpar_id 11, slot_num 23:
...
lpar_name=reg33_test_vios,lpar_id=11,slot_num=23,state=1,is_required=0,adapter_type=server,
remote_lpar_id=any,remote_lpar_name=,remote_slot_num=any,backing_devices=none
...
So in this case, VIO server reg33_test_vios is serving virtual scsi disk, hdisk4, on the VIO client, kern1.
            
On the VIO Server:

Go to the VIO Server associated with the LPAR ID obtained in the previous step, in our case reg33_test_vios.
As padmin, run the following command to display the mapping, which should match the mapping obtained from the HMC above.

$ lsmap -all|grep <VIO server lpar ID>-<VIOS slot#>

For example,
$ lsmap -all|grep V11-C23
where V11 is the VIO server lpar_id and C23 is the slot #

The command will return something similar to:

vhost21         U9117.570.102B06E-V11-C23                    0x00000001

In this case, vhost21 is the server SCSI adapter mapped to our VIO client lpar id 1 (0x00000001).

Next, list the mapping for the vhost# obtained previously.

$ lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost21         U9117.570.102B06E-V11-C23                    0x00000001

VTD                   virdisk01
LUN                   0x8100000000000000
Backing device        clientlv01
Physloc

Take note of the VTD and Backing device name.  In this case, the backing device mapped to virtual scsi disk, hdisk4, is logical volume, clientlv01, and it is associated with Virtual Target Device, virdisk01.

4. Remove the virtual scsi disk definition from the device configuration database on the VIO client

 To remove the vscsi definition, run

# rmdev -dl hdisk#

Ensure you know the backing device associated with the virtual scsi disk being removed prior to issuing the rmdev command.  That information will be needed in order to do clean up on the server partition.  Refer to the section "How to map the virtual scsi disk (on the client partition) to the physical disk (on the server partitions)".
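For the hdisk4 example used throughout this document, the removal on the VIO client would be:
# rmdev -dl hdisk4
hdisk4 deleted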

 5. Remove the backing device on the VIO server

 The peripheral device types or backing devices currently supported are
·                logical volume
·                physical volume
·               optical device starting at v1.2.0.0-FP7 (but not currently supported on System i)

Prior to removing the backing device, the virtual target device must be removed first. To do so, run the following as padmin:

$ rmdev -dev <VTD name>
$ rmlv <LVname>

or you can remove both the VTD and logical volume in one command by running:

$ rmvdev -vtd <VTD name> -rmlv

If the backing device is a physical volume, removing the virtual target device completes the procedure.

If you need to determine the physical device and volume group that the logical volume belongs to, you can issue the following commands prior to running rmlv or rmvdev.
$ lslv -pv <LVname>    List the physical volume that the logical volume specified resides on.
$ lslv <LVname>          Shows the characteristics of the logical volume, including the volume group name, # of mirrored copies, etc.

In our example, the backing device is a logical volume, clientlv01, and it resides on the physical device, hdisk3:

$ lslv -pv clientlv01
clientlv01:N/A
PV                COPIES        IN BAND       DISTRIBUTION 
hdisk3            080:000:000   100%          000:080:000:000:000

$ rmdev -dev virdisk01
virdisk01 deleted

$ rmlv clientlv01
Warning, all data contained on logical volume clientlv01 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume clientlv01 is removed.
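Equivalently, the VTD and its backing logical volume from this example could have been removed in a single step (same names as above):
$ rmvdev -vtd virdisk01 -rmlv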

Related Documentation

Virtual I/O Server Website
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

Relevant Links in Documentation Tab:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
·                     IBM System p Advanced POWER Virtualization Best Practices Redbook
·                     IBM System Hardware Information Center
·                     VIOS Commands Reference

Sunday, 23 February 2014

How to collect test case for VIOS Issues

How to Collect Testcase for Problem Determination on a PowerVM Virtual I/O Server Environment

This article  describes how to gather a testcase on a PowerVM Virtual I/O Server (VIOS) environment to diagnose issues related to

  • Virtual SCSI
  • Virtual Ethernet
  • Virtual Fibre Channel (NPIV) and/or
  • Active Memory Sharing (AMS) also known as virtual memory
Note: This applies to VIOS level 2.2 and above.
There are some known issues you can check here: Link

Testcase collection involves the steps below.


NOTE: If the testcase being collected is to diagnose issues specific to virtual SCSI, it is very important that you FIRST collect the snap from the VIO server(s), and then the AIX client(s) in question.

1. How to gather snap from VIO server(s)

Login to VIO server, as padmin, and collect snap by running
$ snap

Upon completion, this will create /home/padmin/snap.pax.Z. If this file exists prior to running the snap command, it will be deleted and recreated.

Repeat this for any additional VIO server.

Next, rename the compressed file using the following naming convention if you are submitting snaps for more than one vio server:
$ mv snap.pax.Z vios#.snap.pax.Z

2. How to gather snap from AIX client(s)

Login to AIX server, as root and run

# snap -r (this will remove any previous snap)
# snap vfc_client_collect (for virtual fiber channel adapter mapping)
# snap -ac (this will create /tmp/ibmsupt/snap.pax.Z)

Rename the compressed file using a naming convention similar to the VIO server snap (i.e. vioc#.snap.pax.Z).
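For example, for the first AIX client (vioc1.snap.pax.Z is just the suggested naming convention):
# mv /tmp/ibmsupt/snap.pax.Z /tmp/ibmsupt/vioc1.snap.pax.Z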

Repeat this for any additional AIX client. 

3.1 How to gather virtual mapping (lshwres) data from HMC

The following procedure requires access to an AIX host with secure shell (ssh) configured to access the HMC. Note: the HMC must be configured to accept ssh connections.
Login to AIX server, as root, and issue

# script -a /tmp/lshwres.out
# ssh -l hscroot <HMC hostname>

Once logged in to the HMC issue
# lshmc -V
# lssyscfg -r sys -F name (to list all managed system names)

Run the appropriate lshwres command that relates to your problem

    To gather virtual SCSI mapping, run
    # lshwres -r virtualio -m <managed system name> --rsubtype scsi --level lpar

    To gather virtual fiber channel (NPIV) mapping, run
    # lshwres -r virtualio -m <managed system name> --rsubtype fc --level lpar

    To gather virtual memory sharing (AMS) mapping, run
    # lshwres -r mempool -m <managed system name>
    # lshwres -r mempool -m <managed system name> --rsubtype pgdev

    To gather virtual ethernet mapping, run
    # lshwres -r virtualio -m <managed system name> --rsubtype eth --level lpar

# exit (from the HMC)
# exit (to end the script) 

3.2 How to gather virtual mapping (lshwres) data from IVM

Login to IVM (VIOS) server, as padmin, and run

$ lssyscfg -F name -r sys (to list all managed system names)
$ oem_setup_env
# script -a /tmp/lshwres.out
# su - padmin

Run the appropriate lshwres command that relates to your problem

    To gather virtual SCSI mapping, run
    $ lshwres -r virtualio -m <managed system name> --rsubtype scsi --level lpar

    To gather virtual fiber channel (NPIV) mapping, run
    $ lshwres -r virtualio -m <managed system name> --rsubtype fc --level lpar

    To gather virtual memory sharing (AMS) mapping, run
    $ lshwres -r mempool -m <managed system name>
    $ lshwres -r mempool -m <managed system name> --rsubtype pgdev

    To gather virtual ethernet mapping, run
    $ lshwres -r virtualio -m <managed system name> --rsubtype eth --level lpar

$ exit (from padmin shell)
# exit (to end the script) 

4. How to package the testcase

Once the files from sections 1-3 have been created, ftp them to a host (e.g. an AIX server) to compress them into a single file as follows:

In AIX host, as root:

# mkdir /tmp/viotc (ftp the files to this directory)
# cd /tmp/viotc
# ls -la (ensure all files are listed: vio server snap(s), client snap, and lshwres.out)

To create a single file, you can use either the pax or tar command:

# pax -wf pmr#.branch#.countrycode.pax ./* OR
# tar -cvf pmr#.branch#.countrycode.tar ./*

For example, if your pmr is 12345.999.000 (where 12345 is the pmr#, 999 is the branch#, and 000 is the US country code), you would do something similar to the following

    # pwd
    /tmp/viotc

    # ls -la
    total 44848
    drwxr-xr-x 2 root system 256 Sep 04 08:27 .
    drwxrwxrwt 12 bin bin 4096 Sep 04 08:27 ..
    -r--r--r-- 1 root system 880 Sep 04 08:07 lshwres.out
    -rw------- 1 root system 7648755 Sep 04 08:10 vioc1.snap.pax.Z
    -rw------- 1 root system 7648755 Sep 04 08:09 vios1.snap.pax.Z
    -rw------- 1 root system 7648755 Sep 04 08:09 vios2.snap.pax.Z

    # pax -wf 12345.999.000.pax -x pax ./*
    # ls -la
    ...
    -rw-r--r-- 1 root system 22958080 Sep 04 08:36 12345.999.000.pax
    ...
    This is the file you need to send in.

    OR

    # tar -cvf 12345.999.000.tar ./*
    a ./lshwres.out 2 blocks.
    a ./vioc1.snap.pax.Z 14939 blocks.
    a ./vios1.snap.pax.Z 14939 blocks.
    a ./vios2.snap.pax.Z 14939 blocks.
    # ls -la
    ...
    -rw-r--r-- 1 root system 22958080 Sep 04 08:29 12345.999.000.tar

5. Where to submit the testcase

ftp testcase.software.ibm.com
login: anonymous
password:
ftp> cd /toibm/aix
ftp> prompt
ftp> binary
ftp> put <pmr#.branch#.countrycode>.pax
ftp> quit

To Upload the Testcase via Secure File Transfer

-> Go to https://testcase.software.ibm.com/
-> click on toibm, then aix
-> browse for the file and click on 'Upload File (Binary)'

Known Issues:

1. snap svCollect hangs gathering vasi data
IZ90645 - VIOS 2.2 (AIX 6100-06)
IZ91752 - VIOS 2.2.1.x (AIX 6100-07) 

Friday, 18 October 2013

Backup and Restore the Virtual I/O Server


This document describes different methods to backup and restore the Virtual I/O Server.


Backing up the Virtual I/O Server

There are 4 different ways to backup/restore the Virtual I/O Server as illustrated in the following table.

Backup method              Restore method
To tape                    From bootable tape
To DVD                     From bootable DVD
To remote file system      From HMC using the NIMoL facility and installios
To remote file system      From an AIX NIM server

Backing up to a tape or DVD-RAM

To backup the Virtual I/O Server to a tape or a DVD-RAM, the following steps must be performed

  1. check the status and the name of the tape/DVD drive
lsdev | grep rmt (for tape)
lsdev | grep cd (for DVD)

  2. if it is Available, backup the Virtual I/O Server with one of the following commands
backupios -tape rmt#
backupios -cd cd#

If the Virtual I/O Server backup image does not fit on one DVD, then the backupios command provides instructions for disk replacement and removal until all the volumes have been created. This command creates one or more bootable DVDs or tapes that you can use to restore the Virtual I/O Server.

Backing up the Virtual I/O Server to a remote file system by creating a nim_resources.tar file

The nim_resources.tar file contains all the necessary resources to restore the Virtual I/O Server, including the mksysb image, the bosinst.data file, the network boot image, and the SPOT resource.

The NFS export should allow root access to the Virtual I/O Server, otherwise the backup will fail with permission errors.

To backup the Virtual I/O Server to a filesystem, the following steps must be performed

  1. Create a mount directory where the backup file will be written
mkdir /backup_dir
  2. Mount the exported remote directory on the directory created in step 1.
mount server:/exported_dir /backup_dir
  3. Backup the Virtual I/O Server with the following command
backupios -file /backup_dir

The above command creates a nim_resources.tar file that you can use to restore the Virtual I/O Server from the HMC.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.

The backupios command empties the target_disk_data section of bosinst.data and sets RECOVER_DEVICES=Default. This allows the mksysb file generated by the command to be cloned to another logical partition. If you plan to use the nim_resources.tar image to install to a specific disk, then you need to repopulate the target_disk_data section of bosinst.data and replace this file in the nim_resources.tar. All other parts of the nim_resources.tar image must remain unchanged.

Procedure to modify the target_disk_data in the bosinst.data:

  1. Extract the bosinst.data from the nim_resources.tar
tar -xvf nim_resources.tar ./bosinst.data
  2. The following is an example of the target_disk_data stanza of the bosinst.data generated by backupios.
target_disk_data:
LOCATION =
SIZE_MB =
HDISKNAME =
  3. Fill in the value of HDISKNAME with the name of the disk to which you want to restore
  4. Put back the modified bosinst.data in the nim_resources.tar image
tar -uvf nim_resources.tar ./bosinst.data

If you don't remember on which disk your Virtual I/O Server was previously installed, you can also view the original bosinst.data and look at the target_disk_data stanza.

Use the following steps

  1. extract the bosinst.data from the nim_resources.tar
tar -xvf nim_resources.tar ./bosinst.data
  2. extract the mksysb from the nim_resources.tar
tar -xvf nim_resources.tar ./5300-00_mksysb
  3. extract the original bosinst.data
restore -xvf ./5300-00_mksysb ./var/adm/ras/bosinst.data
  4. view the original target_disk_data
grep -p target_disk_data ./var/adm/ras/bosinst.data
           The above command displays something like the following:

target_disk_data:                                    
PVID = 00c5951e63449cd9                          
PHYSICAL_LOCATION = U7879.001.DQDXYTF-P1-T14-L4-L0
CONNECTION = scsi1//5,0                          
LOCATION = 0A-08-00-5,0                          
SIZE_MB = 140000                                 
HDISKNAME = hdisk0  
  5. replace ONLY the target_disk_data stanza in the ./bosinst.data with the original one
  6. add the modified file to the nim_resources.tar
tar -uvf nim_resources.tar ./bosinst.data

Backing up the Virtual I/O Server to a remote file system by creating a mksysb image

You could also restore the Virtual I/O Server from a NIM server. One of the ways to restore from a NIM server is from the mksysb image of the Virtual I/O Server. If you plan to restore the Virtual I/O Server from a NIM server from a mksysb image, verify that the NIM server is at the latest release of AIX.

To backup the Virtual I/O Server to a filesystem the following steps must be performed

  1. Create a mount directory where the backup file will be written
mkdir /backup_dir
  2. Mount the exported remote directory on the directory just created
mount NIM_server:/exported_dir /backup_dir
  3. Backup the Virtual I/O Server with the following command
backupios -file /backup_dir/filename.mksysb -mksysb

Restoring the Virtual I/O Server

As there are 4 different ways to back up the Virtual I/O Server, there are 4 corresponding ways to restore it.

Restoring from a tape or DVD

To restore the Virtual I/O Server from tape or DVD, follow these steps:

  1. specify that the Virtual I/O Server partition should boot from the tape or DVD by using the bootlist command (see the sketch after this list) or by altering the boot list in the SMS menu.
  2. insert the tape/DVD into the drive.
  3. from the SMS menu, select to install from the tape/DVD drive.
  4. follow the installation steps according to the system prompts.
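For step 1, a minimal bootlist sketch using the padmin bootlist command (rmt0 and cd0 are assumed device names; adjust them to your drives):
$ bootlist -mode normal rmt0 (boot from the tape drive on the next restart)
$ bootlist -mode normal cd0 (or boot from the DVD drive instead)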

Restoring the Virtual I/O Server from a remote file system using a nim_resources.tar file

To restore the Virtual I/O Server from a nim_resources.tar image in a file system, perform the following steps:

  1. run the installios command without any flag from the HMC command line.
a)      Select the Managed System where you want to restore your Virtual I/O Server from the objects of type "managed system" found by installios command.
b)      Select the VIOS Partition where you want to restore your system from the objects of type "virtual I/O server partition" found
c)      Select the Profile from the objects of type "profile" found.
d)     Enter the source of the installation images [/dev/cdrom]: server:/exported_dir
e)      Enter the client's intended IP address: <IP address of the VIOS>
f)       Enter the client's intended subnet mask: <subnet of the VIOS>
g)      Enter the client's gateway: <default gateway of the VIOS>
h)      Enter the client's speed [100]: <network speed>
i)        Enter the client's duplex [full]: <network duplex>
j)        Would you like to configure the client's network after the installation [yes]/no?
k)      Select the Ethernet Adapter used for the installation from the objects of type "ethernet adapters" found.

  2. when the restoration is finished, open a virtual terminal connection (for example, using telnet) to the Virtual I/O Server that you restored. Some additional user input might be required.

Note: The ability to run the installios command from the NIM server against the nim_resources.tar file is enabled with APAR IY85192.

Restoring the Virtual I/O Server from a remote file system using a mksysb image

To restore the Virtual I/O Server from a mksysb image in a file system using NIM, complete the following tasks:

  1. define the mksysb file as a NIM object, by running the nim command.
nim -o define -t mksysb -a server=master -a location=/export/ios_backup/filename.mksysb objectname
objectname is the name by which NIM registers and recognizes the mksysb file.

  2. define a SPOT resource for the mksysb file by running the nim command.
nim -o define -t spot -a server=master -a location=/export/ios_backup/SPOT -a source=objectname SPOTname
SPOTname is the name of the SPOT resource for the mksysb file.

  3. install the Virtual I/O Server from the mksysb file using the smit command.
smit nim_bosinst
The following entry fields must be filled:
“Installation type” => mksysb
“Mksysb” => the objectname chosen in step 1
“Spot” => the SPOTname chosen in step 2

  4. start the Virtual I/O Server logical partition.
a)      On the HMC, right-click the partition to open the menu.
b)      Click Activate. The Activate Partition menu opens with a selection of partition profiles. Be sure the correct profile is highlighted.
c)      Select the Open a terminal window or console session check box to open a virtual terminal (vterm) window.
d)     Click (Advanced...) to open the advanced options menu.
e)      For the Boot mode, select SMS.
f)       Click OK to close the advanced options menu.
g)      Click OK. A vterm window opens for the partition.
h)      In the vterm window, select Setup Remote IPL (Initial Program Load).
i)        Select the network adapter that will be used for the installation.
j)        Select IP Parameters.
k)      Enter the client IP address, server IP address, and gateway IP address. Optionally, you can enter the subnet mask. After you have entered these values, press Esc to return to the Network Parameters menu.
l)        Select Ping Test to ensure that the network parameters are properly configured. Press Esc twice to return to the Main Menu.
m)    From the Main Menu, select Boot Options.
n)      Select Install/Boot Device.
o)      Select Network.
p)      Select the network adapter whose remote IPL settings you previously configured.
q)      When prompted for Normal or Service mode, select Normal.
r)       When asked if you want to exit, select Yes.

Integrated Virtualization Manager (IVM) Consideration


If your Virtual I/O Server is managed by the IVM, then before backing up your system you also need to back up the partition profile data for the management partition and its clients: IVM is integrated with the Virtual I/O Server, but the LPAR profiles are not saved by the backupios command.

There are two ways to perform this backup:

From the IVM Web Interface

1)      From the Service Management menu, click Backup/Restore
2)      Select the Partition Configuration Backup/Restore tab
3)      Click Generate a backup

From the Virtual I/O Server CLI

1)      Run the following command
bkprofdata -o backup

Both these ways generate a file named profile.bak with the information about the LPARs configuration. While using the Web Interface, the default path for the file is /home/padmin. But if you perform the backup from CLI, the default path will be /var/adm/lpm. This path can be changed using the -l flag. Only ONE file can be present on the system, so each time the bkprofdata is issued or the Generate a Backup button is pressed, the file is overwritten.

To restore the LPARs profile you can use either the GUI or the CLI

From the IVM Web Interface

1)      From the Service Management menu, click Backup/Restore
2)      Select the Partition Configuration Backup/Restore tab
3)      Click Restore Partition Configuration

From the Virtual I/O Server CLI

1)      Run the following command
rstprofdata -l 1 -f /home/padmin/profile.bak

It is not possible to restore a single partition profile. In order to restore the LPAR profiles, none of the LPAR profiles included in profile.bak may already be defined in the IVM.

Troubleshooting

Error during information gathering

In the case where, after you specify the managed system and the profile, the HMC is not able to find a network adapter:
  1. Check that the profile has a physical network adapter assigned
  2. Check whether there is a hardware conflict with another running partition
  3. Check that the status of the LPAR is correct (it must be Not Activated)

Error during NIMOL initialization

  1. nimol_config ERROR: error from command /bin/mount <remoteNFS> /mnt/nimol
mount: <remoteNFS> failed, reason given by server: Permission denied
The remote file system is probably not exported correctly.
  2. nimol_config ERROR: Cannot find the resource SPOT in /mnt/nimol.
You have probably specified an NFS export that does not contain a valid nim_resources.tar, or the nim_resources.tar is valid but does not have read permission for "others".

Error during lpar_netboot

In the case where the LPAR fails to power on:
  1. Check whether there is a hardware conflict with another running partition
  2. Check that the status of the LPAR is correct (it must be Not Activated)
In the case of a bootp failure, if the NIMOL initialization was successful:
  1. Check that there is a valid route between the HMC and the LPAR
  2. Check that you entered valid information during the initial phase

Error during BOS install phase

There is probably a problem with the disk used for the installation.

  1. Open a vterm and check whether the system is asking you to select a different disk
  2. Power off the LPAR, modify the profile to use another storage unit, and restart the installation


Saturday, 6 July 2013

PowerVM QuickStart VII-Advanced VIOS Management


Performance Monitoring:

Retrieve statistics for ent0
entstat -all ent0
Reset the statistics for ent0
entstat -reset ent0
View disk statistics (every 2 seconds)
viostat 2
Show summary for the system in stats
viostat -sys 2
Show disk stats by adapter (useful to see per-partition (VIOC) disk stats)
viostat -adapter 2
Turn on disk performance counters
chdev -dev sys0 -attr iostat=true
• The topas command is available in VIOS but uses different command line (start) options. When running, topas uses the standard AIX single key commands and may refer to AIX command line options.
View CEC/cross-partition information
topas -cecdisp

Backups:

Create a mksysb file of the system on a NFS mount
backupios -file /mnt/vios.mksysb -mksysb
Create a backup of all structures of (online) VGs and/or storage pools
savevgstruct vdiskvg (Data will be saved to /home/ios/vgbackups)
List all (known) backups made with savevgstruct
restorevgstruct -ls
Backup the system (mksysb) to a NFS mounted filesystem
backupios -file /mnt
• sysplan files can be created from the HMC per-system menu in the GUI or from the command line using mksysplan (see the sketch after these bullets).
• Partition data stored on the HMC can be backed up using (GUI method): per-system pop-up menu -> Configuration -> Manage Partition Data -> Backup
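For the mksysplan route, a minimal sketch run from the HMC command line (the placeholders are assumptions; the HMC expects the file name to end in .sysplan):
mksysplan -m <managed system name> -f <filename>.sysplan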
 

VIOS Security:

List all open ports on the firewall configuration
viosecure -firewall view
To view the current security level settings
viosecure -view -nonint
Change system security settings to default
viosecure -level default
To enable basic firewall settings
viosecure -firewall on
List all failed logins on the system
lsfailedlogin
Dump the Global Command Log (all commands run on system)
lsgcl



PowerVM QuickStart Series:

PowerVM QuickStart I-Overview
PowerVM QuickStart II-VIOS Setup & Management
PowerVM QuickStart III-Virtual Disk Setup & Management
PowerVM QuickStart IV-Virtual Network Setup & Management
PowerVM QuickStart V-VIOS Device Management
PowerVM QuickStart VI-Key VIOS Commands
PowerVM QuickStart VII-Advanced VIOS Management

PowerVM QuickStart VI-Key VIOS Commands


VIOS commands are documented by categories on this InfoCenter page.

The lsmap command:

•Used to list mappings between virtual adapters and physical resources.
List all (virtual) disks attached to the vhost0 adapter
lsmap -vadapter vhost0
List only the virtual target devices attached to the vhost0 adapter
lsmap -vadapter vhost0 -field vtd
This line can be used as a list in a for loop (see the sketch after this list)
lsmap -vadapter vhost0 -field vtd -fmt :|sed -e "s/:/ /g"
List all shared ethernet adapters on the system
lsmap -all -net -field sea
List all (virtual) disks and their backing devices
lsmap -all -type disk -field vtd backing
List all SEAs and their backing devices
lsmap -all -net -field sea backing
Additional lsmap information
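A minimal sketch of the for-loop idea mentioned above, run as padmin; it only echoes each VTD name, so replace the echo with whatever per-VTD action is needed:
for vtd in `lsmap -vadapter vhost0 -field vtd -fmt :|sed -e "s/:/ /g"`
do
echo $vtd
done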


The mkvdev command:

• Used to create a mapping between a virtual adapter and a physical resource. The result of this command will be a "virtual device".

Create a SEA that links physical ent0 to virtual ent1
mkvdev -sea ent0 -vadapter ent1 -default ent1 -defaultid 1
›››   The -defaultid 1 in the previous command refers to the default VLAN ID for the SEA. In this case it is set to the VLAN ID of the virtual interface (the virtual interface in this example does not have 802.1q enabled).
›››   The -default ent1 in the previous command refers to the default virtual interface for untagged packets. In this case we have only one virtual interface associated with this SEA.
Create a disk mapping from hdisk7 to vhost2 and call it wd_c1_hd7
mkvdev -vdev hdisk7 -vadapter vhost2 -dev wd_c1_hd7

Remove a virtual target device (disk mapping) named vtscsi0
rmvdev -vtd vtscsi0

• Additional mkvdev information







PowerVM QuickStart Series:

PowerVM QuickStart I-Overview
PowerVM QuickStart II-VIOS Setup & Management
PowerVM QuickStart III-Virtual Disk Setup & Management
PowerVM QuickStart IV-Virtual Network Setup & Management
PowerVM QuickStart V-VIOS Device Management
PowerVM QuickStart VI-Key VIOS Commands
PowerVM QuickStart VII-Advanced VIOS Management