
Friday, 28 March 2014

A Simple Way to Send Multiple Line Commands Over SSH


Below are three methods for sending multi-line commands over SSH. The first is a quick overview of running remote commands over SSH, the second uses the bash command to run remote commands over SSH, and the third uses HERE documents. Each has limitations, which I will cover.


Running Remote Commands Over SSH

To run one command on a remote server over SSH:
ssh $HOST ls
To run two commands on a remote server over SSH:
ssh $HOST 'ls; pwd'
To run three or more commands on a remote server over SSH, keep appending commands with semicolons inside the single quotes.

But, what if you want to remotely run many more commands, or if statements, or while loops, etc., and make it all readable?
#!/bin/bash
ssh $HOST '
ls

pwd

if true; then
    echo "This is true"
else
    echo "This is false"
fi

echo "Hello world"
'
The above shell script works but begins to break if local variables are added.

For example, the following shell script will run, but the local variable HELLO will not be expanded inside the remote if statement:
#!/bin/bash

HELLO="world"

ssh $HOST '
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"
'
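The reason is ordinary shell quoting: everything inside single quotes is passed to the remote side verbatim, so the remote shell sees the literal text $HELLO, and no such variable exists there. A minimal local sketch of the difference, no ssh required:

```shell
#!/bin/bash

HELLO="world"

# Single quotes: the text is passed along verbatim; $HELLO survives as-is.
echo 'remote sees: $HELLO'
# prints: remote sees: $HELLO

# Double quotes: the local shell expands $HELLO before anything else runs.
echo "remote sees: $HELLO"
# prints: remote sees: world
```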
To have the local variable HELLO expanded for use in the remote if statement, read on to the next section.

Using SSH with the BASH Command

As mentioned above, in order to expand the local variable HELLO for use in the remote if statement, the bash command can be used:
#!/bin/bash

HELLO="world"

ssh $HOST bash -c "'
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"
'"
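A handy way to check what the remote side will actually receive is to print the argument instead of running ssh. Because the outer quotes are double quotes, the local shell expands $HELLO before anything is sent; the single quotes travel along as literal characters. A local sketch:

```shell
#!/bin/bash

HELLO="world"

# Substitute printf for ssh to inspect the argument after local expansion.
# The double quotes let the local shell expand $HELLO; the single quotes
# are just literal characters inside them.
printf '%s\n' "'
echo $HELLO
'"
# prints:
# '
# echo world
# '
```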
Perhaps you want to use a remote sudo command within the shell script:
#!/bin/bash

HELLO="world"

ssh $HOST bash -c "'
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"

sudo ls /root
'"
When the above shell script is run, everything works as intended until the remote sudo command, which throws the following error:
sudo: sorry, you must have a tty to run sudo
This error is thrown because the remote sudo command prompts for a password, which needs an interactive tty/shell. To force pseudo-tty allocation, add the -t command line switch to the ssh command:
#!/bin/bash

HELLO="world"

ssh -t $HOST bash -c "'
ls

pwd

if true; then
    echo $HELLO
else
    echo "This is false"
fi

echo "Hello world"

sudo ls /root
'"
With a pseudo-interactive tty/shell available, the remote sudo command's password prompt is displayed, the password can be entered, and the contents of the remote root home directory are listed.

However, I recently needed to run two specific remote sed commands over SSH: one to find a line and delete it along with the subsequent three lines, and another to find a line and insert a line of text above it. I naturally tried the bash method described above:
#!/bin/bash

ssh $HOST bash -c "'
cat << EOFTEST1 > /tmp/test1
line one
line two
line three
line four
EOFTEST1

cat << EOFTEST2 > /tmp/test2
line two
EOFTEST2

sed -i -e '/line one/,+3 d' /tmp/test1

sed -i -e '/^line two$/i line one' /tmp/test2
'"
Every time I ran the above shell script, I got the following error:
sed: -e expression #1, char 5: unterminated address regex
However, the same commands work when run by themselves:
ssh $HOST "sed -i -e '/line one/,+3 d' /tmp/test1"
ssh $HOST "sed -i -e '/^line two$/i line one' /tmp/test2"
I thought the problem might be single quotes within single quotes. The bash method above requires everything to be wrapped in single quotes, and a sed command requires its regular expression to be wrapped in single quotes as well. As the BASH manual states, "a single quote may not occur between single quotes, even when preceded by a backslash".
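For reference, when a literal single quote must appear inside a single-quoted string, the standard workaround is to close the string, add an escaped quote, and reopen it:

```shell
# The '\'' idiom: end the quoted string, emit \' (an escaped quote),
# then start a new quoted string; the pieces concatenate into one word.
echo 'it'\''s quoted'
# prints: it's quoted
```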

However, I initially ruled out this single-quote theory because running a simple remote sed search-and-replace command inside the bash command worked just fine:
#!/bin/bash

ssh $HOST bash -c "'

echo "Hello" >> /tmp/test3

sed -i -e 's/Hello/World/g' /tmp/test3
'"
The difference turns out to be the space inside the expression. With the nested quoting, the inner single quotes actually terminate and reopen the outer single-quoted string, so they protect nothing: the space in '/line one/' is unquoted, the remote shell splits the argument there, and sed receives only '/line', hence the unterminated address regex error. The s/Hello/World/g expression contains no spaces, so it survives the word splitting intact.
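The failure can be reproduced locally, without ssh, by handing bash the same nested-quoted script (the extra double-quote layer that the local shell strips before ssh sends the command is omitted here). This is only a sketch of the quoting behavior:

```shell
#!/bin/bash

printf 'line one\nline two\nline three\nline four\n' > /tmp/test1

# The quotes around the sed expression close and reopen the outer quoted
# string, so the space in '/line one/' is unquoted and splits the words:
# sed ends up being called with just "/line" as its expression.
bash -c '
sed -i -e '/line one/,+3 d' /tmp/test1
'
# sed: -e expression #1, char 5: unterminated address regex
```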

Despite all this, I eventually figured out that the specific remote sed commands I wanted to run would work when using SSH with HERE documents.

Using SSH with HERE Documents

As mentioned above, the specific remote sed commands I wanted to run did work when using SSH with HERE documents:
ssh $HOST << EOF
cat << EOFTEST1 > /tmp/test1
line one
line two
line three
line four
EOFTEST1

cat << EOFTEST2 > /tmp/test2
line two
EOFTEST2

sed -i -e '/line one/,+3 d' /tmp/test1

sed -i -e '/^line two$/i line one' /tmp/test2
EOF
Despite the remote sed commands working, the following warning message was thrown:
Pseudo-terminal will not be allocated because stdin is not a terminal.
To stop this warning message from appearing, add the -T command line switch to the ssh command to disable pseudo-tty allocation (a pseudo-terminal can never be allocated when using HERE documents because it is reading from standard input):
ssh -T $HOST << EOF
cat << EOFTEST1 > /tmp/test1
line one
line two
line three
line four
EOFTEST1

cat << EOFTEST2 > /tmp/test2
line two
EOFTEST2

sed -i -e '/line one/,+3 d' /tmp/test1

sed -i -e '/^line two$/i line one' /tmp/test2
EOF
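A note on variables with HERE documents: with an unquoted delimiter (<< EOF), the local shell expands variables before the text is sent to the remote side, while a quoted delimiter (<< 'EOF') passes the text through verbatim. A local sketch using bash in place of ssh:

```shell
#!/bin/bash

HELLO="world"

# Unquoted delimiter: the local shell expands $HELLO before bash reads it.
bash << EOF
echo "Hello $HELLO"
EOF
# prints: Hello world

# Quoted delimiter: the body is passed through verbatim.
bash << 'EOF'
echo 'Hello $HELLO'
EOF
# prints: Hello $HELLO
```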
With this working, I later discovered that remote sudo commands that prompt for a password will not work with HERE documents over SSH.
ssh $HOST << EOF
sudo ls /root
EOF
The above ssh command will throw the following error if the SSH user you are logging in as requires a password for the remote sudo command:
Pseudo-terminal will not be allocated because stdin is not a terminal.
user@host's password: 
sudo: no tty present and no askpass program specified
However, the remote sudo command will work if the SSH user's sudo settings allow that user to use sudo without a password, for example by setting user ALL=(ALL) NOPASSWD:ALL in /etc/sudoers.

References

What’s the Cleanest Way to SSH and Run Multiple Commands in Bash?
Chapter 19. Here Documents

Saturday, 11 January 2014

Practical Guide to AIX "Logical Volume (LV)" Management


Logical Volume Types:

    - journaled file systems: (jfs/jfs2)
    - log logical volume: used by jfs/jfs2
    - dump logical volume: used by system dump to copy selected areas of kernel data when an unexpected system halt occurs
    - boot logical volume: contains the initial information required to start the system
    - paging logical volume: used by the virtual memory manager to swap out pages of memory
    - raw logical volumes: controlled by the application (they do not use jfs/jfs2)

1) Logical Volume Creation:

mklv <vg> <# of PP's> <pv>
mklv -y <lv name> <vg> <# of PP's> <pv>
## Create a mirrored named logical volume
mklv -y <lv> -c <copies 2 or 3> <vg> <# of PP's> <pv>
## create a JFSlog logical Volume
mklv -y <lv name> -t jfslog <vg> <# of PP's> <pv>

2)List/Display Logical Volume:

lslv lvname       displays information about the logical volume
lslv -l lvname    displays on which physical volumes the lv resides
lslv -p <hdisk>   displays the logical volume allocation map for the disk (shows used, free, stale for each physical partition)
lslv -p <hdisk> <lv> displays the same as above, but the given lv's partitions are shown by number

    Open          Indicates active if LV contains a file system  
    Closed        Indicates inactive if LV contains a file system  
    Syncd         Indicates that all copies are identical  
    Stale         Indicates that copies are not identical  
lsvg  -l rootvg                              Display info about all LVs in rootvg
lsvg -o |lsvg -il                            Display info about all LVs in all VGs
lslv -m lvname    displays the logical partitions (LP) and their corresponding physical partitions (PP)

  • MAX LPs: If the LV is larger than 512 MB (128 LPs * 4 MB), this field must be raised. The formula is: (LV size in MB) / (PP size in MB) = MAX LP count, e.g. 900 MB / 4 MB = 225. The command to change the LP count to 225 for an LV named pick is: chlv -x 225 pick
  • COPIES:
    • value is 1= original copy
    • value is 2= first mirrored copy
    • value is 3 = second mirrored copy
  • STALE PPs: If COPIES > 1 and STALE PPs > 0, means that a mirrored LP is not available or current with other LPs.
  • INTER-POLICY:
    • If set to MIN, an LV will only reside on 1 drive
    • If set to MAX, an LV can span multiple PVs, distributing the LV among more than 1 PV
  • INTRA-POLICY: Has 3 values (edge, middle, center). When an LV is created, it will be assigned 1 of the 3 allocation strategies listed above.
  • EFFICIENCY: Represents the efficiency with which PPs are allocated, based on the 3 possible states of the intra-policy.
  • RELOCATABLE: If yes, the 'reorgvg' command is allowed to move the LV to a new position on the current PV or place it on another PV.
  • SCHEDULING POLICY: If set to PARALLEL, ensures writes to mirrored copies are performed to separate PVs in parallel.
  • WRITE-VERIFY: If set to NO, no follow-up read is performed after each write for verification.
  • MIRROR WRITE CONSISTENCY: When enabled, incurs roughly a 20% performance penalty. Use the syncvg -v command to resync disk drives in a VG that loses a PV.

3) Changing Logical Volume:

"chlv" is the command to make changes to an existing logical volume.

# Enable the bad-block relocation policy
chlv -b [y|n] <lv>
To change the type of logical volume lv03, enter:
chlv -t copy lv03
(i) Extend/Increase Logical Volume:
extendlv <lv> <additional # of PP's>
extendlv <lv> <size of volume in B|M|G>

(ii) Reduce Logical Volume:
There is no direct command; you can use the file system command "chfs" to reduce the lv indirectly.

4)Move/Migrate Logical Volume:

migratepv -l <lv> <old pv> <new pv>  ==> moves the lv's lp-to-pp mappings from one pv to another pv
migratelp testlv/1/2 hdisk5/123   ==> migrates the second copy of testlv's first lp to hdisk5, pp 123
         (the output of lspv -M hdiskX can be used; the sequence lvname/lpnumber/copy is required)
         (if the lv is not mirrored, this is easier: migratelp testlv/1 hdisk3)
         (if it is mirrored and the above command is used, the first copy is moved: testlv/1/1...)

5)Adding a mirror copy to an un-mirrored LV:

mklvcopy -s n <lv> <copies 2 or 3> <pv>

6)Mirroring ALL logical Volumes in a VG:

mirrorvg <vg> <pv>

7)Removing a mirror copy from a mirrored LV:

  rmlvcopy <lv> <copies 1 or 2>
  rmlvcopy <lv> <copies 1 or 2> <pv>      (specified pv)
  unmirrorvg <vg> <pv>

8)Synchronize logical volume:

syncvg -p <pv>
syncvg -v <vg>
syncvg -l <lv>

9)Renaming logical volume:

chlv -n <new lv name> <old lv name>

Changes the name of the LV from lv00 to informixlv. If a filesystem is mounted on top of the LV, the filesystem must be unmounted and the LV must be in a closed state for this command to work.
For a more detailed explanation, you may refer to the "renaming logical volume" post.

10)Remove Logical Volume:

rmlv <lv>

11)Display the LVCB:

LVCB (Logical Volume Control Block)
The LVCB is the first 512 bytes of each logical volume in normal VGs (in big VGs it is partially moved into the VGDA, and in scalable VGs completely); traditionally it was the filesystem boot block. The LVCB stores the attributes of the LV. JFS does not access this area.
# getlvcb -AT <lvname>                  <--shows the LVCB of the lv

12)Changing Maximum Number of LPs in LV:

Sometimes, when you try to increase a file system or extend a logical volume, you may get an error like the one below.

Error Message:  0516-787 extendlv: Maximum allocation for logical volume <LV_Name> is 512.
Maximum number of LPs for the logical volume has been exceeded - must increase the allocation
Calculate the number of LPs needed = LV Size in MB / LP size in MB
chlv -x <new_max_lps> <logical_volume>
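The calculation can be scripted; a minimal sketch with hypothetical sizes (a 900 MB logical volume on 4 MB PPs, matching the example earlier in this post):

```shell
#!/bin/bash

# Hypothetical sizes for illustration only.
lv_size_mb=900
pp_size_mb=4

# Round up so a partial partition still needs a whole LP.
new_max_lps=$(( (lv_size_mb + pp_size_mb - 1) / pp_size_mb ))

echo "chlv -x $new_max_lps <logical_volume>"
# prints: chlv -x 225 <logical_volume>
```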

13)Copy Logical Volume:

Copy the contents from one LV to a new or existing LV
i)To copy the contents of logical volume fslv03 to a new logical volume, type:
cplv fslv03
The new logical volume is created, placed in the same volume group as fslv03, and named by the system.
ii) To copy the contents of logical volume fslv03 to a new logical volume in volume group vg02, type:
cplv -v vg02 fslv03    (fslv03 is the source logical volume name; it is a mandatory field)
The new logical volume is created, named, and added to volume group vg02.
iii)To copy the contents of logical volume lv02 to a smaller, existing logical volume, lvtest, without requiring user confirmation, type:
cplv  -e lvtest  -f lv02

14) Miscellaneous LV Commands:

Resynchronizing a logical volume:
1. root@umaix: / # lslv hd6 | grep IDENTIFIER
LV IDENTIFIER:      00c2a5b400004c0000000128f907d534.2
2. lresynclv -l 00c2a5b400004c0000000128f907d534.2

Tuesday, 7 January 2014

Rename & Moving AIX Logical Volume

I would like to discuss two scenarios here.



I. Renaming LV in the Same Volume Group:

Steps:

Let us consider that we need to rename testlv_old to testlv_new.
1: Unmount the FS associated with testlv_old (let us say /testfs)
 #umount /testfs
2: Rename the logical volume
# chlv -n testlv_new testlv_old
3: Verify that the dev parameter of the filesystem stanza in /etc/filesystems matches the new name of the logical volume
4: Note: if you rename a JFS log, you will be prompted to run chfs on all filesystems that use the renamed log
5: Remount the filesystem
# mount /testfs
6: Verification
# df -k /testfs
# ls -ltr /testfs
# cd /testfs, then create a file using the touch command

II. Moving LV  to different Volume Group with Different Name

Assumptions:

The "testlv_old" logical volume belongs to "testvg1" and is mounted on "/testfs"
You need to create the same mount point "/testfs" with a different logical volume name "testlv_new" on a different vg "testvg2"

Steps:

Pre-Checks:
[root@umlapr root]# lsvg -l testvg1
testvg1:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
testlv_old          jfs2       702     702    1    open/syncd    /testfs

[root@umlapr root]# ls -ld /testfs
drwxr-xr-x   8 tadmin  tgroup   4096 Sep 19 09:57 /testfs

[root@umlapr root]#df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/testlv_old   351.00    127.58   64%      143     1% /testfs
[root@umlapr root]
1: Take a backup of /testfs either locally or with whichever backup tool you use (e.g. TSM/NetBackup)
  If you are taking the backup locally, make sure you use the '-p' option with the 'cp' command
  #cp -pr /testfs/*  /backup/testfs_back/
2: Unmount the FS associated with testlv_old, here "/testfs"
  #umount /testfs
3: Rename the mount point stanza in /etc/filesystems
/testfs:
        dev             = /dev/testlv_old
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false
      
to
/testfs_old:
        dev             = /dev/testlv_old
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false
4: Create the new lv "testlv_new"
#mklv -y <lv name> -t <fs type> <vg> <# of PP's>
#mklv -y testlv_new -t jfs2 testvg2 702   (702 may change depending on the PP size of the new vg "testvg2"; here we assumed both testvg1 and testvg2 have the same PP size)
5: Create Filesystem with name "/testfs"
#crfs -v jfs2 -d <lv> -m <mountpoint> -A yes
#crfs -v jfs2 -d testlv_new -m /testfs -A yes
The above command creates an entry in /etc/filesystems like the one below:
/testfs:
        dev             = /dev/testlv_new
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false
6: Mount the filesystem /testfs
#mount /testfs
7: Change the ownership and permissions on the mount point
  #chown -R tadmin:tgroup /testfs
  #chmod 775 /testfs
8: Restore the backup data taken previously
If you took the backup locally:
#cp -pr /backup/testfs_back/* /testfs
9: validate lv and fs
[root@umlapr root]# lsvg -l testvg2
testvg2:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
testlv_new          jfs2       702     702    1    open/syncd    /testfs

[root@umlapr root]# df -g /testfs
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/testlv_new    351.00    127.58   64%      143     1% /testfs
[root@umlapr root]

That's it. I hope this article helps you.

Saturday, 28 December 2013

Practical Guide to AIX "Volume Group Management"

Folks, I am going to discuss practical examples and useful real-world commands for AIX Volume Group Management.




1)Volume Group Creation:

mkvg -y <vg> -s <PP size> <pv>  (normal volume group)
mkvg -y datavg -s 4 hdisk1

Use the options below to create Big and Scalable volume groups.

-B Creates a Big-type volume group
-S Creates a Scalable-type volume group.

Note: the PP size is the physical partition size you want: 1, 2, (4), 8, 16, 32, 64, 128, 256, 512, or 1024 MB
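Since a normal VG allows at most 1016 PPs per physical volume, the PP size must be large enough for the disk to fit within that limit. A sketch of the arithmetic, with a hypothetical 70000 MB disk:

```shell
#!/bin/bash

# Hypothetical disk size for illustration only.
disk_mb=70000

# Find the smallest power-of-two PP size (in MB) that keeps the disk
# within the 1016 PPs-per-PV limit of a normal VG.
pp=1
while [ $(( disk_mb / pp )) -gt 1016 ]; do
    pp=$(( pp * 2 ))
done

echo "smallest usable PP size: ${pp} MB"
# prints: smallest usable PP size: 128 MB
```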

2) List/Display Volume Group:

lsvg
lsvg <vg> (detailed)
lsvg -l <vg> (list all logical volumes in group)
lsvg -p <vg> (list all physical volumes in group)
lsvg -o (lists all varied on)
lsvg -M <vg> (lists all PV, LV, PP details of a vg (PVname:PPnum LVname: LPnum :Copynum))
lsvg -o | lsvg -ip        lists pvs of online vgs
lsvg -o | lsvg -il        lists lvs of online vgs
lsvg -n <hdisk>           shows vg info, but read from the VGDA on the specified disk (useful to compare against other disks)

## Details volume group info for the hard disk
lqueryvg -Atp <pv>
lqueryvg -p <disk> -v (Determine the VG ID# on disk)
lqueryvg -p <disk> -L (Show all the LV ID#/names in the VG on disk)
lqueryvg -p <disk> -P (Show all the PV ID# that reside in the VG on disk)

3)Extending Volume Group:

#extendvg <vg> <pv>
#extendvg myvg hdisk5

4)Reducing Volume Group:

#reducevg -d <vg> <pv>
## removes the PVID from the VGDA when a disk has vanished without using the reducevg command
#reducevg <vg> <PVID>

5) Mirror Volume Group:

We can mirror in AIX using the mirrorvg command, and we can create a maximum of three mirror copies.

Suppose rootvg has two PVs, with the data and OS installed on hdisk0, and we want to mirror hdisk0 to hdisk1. The command will be:
# mirrorvg -S -m rootvg hdisk1

-S – background synchronization (mirrorvg returns immediately)
-m – exact map (mirror copies are placed exactly)
NOTE: in a mirrored VG, quorum should be disabled, because quorum is not recommended for mirrors.

6)Un-Mirror Volume Group: 

Using the unmirrorvg command we can unmirror the VG:
#unmirrorvg rootvg hdisk1
PV hdisk1 is removed from the rootvg mirror.

7)Synchronize Volume Group:

Using the syncvg command we can sync mirrored VG and LV copies.

To sync an lv copy:
#syncvg -l lvname

#syncvg -l testlv
After executing the above command, the testlv copies are synchronized.

To sync the mirrored PV's:
#syncvg -v rootvg
The above syncs the mirrored PV's in rootvg.

8) Un-Lock Volume Group:

# chvg -u <vgname>          unlocks the volume group (if a command core dumped, or the system crashed, the vg can be left in a locked state)
(Many LVM commands place a lock into the ODM to prevent other commands from working on the vg at the same time.)

9)Re-Organise Volume Group:

# reorgvg   <vgname>
rearranges physical partitions within the vg to conform with the placement policy (outer edge...) for the lv.
(For this 1 free pp is needed, and the relocatable flag for lvs must be set to 'y': chlv -r...)

10) Varyon Volume Group:

This activates a VG; sometimes clients deactivate a VG for project restrictions, and afterwards we want to activate the VG again for further data access.

Suppose we want to activate testvg; then proceed as below:
#lsvg
rootvg
datavg
testvg
The above command shows which VG's are available.
#lsvg –o
rootvg
datavg
The above command shows only the online (active) VG's. testvg is offline, so we have to activate it using "varyonvg"; this enables us to mount the filesystems that were created on top of testvg.

#varyonvg testvg

#lsvg –o
rootvg
datavg
testvg
Now the above command displays testvg as well.

11) Varyoff Volume Group:

This deactivates a VG; some clients want to deactivate a VG for project restrictions. Suppose the customer wants to deactivate testvg; the commands will be:
#lsvg –o
rootvg
datavg
testvg

#varyoffvg testvg

#lsvg –o
rootvg
datavg
The above command displays only two online VG's; it does not show testvg because testvg is now offline.

12) Rename Volume Group:

#varyoffvg <old vg name>
#lsvg -p <old vg name> (obtain disk names)
#exportvg <old vg name>
#importvg -y <new vg name> <pv>
#varyonvg <new vg name>
#mount -a

13) Exporting Volume Group:

Using the exportvg command we can export a VG (including all its PV's) from one server to another.

Say ServerA has datavg with two PV's, and we want to export datavg to ServerB.

Before exporting datavg, we should vary it off, i.e. take datavg offline.
#varyoffvg datavg (varies off datavg)
#exportvg datavg (VG information is removed from the ODM)
Now datavg is exported from ServerA; run the following command to verify the export:
#lsvg
It won't show the datavg name, because datavg has been exported.

Then you should remove the PV's from the configuration:
#rmdev -dl hdisk3
#rmdev -dl hdisk4
After that we can physically move the PV's from ServerA to ServerB to import datavg there.

14)Importing Volume Group:

Using the importvg command we can import datavg on ServerB.

First connect hdisk3 and hdisk4 to ServerB, then run:
#cfgmgr (for hard disk detection)
Then check whether the PV's are present using the lspv command:
#lspv (displays the installed PV's); if hdisk3 and hdisk4 are listed, the PV's are configured properly.
Then run the importvg command to import datavg (only one disk of the VG needs to be named; the others are found from the VGDA):
#importvg -y datavg hdisk3 (VG information is added to the ODM)
NOTE: If ServerB already has a VG with the same name datavg, we can rename the VG while importing:
#importvg -y newdatavg hdisk3
Like this we can import.

After importing datavg, we do not need to vary it on; it is varied on automatically during the import.

15)Removing Volume Group:

#varyoffvg <vg>
#exportvg <vg>
Note: the exportvg command removes everything regarding the volume group from the ODM and /etc/filesystems

16) Check Volume Group Type:

Run the lsvg command on the volume group and look at the value for MAX PVs. The value is 32 for normal, 128 for big, and 1024 for scalable volume group.
VG type     Maximum PVs    Maximum LVs    Maximum PPs per VG    Maximum PP size
Normal VG     32              256            32,512 (1016 * 32)      1 GB
Big VG        128             512            130,048 (1016 * 128)    1 GB
Scalable VG   1024            4096           2,097,152               128 GB
If a physical volume is part of a volume group, it contains 2 additional reserved areas. One area contains both the VGSA and the VGDA, and this area is started from the first 128 reserved sectors (blocks) on the disk. The other area is at the end of the disk, and is reserved as a relocation pool for bad blocks.

17)Changing Normal VG to Big VG:

If you have reached the MAX PV limit of a Normal VG and adjusting the factor (chvg -t) is no longer possible, you can convert it to a Big VG.

It is an online activity, but there must be free PPs on each physical volume, because VGDA will be expanded on all disks:
root@um-lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         23          00..00..00..00..23
hdisk4            active            1023        0           00..00..00..00..00

root@um-lpar: / # chvg -B bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk4 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        2 partitions and run chvg again.

In this case we have to migrate 2 PPs from hdisk4 to hdisk3 (so 2 PPs will be freed up on hdisk4):

root@um-lpar: / # lspv -M hdisk4
hdisk4:1        bblv:920
hdisk4:2        bblv:921
hdisk4:3        bblv:922
hdisk4:4        bblv:923
hdisk4:5        bblv:924
...

root@um-lpar: / # lspv -M hdisk3
hdisk3:484      bblv:3040
hdisk3:485      bblv:3041
hdisk3:486      bblv:3042
hdisk3:487      bblv:1
hdisk3:488      bblv:2
hdisk3:489-511

root@um-lpar: / # migratelp bblv/920 hdisk3/489
root@um-lpar: / # migratelp bblv/921 hdisk3/490

root@um-lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         21          00..00..00..00..21
hdisk4            active            1023        2           02..00..00..00..00

If we try changing to a Big VG again, it now succeeds:
root@um-lpar: / # chvg -B bbvg
0516-1216 chvg: Physical partitions are being migrated for volume group
        descriptor area expansion.  Please wait.
0516-1164 chvg: Volume group bbvg2 changed.  With given characteristics bbvg2
        can include up to 128 physical volumes with 1016 physical partitions each.

If you check again, the freed-up PPs have been used:
root@um-lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            509         0           00..00..00..00..00
hdisk3            active            509         17          00..00..00..00..17
hdisk4            active            1021        0           00..00..00..00..00

18)Changing Normal (or Big) VG to Scalable VG:

If you have reached the MAX PV limit of a Normal or a Big VG and adjusting the factor (chvg -t) is no longer possible, you can convert that VG to a Scalable VG. A Scalable VG allows a maximum of 1024 PVs and 4096 LVs, with the big advantage that the maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis.

!!! Converting to a Scalable VG is an offline activity (varyoffvg), and there must be free PPs on each physical volume, because the VGDA will be expanded on all disks.
root@um-lpar: / # chvg -G bbvg
0516-1707 chvg: The volume group must be varied off during conversion to
        scalable volume group format.

root@um-lpar: / # varyoffvg bbvg
root@um-lpar: / # chvg -G bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk2 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        18 partitions and run chvg again.


After migrating some lps to free up the required PPs (in this case 18), the change to Scalable VG succeeds:
root@um-lpar: / # chvg -G bbvg
0516-1224 chvg: WARNING, once this operation is completed, volume group bbvg
        cannot be imported into AIX 5.2 or lower versions. Continue (y/n) ?
...
0516-1712 chvg: Volume group bbvg changed.  bbvg can include up to 1024 physical volumes with 2097152 total physical partitions in the volume group.

19) Check VGDA (Volume Group Descriptor Area):

It is an area on the hard disk (PV) that contains information about the entire volume group. There is at least one VGDA per physical volume (one or two copies per disk). It contains the physical volume list (PVIDs), the logical volume list (LVIDs), and the physical partition map (which maps lps to pps).
# lqueryvg -tAp hdisk0                                <--look into the VGDA (-A:all info, -t: tagged, without it only numbers)
Max LVs:        256
PP Size:        27                                    <--exponent of 2: 2^7 = 128 MB
Free PPs:       698
LV count:       11
PV count:       2
Total VGDAs:    3
Conc Allowed:   0
MAX PPs per PV  2032
MAX PVs:        16
Quorum (disk):  0
Quorum (dd):    0
Auto Varyon ?:  1
Conc Autovaryo  0
Varied on Conc  0
Logical:        00cebffe00004c000000010363f50ac5.1   hd5 1       <--1: count of mirror copies (00cebff...c5 is the VGID)
                00cebffe00004c000000010363f50ac5.2   hd6 1
                00cebffe00004c000000010363f50ac5.3   hd8 1
                ...
Physical:       00cebffe63f500ee                2   0            <--2:VGDA count 0:code for its state (active, missing, removed)
                00cebffe63f50314                1   0            (The sum of VGDA count should be the same as the Total VGDAs)
Total PPs:      1092
LTG size:       128
...
Max PPs:        32512

20)Mirroring rootvg (after disk replacement):

1. disk replaced -> cfgmgr           <--it will find the new disk (i.e. hdisk1)
2. extendvg rootvg hdisk1            <--sometimes extendvg -f rootvg...
(3. chvg -Qn rootvg)                 <--only if quorum setting has not yet been disabled, because this needs a restart
4. mirrorvg -s rootvg                <--add mirror for rootvg (-s: synchronization will not be done)
5. syncvg -v rootvg                  <--synchronize the new copy (lsvg rootvg | grep STALE)
6. bosboot -a                        <--we changed the system so create boot image (-a: create complete boot image and device)
                                     (hd5 is mirrored, no need to do it for each disk, i.e. bosboot -ad hdisk0)
7. bootlist -m normal hdisk0 hdisk1  <--set normal bootlist
8. bootlist -m service hdisk0 hdisk1 <--set bootlist when we want to boot into service mode
(9. shutdown -Fr)                    <--this is needed if quorum has been disabled
10.bootinfo -b                       <--shows the disk  which was used for boot

21)Miscellaneous VG Commands:

getlvodm -j <hdisk>       get the vgid for the hdisk from the odm
getlvodm -t <vgid>        get the vg name for the vgid from the odm
getlvodm -v <vgname>      get the vgid for the vg name from the odm
getlvodm -p <hdisk>       get the pvid for the hdisk from the odm
getlvodm -g <pvid>        get the hdisk for the pvid from the odm
lqueryvg -tcAp <hdisk>    get all the vgid and pvid information for the vg from the vgda (directly from the disk)
                          (you can compare the disk with odm: getlvodm <-> lqueryvg)
synclvodm <vgname>        synchronizes or rebuilds the lvcb, the device configuration database, and the vgdas on the physical volumes
redefinevg                it helps regain the basic ODM information if it is corrupted (redefinevg -d hdisk0 rootvg)
readvgda hdisk40          shows details from the disk

Thursday, 26 December 2013

Practical Guide To AIX "Paging Space Management"

In this article we are going to discuss how to manage paging space in the AIX OS.


1. Create/Add Paging Space

To add paging space we use the "mkps" command. The equivalent smitty fastpath is "smitty mkps".
An entry is added to the "/etc/swapspaces" file when you create a paging space with the "-a" option.

mkps [ -t lv | [ps_helper psname] ] [ -a ] [ -n ] [ -c ChksumSize ] -s LogicalPartitions VolumeGroup [ PhysicalVolume ]

Eg:

To create a paging space in volume group myvg that has four logical partitions and is activated immediately and at all subsequent system restarts, enter:
#mkps  -a  -n  -s4 myvg

  • These are the "man mkps" options:

    Item Description
    -a Specifies that the paging space is configured at subsequent restarts.
    -c Specifies the size of the checksum to use for the paging space, in bits. Valid options are 0 (checksum disabled), 8, 16 and 32. If -c is not specified it will default to 0.
    -n Activates the paging space immediately.
    -s LogicalPartitions Specifies the size of the paging space and the logical volume to be made in logical partitions.
    -t Specifies the type of paging space to be created. One of the following variables is required: 
    lv
    Specifies that a paging space of type logical volume should be created on the system.
    nfs
    Specifies that a paging space of type NFS should be created on the system.
    ps_helper
    Name of the helper program for a third party device.
    psname
    Name of the device entry for paging space.

Example /etc/swapspaces file:

# cat /etc/swapspaces
* /etc/swapspaces
*
* This file lists all the paging spaces that are automatically put into
* service on each system restart (the 'swapon -a' command executed from
* /etc/rc swaps on every device listed here).
*
* WARNING: Only paging space devices should be listed here.
*
* This file is modified by the chps, mkps and rmps commands and referenced
* by the lsps and swapon commands.

hd6:
        dev = /dev/hd6

paging00:
        dev = /dev/paging00

paging01:
        dev = /dev/paging01
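The stanza file above is what 'swapon -a' conceptually walks. A minimal sketch of extracting the device paths from an /etc/swapspaces-style file (using a temporary copy of the sample so it runs anywhere, and assuming the stanza layout shown above):

```shell
# Build a sample stanza file like the /etc/swapspaces shown above.
tmpfile=$(mktemp)
cat > "$tmpfile" <<'EOF'
* comment line
hd6:
        dev = /dev/hd6

paging00:
        dev = /dev/paging00
EOF

# Pull out every "dev = ..." value, i.e. the paging devices to activate.
devices=$(awk -F'= *' '/dev *=/ { print $2 }' "$tmpfile")
echo "$devices"
rm -f "$tmpfile"
```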

2. List Paging Space:

The lsps command displays the characteristics of paging spaces, such as the paging space name, physical volume name, volume group name, size, percentage of the paging space used, whether the space is active or inactive, and whether the paging space is set to automatic.

The equivalent smitty fastpath is "smitty lsps".

lsps { -s | [ -c | -l ] { -a | -t { lv | nfs | ps_helper} | PagingSpace } }

The following examples show the use of lsps command with various flags to obtain the paging space information. The -c flag will display the information in colon format and paging space size in physical partitions.
# lsps -a -c
#Psname:Pvname:Vgname:Size:Used:Active:Auto:Type
paging00:hdisk1:rootvg:20:1:y:y:lv
hd6:hdisk1:rootvg:64:1:y:y:lv

# lsps -a
Page Space  Physical Volume   Volume Group    Size   %Used  Active  Auto  Type
paging00    hdisk1            rootvg          80MB       1      no   yes    lv
hd6         hdisk1            rootvg         256MB       1     yes   yes    lv

# lsps -s
Total Paging Space   Percent Used
      256MB               1%

To display the characteristics of paging space myps using the helper program foo, enter the following command:
lsps -t foo myps

This displays the characteristics for all paging spaces and provides a listing similar to the following listing:
Page Space      Physical Volume   Volume Group    Size %Used Active  Auto  Type
myps            mydisk            myvg           512MB     1   yes   yes    lv
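The colon-separated output of 'lsps -a -c' is convenient for scripting. A small sketch that totals the paging space (in logical partitions, as -c reports), fed here with the sample output captured earlier rather than a live lsps run:

```shell
# Sample 'lsps -a -c' output from the listing above; on AIX you would
# pipe lsps -a -c directly instead of using this captured text.
lsps_out='#Psname:Pvname:Vgname:Size:Used:Active:Auto:Type
paging00:hdisk1:rootvg:20:1:y:y:lv
hd6:hdisk1:rootvg:64:1:y:y:lv'

# Field 4 is Size (in logical partitions); skip the header line.
total_lps=$(echo "$lsps_out" | awk -F: 'NR > 1 { sum += $4 } END { print sum }')
echo "Total paging space: $total_lps logical partitions"
```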

3. Change Paging Space:

You can change only two attributes of a paging space logical volume: its size and whether it is activated at restart.

chps [ -s LogicalPartitions | -d LogicalPartitions ] [ -a { y | n } ] PagingSpace

Increasing Paging Space:

Eg:
#chps -s xx yyy (Where xx is the number of logical partitions to add and yyy identifies the logical volume)

#chps -s 10 hd6   ( adds 10 logical partitions to the logical volume hd6, which adds 1280 MB (LP size = 128 MB) to the paging space)

Decrease/Shrink Paging Space:

# chps -d 4 hd6
shrinkps: Temporary paging space paging00 created.
shrinkps: Dump device moved to temporary paging space.
shrinkps: Paging space hd6 removed.
shrinkps: Paging space hd6 recreated with new size.
shrinkps: Resized and original paging space characteristics differ,
check the lslv command output.

Note: You should not shrink the paging space below what the system requires; doing so can crash the system.
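Following that note, a pre-shrink sanity check can be sketched as below. The 25% threshold is an illustrative assumption, not an IBM recommendation, and the %Used value is hard-coded here where on AIX it would be parsed from lsps output:

```shell
used_pct=1           # on AIX: the %Used column from 'lsps -a' for this paging space
shrink_threshold=25  # illustrative cut-off, not an official figure

# Only proceed with 'chps -d' when current usage is comfortably low.
if [ "$used_pct" -lt "$shrink_threshold" ]; then
    decision="safe to shrink (chps -d ...)"
else
    decision="refusing to shrink: paging space ${used_pct}% used"
fi
echo "$decision"
```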

Activate or deactivate a paging space for the next reboot:

To define the paging00 paging space as configured and active at subsequent system restarts, enter:
chps -a y paging00

To define the paging00 paging space as inactive at subsequent system restarts, enter:
chps -a n paging00

4. Activate or deactivate a paging space:

"swapon" is the command used to activate currently defined but inactive paging spaces. When run with the -a flag, swapon reads the list of devices from the
"/etc/swapspaces" file.

swapon -a | devicename 

Examples:
To cause all devices present in the /etc/swapspaces file to be made available, enter:
swapon  -a

All devices present in the /etc/swapspaces file are now available.

To cause the /dev/paging03 and /dev/paging04 devices to be available for paging and swapping, enter:
swapon /dev/paging03 /dev/paging04

The /dev/paging03 and /dev/paging04 devices are now available.

In order to deactivate a paging space, you need to use the "swapoff" command.
swapoff DeviceName { DeviceName ...}

5. Remove paging space:

The rmps command removes an inactive paging space; before removing a paging space you must deactivate it with "swapoff".

rmps [ -t ps_helper ] PagingSpace

Examples:
To remove PS01 paging space, run the following command:
#rmps PS01

This removes the PS01 paging space.
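The deactivate-then-remove order described above can be sketched as a small helper. swapoff and rmps are mocked as logging stubs so the control flow runs off-AIX; the point being demonstrated is only that rmps must come after swapoff:

```shell
steps=""
swapoff() { steps="${steps}swapoff:$1 "; }   # mock: record instead of executing
rmps()    { steps="${steps}rmps:$1 "; }      # mock

remove_paging_space() {
    ps_name=$1
    swapoff "/dev/$ps_name"   # 1. make the paging space inactive
    rmps "$ps_name"           # 2. only then remove it
}

remove_paging_space PS01
echo "$steps"
```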


Thursday, 8 August 2013

Rename Disks-IBM AIX OS

Renaming Disks AIX OS

Scenario 1:

Sometimes disks will drop into the server in an unsatisfactory manner; that is to say, the naming of the disks will not be ideal.  Let's look at neatening that up.

You're working on a shiny new 9117-MMC in a dual VIOS (we'll call them VIOS1 and VIOS2) configuration.  You've installed VIOS1 from DVD and are now installing VIOS2.  You have already assigned the DVD drive to VIOS2 but, in doing so, have revealed the disks from VIOS1 to VIOS2.  (A reliable way to avoid this is to shut down VIOS1 and physically remove the disks that you want assigned to it.  That way, there's no way VIOS2 can see them when you bring it up.)  When the install is complete and you remove the adapter(s) which enabled VIOS2 to see the disks and DVD drive in VIOS1, you will be left with a configuration similar to below:

# lsdev -Cc disk
hdisk0 Defined 00-08-00 SAS Disk Drive
hdisk1 Defined 00-08-00 SAS Disk Drive
hdisk2 Available 00-08-00 SAS Disk Drive
hdisk3 Available 00-08-00 SAS Disk Drive
hdisk4 Available 00-08-00 SAS Disk Drive
hdisk5 Available 00-08-00 SAS Disk Drive

hdisk0 and hdisk1 are clearly the remnants of the disks from VIOS1 detected during the install of VIOS2.  We'll want to remove them so we're left with:
# lsdev -Cc disk
hdisk2 Available 00-08-00 SAS Disk Drive
hdisk3 Available 00-08-00 SAS Disk Drive
hdisk4 Available 00-08-00 SAS Disk Drive
hdisk5 Available 00-08-00 SAS Disk Drive
And then we can work on renumbering the disks.

Scenario 2:

In large clustered environments it is sometimes very important to have the same disk and network device names in sync across all nodes in a cluster. Besides, it's a lot easier to verify a cluster configuration if the hdisk names are all the same. Matching PVIDs works but it requires a lot more effort! For example, knowing that hdisk123 is the same device on all nodes makes life easier than scanning lspv output for a PVID like 00f6048868b4gead. Of course you can script things to make this easier, but it would be great if you didn't need to, and if there was a way to rename devices as needed without resorting to unsupported methods.

There are two different ways to do this, depending on the operating system version.

1. Renaming devices prior to AIX 7.1

Let's remove hdisk0 and hdisk1:
# rmdev -l hdisk0 -dR
# rmdev -l hdisk1 -dR

We would now be left with:
# lspv
hdisk2          00c5538409a99b66                   rootvg          active
hdisk3
hdisk4
hdisk5

In order to put these names straight, we need to remove these disks also.  It's worth noting here that inevitably one of these four disks will be your root volume.  You can't remove or rename that one just yet.
# rmdev -l hdisk3 -dR
# rmdev -l hdisk4 -dR
# rmdev -l hdisk5 -dR

So now:
# lspv
hdisk2          00c5538409a99b66                   rootvg          active

Run cfgmgr:
# cfgmgr
# lspv
hdisk0          00c55384341c6e62                    None      
hdisk1          00cd55a4ae6b676f                    None      
hdisk2          00c5538409a99b66                    rootvg         active
hdisk3          00c553844356f733                    None      

Now you can mirror hdisk2 to hdisk0:
# extendvg rootvg hdisk0
# exit
$ mirrorios -defer hdisk0

When that completes:
# lspv
hdisk0          00c55384341c6e62                    rootvg         active
hdisk1          00cd55a4ae6b676f                    None    
hdisk2          00c5538409a99b66                    rootvg         active
hdisk3          00c553844356f733                    None

Further Juggling and Dump Movement

Sometimes you may want hdisk0 and hdisk1 to be in rootvg.  Here's how to accomplish that.
$ unmirrorios hdisk2

# lspv
hdisk0          00c55384341c6e62                    rootvg         active
hdisk1          00cd55a4ae6b676f                    None    
hdisk2          00c5538409a99b66                    rootvg    
hdisk3          00c553844356f733                    None

hdisk2 is now not mirrored so we can remove it from rootvg:
# reducevg rootvg hdisk2
<error>

This will fail because the sysdumpdev is still set to a volume on hdisk2.  We need to remove this and set it back up later on.

Check the size of the current dump space:
# lsvg -l rootvg | grep sysdump
lg_dumplv           sysdump    4       4       1    open/syncd    N/A

# sysdumpdev -e
0453-041 Estimated dump size in bytes: 233413017

Check the location of the dump device:
# sysdumpdev -l
primary              /dev/lg_dumplv
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    FALSE
dump compression     ON
type of dump         traditional

This shows us that there is a primary dump device called /dev/lg_dumplv and no secondary dump device.  Since that is the case, we will also add a secondary dump device in case the primary is not available.

Let's clear the dump configuration:
# sysdumpdev -Pp /dev/sysdumpnull
# sysdumpdev -Ps /dev/sysdumpnull

Check that:
# sysdumpdev -l
primary              /dev/sysdumpnull
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    FALSE
dump compression     ON
type of dump         traditional

Now we can remove hdisk2 from rootvg.  This may warn you that there is a volume on this device.  The volume will most likely be the dump volume.  If that's the case, you can carry on and remove it.  If it is any other volume then stop and find out what it is.

# reducevg rootvg hdisk2

Create two new volumes for sysdumps:
# mklv -t sysdump -y sysdump1 rootvg 4 hdisk0
# mklv -t sysdump -y sysdump2 rootvg 4 hdisk1

Configure the new dump devices:
# sysdumpdev -Pp /dev/sysdump1
# sysdumpdev -Ps /dev/sysdump2

Check that:
# sysdumpdev -l
primary              /dev/sysdump1
secondary            /dev/sysdump2
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    FALSE
dump compression     ON
type of dump         traditional

Now we want to add hdisk1 into rootvg and mirror:
# extendvg rootvg hdisk1
$ mirrorios -defer hdisk1

Set boot list:
$ bootlist -mode normal hdisk0 hdisk1

Since we have been deferring the restart, do it now:
$ shutdown -restart

Hopefully everything should come up fine and you should have this:
# lspv
hdisk0          00c55384341c6e62                    rootvg         active
hdisk1          00cd55a4ae6b676f                    rootvg         active    
hdisk2          00c5538409a99b66                    None    
hdisk3          00c553844356f733                    None

2. Renaming devices in AIX 7.1

Well, this is no longer an issue for AIX.

Starting with AIX 7.1, you can now easily rename devices. A new command called rendev was introduced to allow AIX administrators to rename devices as required.

From the man page:

The rendev command enables devices to be renamed. The device to be renamed, is specified with the -l flag, and the new desired name is specified with the -n flag.

The new desired name must not exceed 15 characters in length. If the name has already been used or is present in the /dev directory, the operation fails. If the name formed by appending  the new name after the character r is already used as a device name, or appears in the /dev directory, the operation fails.

 If the device is in the Available state, the rendev command must unconfigure the device before renaming it. This is similar to the operation performed by the rmdev -l Name command. If the unconfigure operation fails, the renaming will also fail. If the unconfigure succeeds, the rendev command will configure the device, after renaming it, to restore it to the Available state. The -u flag may be used to prevent the device from being configured again after it is renamed.

 Some devices may have special requirements on their names in order for other devices or applications to use them. Using the rendev command to rename such a device may result in the device being unusable. Note: To protect the configuration database, the rendev command cannot be interrupted once it has started. Trying to stop this command before completion, could result in a corrupted database.

Here are some examples of using the rendev command on an AIX 7.1 (GA) system. In the first example I will rename hdisk3 to hdisk300. Note: hdisk3 is not in use (not busy).

If the disk had been allocated to a volume group, I would have needed to varyoff the volume group first.

# lspv
hdisk0          00f61ab2f73e46e2                    rootvg          active
hdisk1          00f61ab20bf28ac6                    None
hdisk2          00f61ab2202f7c0b                    None
hdisk4          00f61ab20b97190d                    None
hdisk3          00f61ab2202f93ab                    None

# rendev -l hdisk3 -n hdisk300

# lspv
hdisk0          00f61ab2f73e46e2                    rootvg          active
hdisk1          00f61ab20bf28ac6                    None
hdisk2          00f61ab2202f7c0b                    None
hdisk4          00f61ab20b97190d                    None
hdisk300        00f61ab2202f93ab                    None

Easy!
Next, I’ll rename a virtual SCSI adapter. I renamed vscsi0 to vscsi2.
Note: I placed the adapter, vscsi0, in a defined state before renaming the device.
# rmdev -Rl vscsi0

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Defined    Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter

# rendev -l vscsi0 -n vscsi2

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

Now I’ll rename a network adapter from ent0 to ent10. I bring down the interface before changing the device name.
# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

# ifconfig en0
en0: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.1.20.19 netmask 0xffff0000 broadcast 10.153.255.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

# ifconfig en0 down detach

# rendev -l ent0 -n ent10

# lsdev -Cc adapter
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
ent10  Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi1 Available  Virtual SCSI Client Adapter
vscsi2 Defined    Virtual SCSI Client Adapter

# rendev -l en0 -n en10

# chdev -l en10 -a state=up
en10 changed

# mkdev -l inet0
inet0 Available

# ifconfig en10
en10: flags=1e080863,480<UP,BROADCAST,NOTRAILERS,RUNNING,SIMPLEX,MULTICAST,GROUPRT,64BIT,CHECKSUM_OFFLOAD(ACTIVE),CHAIN>
        inet 10.1.20.19 netmask 0xffff0000 broadcast 10.153.255.255
         tcp_sendspace 262144 tcp_recvspace 262144 rfc1323 1

If you want to be creative you can rename devices to anything you like (as long as it’s not more than 15 characters). For example I’ll rename vscsi2 to myvscsiadapter.
# rendev -l vscsi2 -n myvscsiadapter

# lsdev -Cc adapter
ent1           Available  Virtual I/O Ethernet Adapter (l-lan)
myadapter      Available  Virtual I/O Ethernet Adapter (l-lan)
myvscsiadapter Defined    Virtual SCSI Client Adapter
vsa0           Available  LPAR Virtual Serial Adapter
vscsi1         Available  Virtual SCSI Client Adapter

And in the last example I’ll demonstrate changing virtual SCSI adapter device names on a live system.
This is single disk system (hdisk0), with two vscsi adapters.
# lspv
hdisk0          00f6048868b4deee                    rootvg          active

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi1

# lsdev -Cc adapter
ent0   Available  Virtual I/O Ethernet Adapter (l-lan)
ent1   Available  Virtual I/O Ethernet Adapter (l-lan)
vsa0   Available  LPAR Virtual Serial Adapter
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Available  Virtual SCSI Client Adapter

We ensure the adapter is in a defined state before renaming it. This will fail otherwise.
# rmdev -Rl vscsi1
vscsi1 Defined

# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter
vscsi1 Defined    Virtual SCSI Client Adapter

Now we rename the adapter vscsi1 to vscsi3.
# rendev -l vscsi1 -n vscsi3

# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter
vscsi3 Defined    Virtual SCSI Client Adapter

That was easy enough. Now I need to bring the adapter and path online with cfgmgr. Afterwards, the lspath output displays an additional path to vscsi3.
# lspath
Enabled hdisk0 vscsi0
Defined hdisk0 vscsi1

# cfgmgr
Method error (/etc/methods/cfgscsidisk -l hdisk0 ):
        0514-082 The requested function could only be performed for some
                 of the specified paths.

# lspath
Enabled hdisk0 vscsi0
Defined hdisk0 vscsi1
Enabled hdisk0 vscsi3

Now I need to remove the old path to vscsi1. The path to vscsi3 is now Enabled. The adapter, vscsi3, is in an Available state. All is good.
# rmpath -l hdisk0 -p vscsi1 -d
path Deleted

# lspath
Enabled hdisk0 vscsi0
Enabled hdisk0 vscsi3

# lsdev -Cc adapter | grep vscsi
vscsi0 Available  Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter

The same steps need to be repeated for the vscsi0 adapter. This is renamed to vscsi2.
# rmdev -Rl vscsi0
vscsi0 Defined

# lsdev -Cc adapter | grep vscsi
vscsi0 Defined    Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter

# rendev -l vscsi0 -n vscsi2

# lsdev -Cc adapter | grep vscsi
vscsi2 Defined    Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter

# lspath
Defined hdisk0 vscsi0
Enabled hdisk0 vscsi3

# cfgmgr
Method error (/etc/methods/cfgscsidisk -l hdisk0 ):
        0514-082 The requested function could only be performed for some
                 of the specified paths.

# lspath
Defined hdisk0 vscsi0
Enabled hdisk0 vscsi2
Enabled hdisk0 vscsi3
# rmpath -l hdisk0 -p vscsi0 -d
path Deleted

# cfgmgr

# lspath
Enabled hdisk0 vscsi2
Enabled hdisk0 vscsi3

That’s it. Both adapters have been renamed while the system was in use. No downtime required.
# lsdev -Cc adapter | grep vscsi
vscsi2 Available  Virtual SCSI Client Adapter
vscsi3 Available  Virtual SCSI Client Adapter

# lspath
Enabled hdisk0 vscsi2
Enabled hdisk0 vscsi3
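The per-adapter live-rename loop carried out above (rmdev, then rendev, then cfgmgr, then rmpath) can be sketched as a reusable function. All AIX commands are mocked by a logging stub here so the ordering itself is testable off-AIX; the `run` helper and the device names are illustrative:

```shell
log=""
run() { log="${log}$* ; "; }   # mock: record each command instead of executing it

rename_vscsi() {
    old=$1; new=$2; disk=$3
    run rmdev -Rl "$old"                 # put the adapter in Defined state first
    run rendev -l "$old" -n "$new"       # rename it
    run cfgmgr                           # bring the renamed adapter and path online
    run rmpath -l "$disk" -p "$old" -d   # delete the stale path to the old name
}

rename_vscsi vscsi1 vscsi3 hdisk0
echo "$log"
```

On a real system you would drop the `run` stub and execute the commands directly, repeating the function once per adapter as shown in the transcript above.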

Reference:

Please refer to the AIX 7.1 command reference for more information on this new command:
http://publib.boulder.ibm.com/infocenter/aix/v7r1/topic/com.ibm.aix.cmds/doc/aixcmds4/rendev.htm
IBM AIX Version 7.1 Differences Guide:
http://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/sg247910.html?Open

www.unixmantra.com

Saturday, 3 August 2013

AIX LVM QUORUM mysteries revealed

Technote (FAQ)

Question

Why can't I varyon a Volume Group when one or more physical volumes are not available?

Cause

Varyonvg requires that 100% of a volume group's physical volumes be available and accessible in order to successfully vary on the Volume Group without using the force option.

Answer

A common misconception is that the QUORUM setting of an LVM Volume Group can affect one's ability to varyon a volume group, when, in fact, the Volume Group QUORUM setting (enabled or disabled) has no bearing on the varyon process. This misconception is further enhanced by the following varyonvg error message...

0516-052 varyonvg: Volume group cannot be varied on without a quorum. More physical volumes in the group must be active.

This message indicates a "quorum" of physical volumes must exist in order to varyon the Volume Group and is unrelated to the Volume Group's QUORUM setting.

The Volume Group QUORUM setting is a concept that applies only to currently varied-on Volume Groups, in order to force varyoff of the Volume Group should it lose more than half of its disks. With QUORUM disabled on the Volume Group, loss of one or more disks will not cause the Volume Group to vary off. If QUORUM is enabled on the Volume Group, LVM will force the Volume Group offline if less than 51% of its disks are available and accessible. For a two-disk Volume Group with QUORUM enabled, LVM will check the number of VGDAs on each disk and vary off the Volume Group should it lose QUORUM (that is, if it loses the disk with two active VGDAs).

The Volume Group's QUORUM setting has no meaning for a Volume Group which is currently varied off. Varyonvg does not look at how many VGDAs a disk has; it ONLY looks at the number of physical volumes which are available and accessible. Without the -f (force) flag, ALL physical volumes in a Volume Group must be available and accessible. If one or more physical volumes is unavailable, the Volume Group may be forced online with varyonvg -f.
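The "majority of VGDAs" rule described above is pure arithmetic, so it can be sketched directly (the two scenarios below are illustrative examples, not output from a real system):

```shell
has_quorum() {
    active=$1; total=$2
    # strictly more than half of the VGDA copies must be active
    [ $(( active * 2 )) -gt "$total" ]
}

# two-disk VG: lose the disk holding 2 of the 3 VGDAs -> 1 of 3 left -> no quorum
if has_quorum 1 3; then q1=yes; else q1=no; fi
# three-disk VG: lose one disk -> 2 of 3 VGDAs left -> quorum holds
if has_quorum 2 3; then q2=yes; else q2=no; fi
echo "$q1 $q2"
```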


Excerpts from the varyonvg man page...
"The varyonvg will fail to varyon the volume group if a majority of the physical volumes are not accessible (no Quorum). This condition is true even if the quorum checking is disabled. Disabling the quorum checking will only ensure that the volume group stays varied on even in the case of loss of quorum."

"If the volume group cannot be varied on due to a loss of the majority of physical volumes, a list of all physical volumes with their status is displayed. To varyon the volume group in this situation, you will need to use the force option."

"-f Allows a volume group to be made active that does not currently have a quorum of available disks. All disk that cannot be brought to an active state will be put in a removed state. At least one disk must be available for use in the volume group."

Friday, 17 May 2013

Moving a Filesystem from One Volume Group to Another

ATTENTION: Make sure a full backup exists of any data you intend to migrate before using these procedures.

In AIX, storage allocation is performed at the volume group level; storage cannot span volume groups. If space within a volume group becomes constrained, space that is available in other volume groups cannot be used to resolve the shortage. The usual solution is to add more physical volumes to the relevant volume group, but this may not be an option in all environments. If other volume groups contain the required free space, the alternative is to move the required logical volumes to the desired volume group and expand them as needed. The source logical volume can be moved to another volume group with the cplv command. The following steps achieve this.

ATTENTION: The logical volume should be inactive during these steps to prevent incomplete or inconsistent data. If the logical volume contains a mounted file system, then that file system should be unmounted first. If this logical volume is being used as a RAW storage device, then the application using this logical volume should close the device or be shut down.

1.Copy the source logical volume to the desired volume group with the cplv command. For example, where myvg is the new volume group and mylv is the name of the user's logical volume, enter:
$cplv -v myvg mylv
This will return the name of the new logical volume, such as lv00. If this logical volume was being used for RAW storage, skip to step 6. If this is a JFS or JFS2 file system, proceed to step 2. Note that RAW storage devices should NOT use the first 512 bytes of the RAW device; this area is reserved for the LVCB (logical volume control block). cplv will not copy the first 512 bytes of the RAW logical volume, but it will update fields in the new logical volume's LVCB.

2.All JFS and JFS2 file systems require a log device. This will be a logical volume of type jfslog (for JFS) or jfs2log (for JFS2). Run the lsvg -l command on your destination volume group. If a JFS or JFS2 log DOES NOT already exist on the new volume group, create one by using the mklv and logform commands as detailed below. If a JFS or JFS2 log DOES exist, proceed to step 3.
With a JFS2 filesystem you also have the option of using an inline log; with inline logs, the jfs2log resides on the filesystem itself. After the cplv command is run on a filesystem with a JFS2 inline log, run:
$logform /dev/lvname
You should receive a message about formatting the inline log. If you do not, then this filesystem does not have a JFS2 inline log and you should treat it as a regular JFS2 filesystem. After answering y to formatting the inline log, continue to step 3.
To make a new JFS log, where myvg is the name of the new volume group, enter:
$mklv -t jfslog myvg 1
To make a new JFS2 log, enter:
$mklv -t jfs2log myvg 1
Either command will return a new logical volume of type jfslog or jfs2log, such as loglv00. This new logical volume must be formatted with the logform command in order to function properly as a JFS or JFS2 log.
For example:
logform /dev/loglv00
Answer yes to destroy.
3.Change the filesystem to reference the new logical volume and a log device that exists in the new volume group, using the chfs command. For example, where myfilesystem is the name of the user's filesystem, enter:
$chfs -a dev=/dev/lv00 -a log=/dev/loglv00 /myfilesystem
With inline logs on JFS2 filesystems the command is:
$chfs -a dev=/dev/lv00 -a log=INLINE /myfilesystem

4.Run fsck to ensure filesystem integrity. Enter:
fsck -p /dev/lv00
NOTE: It is common to receive errors after running fsck -p /dev/lvname prior to mounting the filesystem. These errors are due to a known bug that development is currently aware of and which will be resolved in a future release of AIX. Once the filesystem is mounted, a future fsck with the filesystem unmounted should no longer produce an error.

5.Mount the file system: For example,
where myfilesystem is the name of the user's file system, enter:
$mount /myfilesystem
At this point, the migration is complete, and any applications or users can now access the data in this filesystem. To change the logical volume name, proceed to the following step. NOTE: If you received errors from the preceding step, do not continue; contact your AIX support center.

6.Remove the source logical volume with the rmlv command.
For example, where mylv is the name of the user's logical volume, enter:
$rmlv mylv
Rename and reset any needed attributes on the new logical volume with the chlv or chmod commands. In order to rename the logical volume, the filesystem or raw logical volume must be in a closed state. For example, where mylv is the new name you wish to give lv00, enter:
$chlv -n mylv lv00
Logical volumes specific to rootvg

The following logical volumes and file systems are specific to the rootvg volume group and cannot be moved to other volume groups:

Logical Volume    File System or Description
------------------------------------------------------
hd2               /usr
hd3               /tmp
hd4               /
hd5               boot logical volume
hd6               paging space
hd8               jfslog
hd9var            /var
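The overall sequence of steps 1-6 above can be condensed into one sketch. Every AIX command is mocked by a logging stub here, so only the ordering is demonstrated; the names (myvg, mylv, lv00) follow the examples in the text:

```shell
order=""
run() { order="${order}$1 "; }   # mock: record the command name only

run cplv      # 1. cplv -v myvg mylv             -> returns the new LV, e.g. lv00
run logform   # 2. format the (new or inline) jfslog/jfs2log
run chfs      # 3. chfs -a dev=/dev/lv00 -a log=... /myfilesystem
run fsck      # 4. fsck -p /dev/lv00
run mount     # 5. mount /myfilesystem
run rmlv      # 6. rmlv mylv; then chlv -n mylv lv00 to rename
echo "$order"
```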

Friday, 19 April 2013

Migrate data using the migratepv command

You can use the logical volume manager (LVM) migratepv command to migrate data that is located on physical volumes. Because you can use this command while the system is active, users are not disrupted.

About this task

The migratepv command moves allocated physical partitions and the data they contain from the source physical volume to one or more destination physical volumes. Its syntax is: migratepv [ -l LogicalVolume ] SourcePhysicalVolume DestinationPhysicalVolume ... The specified physical volumes must all be in the same volume group, and the source physical volume cannot be included in the list of destination physical volumes.

The migratepv command migrates data by performing these actions:
  • Creates a mirror of the logical volumes that you are moving
  • Synchronizes the logical volumes
  • Removes the original logical volume

Procedure

  1. Identify the source disk that contains the data and its volume group.
    # lsdev -Cc disk
    hdisk0    Available 04-08-00-8,0 16 Bit LVD SCSI Disk Drive  <== choose hdisk0 as source disk
    hdisk1    Available 04-08-01-8,0 16 Bit LVD SCSI Disk Drive
    hdisk2    Available 06-09-02     IBM FC 2107
  2. Calculate the amount of space that is currently in use on the disk. This is the total number of physical partitions (PPs) minus the number of free partitions on the disk. In the following example, disk hdisk0 is a member of the rootvg volume group and is using 279 PPs (542 - 263 = 279).
    # lsvg -p rootvg
    rootvg:
    PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
    hdisk0            active            542         263         78..00..00..76..109
  3. Identify destination disk or disks that have enough empty space to accommodate the copied data. If the destination disk or disks are not in same volume group as the source disk, add the destination disk to the source disk volume group using the extendvg command, for example, extendvg rootvg hdisk1. In the following example, disks hdisk0 and hdisk1 are in same volume group and destination disk hdisk1 has 542 free PPs, which is more than the 279 PPs that are required.
    # lsvg -p rootvg;date
    rootvg:
    PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
    hdisk0            active            542         263         78..00..00..76..109
    hdisk1            active            542         542         109..108..108..108..109
  4. Verify whether the volume group boot image is located on the physical volume that you are moving. Issue the lsvg -l rootvg command to identify the name of the logical volume that contains the volume group boot image. In this example, the partial output of the lsvg -l rootvg command shows that in the rootvg volume group, logical volume hd5 has the type boot.
    LV NAME    TYPE   LPs  PPs  PVs  LV STATE        MOUNT POINT 
    hd5        boot   1    1    1    closed/syncd    N/A 
  5. Issue the lslv -l command to determine whether the boot image is on the source disk. The following example displays the output generated by the lslv -l command. In this case, logical volume hd5 is located on disk hdisk0, which is the source disk.
    # lslv -l hd5 
       hd5:N/A 
       PV          COPIES         IN BAND       DISTRIBUTION 
       hdisk0      001:000:000    100%          001:000:000:000:000
  6. If the source disk contains the boot image, complete these sub-steps to transfer that boot image to the destination disk.
    1. Issue migratepv -l hd5 hdisk0 hdisk1 to move the physical partitions in logical volume hd5 that contain the boot image from source disk hdisk0 to destination disk hdisk1.
    2. Issue chpv -c hdisk0 as root user to delete the boot record from the source disk to avoid a potential boot from the old boot image.
    3. Issue the bosboot command to establish disk hdisk1 as the new boot disk.
    4. Issue the bootlist command to designate disk hdisk1, which now contains the boot image, as the boot disk in the boot list.
      # migratepv -l hd5 hdisk0 hdisk1
      0516-1011 migratepv: Logical volume hd5 is labeled as a boot logical volume.
      0516-1246 migratepv: If hd5 is the boot logical volume, please run 'chpv -c hdisk0'
              as root user to clear the boot record and avoid a potential boot
              off an old boot image that may reside on the disk from which this
              logical volume is moved/removed.
      migratepv: boot logical volume hd5 migrated. Please remember to run
              bosboot, specifying /dev/hdisk1 as the target physical boot device.
              Also, run bootlist command to modify bootlist to include /dev/hdisk1.
      
      <<< the following commands were run based on the warnings printed after migrating the boot logical volume
      
      # chpv -c hdisk0
      # echo $?
      0
      
      # bosboot -ad /dev/hdisk1
      
      bosboot: Boot image is 49180 512 byte blocks.
      # echo $?
      0
      
      
      # bootlist -m normal -o hdisk1
      hdisk1 blv=hd5 pathid=0
  7. Issue the migratepv command to migrate the data from one physical volume to another, then issue the lsvg -p command to verify the results. In this example, after the migration completes, physical volume hdisk0 has 0 PPs in use (all 542 of its PPs are free) because all data that was previously located on hdisk0 has been moved to hdisk1.
    # migratepv hdisk0 hdisk1
    # echo $?
    0
    
    # lsvg -p rootvg
    rootvg:
    PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
    hdisk0            active            542         542         109..108..108..108..109
    hdisk1            active            542         263         30..15..01..108..109
  8. After the operation completes, issue the reducevg command to remove the source physical volume from the volume group. In this example, remove physical volume hdisk0 from volume group rootvg.
    # reducevg rootvg hdisk0
    # echo $?
    0
    
    # lsvg -p rootvg
    rootvg:
    PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
    hdisk1            active            542         263         30..15..01..108..109
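Putting the steps together, the whole sequence can be sketched as a short script. This is a hedged dry run: the run helper only prints each command, because several of these AIX operations are destructive and the AIX tools are assumed, not available here. The disk, volume group, and logical volume names are the example's own; adjust them for your system, and swap the echo for actual execution only once you have verified the names.

```shell
# Dry-run sketch of the migration sequence above (steps 3-8).
# run() only PRINTS each command; replace the echo with "$@" to execute.
SRC=hdisk0       # source physical volume
DEST=hdisk1      # destination physical volume
VG=rootvg        # volume group containing both disks
BOOTLV=hd5       # boot logical volume identified in step 4

run() { echo "+ $*"; }

run extendvg "$VG" "$DEST"                 # step 3: only if DEST is not yet in VG
run lslv -l "$BOOTLV"                      # step 5: is the boot image on SRC?
run migratepv -l "$BOOTLV" "$SRC" "$DEST"  # step 6.1: move the boot image
run chpv -c "$SRC"                         # step 6.2: clear the old boot record
run bosboot -ad /dev/"$DEST"               # step 6.3: build boot image on DEST
run bootlist -m normal "$DEST"             # step 6.4: make DEST the boot disk
run migratepv "$SRC" "$DEST"               # step 7: move remaining partitions
run reducevg "$VG" "$SRC"                  # step 8: drop SRC from the VG
```

Keeping the boot-image sub-steps (6.1 to 6.4) in this exact order matters: clearing the boot record on the source before bosboot has written a new image to the destination would leave a window with no valid boot disk.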

Results

After processing is complete, the physical volume is copied to the new location and the LVM no longer accesses the original volume to locate the data that was stored there.
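As a final sanity check for a migration like this one, the space requirement quoted in step 3 can be derived directly from the lsvg -p output: the destination needs at least as many free PPs as the source has in use, that is, TOTAL PPs minus FREE PPs. A small sketch using the sample figures from step 3 (the awk field positions assume the column layout shown there):

```shell
# Compute PPs in use on the source disk from sample `lsvg -p rootvg` output.
# required = TOTAL PPs (field 3) - FREE PPs (field 4) of the source disk.
lsvg_out='hdisk0            active            542         263         78..00..00..76..109
hdisk1            active            542         542         109..108..108..108..109'

required=$(printf '%s\n' "$lsvg_out" | awk '$1 == "hdisk0" { print $3 - $4 }')
free_on_dest=$(printf '%s\n' "$lsvg_out" | awk '$1 == "hdisk1" { print $4 }')

echo "required=$required free_on_dest=$free_on_dest"   # required=279 free_on_dest=542
```

Here 542 - 263 = 279 PPs must fit on the destination, and hdisk1's 542 free PPs comfortably cover that, which is exactly why step 3 declares the example destination adequate.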