
Sunday, 11 January 2015

Not enough free space to shrink the file system issue in AIX


Recently I ran into an issue while shrinking a JFS2 filesystem on AIX 6.1, even though the filesystem had plenty of free space.
root@umaix /tmp>df -g /orafs1
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/oralv1   100.00    75.00   25%      555     1%  /orafs1

root@umaix /tmp>chfs -a size=-15G /orafs1
chfs: There is not enough free space to shrink the file system.
This issue occurs whenever you try to release a big chunk of space (in this case 15 GB) that is not contiguous in the filesystem because files are scattered everywhere.

Try the following methods one by one until the issue is fixed.

1. Try to defrag the FS:

#defragfs -s /orafs1     (report the current fragmentation state)
#defragfs /orafs1        (run without flags to actually defragment)

2. Reduce in smaller chunks:

If you still can't reduce it after this, try reducing the filesystem in smaller chunks. Instead of 15 GB at a time, try reducing 1 or 2 GB, then repeat the operation until you reach the target size (see the sketch below).
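A minimal ksh sketch of this approach (the 1 GB step size and the loop count of 15 are assumptions for this example; adjust them to your target size):

i=1
while [ $i -le 15 ]
do
    chfs -a size=-1G /orafs1 || break    # stop as soon as a shrink step fails
    i=$((i + 1))
done
df -g /orafs1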

3. Check the processes:

Sometimes processes keep big files open and use a lot of temporary space in the filesystem.
Check which processes/applications are running against the filesystem and stop them temporarily, if you can.
#fuser -cu[x] <filesystem>
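For example, to list the processes (and their owners) using /orafs1, including executables and shared libraries loaded from it:
#fuser -cux /orafs1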

4. Move the large files and try shrink

Try looking for large files using the find command and move them out temporarily, just to see if the filesystem can be shrunk without them:
#find /<filesystem> -xdev -size +2048 -ls|sort -r +10|pg
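For instance (the file name and the /backup staging filesystem below are just placeholders for this sketch), move one of the large files out, retry the shrink, then move it back:
#mv /orafs1/bigfile.dbf /backup/
#chfs -a size=-15G /orafs1
#mv /backup/bigfile.dbf /orafs1/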

Finally, the last resort: if none of the above methods work, go for filesystem recreation.

==> Be very careful here: take a backup of the filesystem and also check with the application team before removing the filesystem.

5. Recreate the filesystem:

- Take a data backup of the filesystem (very important, don't skip this). Either use your backup tools such as TSM / NetBackup, or move the data to a temporary directory.
- Remove the filesystem:
    #rmfs /orafs1
- Create the filesystem again:
    #mklv -y oralv1 -t jfs2 oravg 600           (here we need 75 GB and the PP size is 128 MB, so 600 PPs)
    #crfs -v jfs2 -d oralv1 -m /orafs1 -A yes   (create the /orafs1 filesystem)
- Restore the data to the filesystem
- Verify the filesystem size:

root@umaix /tmp>df -g /orafs1
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/oralv1       75.00     50.00   33%      555     1%  /orafs1

Sunday, 8 June 2014

How to Remove a Virtual SCSI Disk

This document describes the procedure to remove a virtual disk in a volume group on a Virtual I/O Client, to map the virtual scsi disk to its corresponding backing device, and to remove the backing device from the Virtual I/O Server.  Please, read the entire document before proceeding.

This document applies to AIX version 5.3 and above.

In a Virtual I/O environment, the physical devices are allocated to the VIO server.  When there is a hardware failure (disk or adapter may go bad) on the VIO server, unless the VIO server has some type of redundancy, that will have an impact on the VIO client whose virtual disks are being served by the failing device.  The impact may be loss of connectivity to the virtual scsi disks, unless there is some type of redundancy (MPIO or LVM mirroring) on the client partition. 

This document does NOT apply to any of the following environments:
1. If the virtual disk is in a shared volume group (e.g., HACMP)
2. If the virtual disk is part of rootvg volume group.

 Removing a Physical Volume from a Volume Group

 The following steps are needed to remove a virtual disk from the VIO client, and they are later discussed in more detail:

1. Deallocate all the physical partitions associated with the physical volume in the volume group.
2. Remove the physical volume from the volume group
3. Map the virtual scsi disk on the VIO client partition to the backing device on the VIO server.
4. Remove the virtual scsi disk definition from the device configuration database.
5. Remove the backing device.

At this point, a new virtual scsi disk can be added to the VIO client in place of the virtual disk that was removed, in the case where this procedure was done as a result of a hardware failure on the VIO server partition.

 1. Deallocating the physical partitions

In the following procedure, we will use hdisk4 as the virtual scsi disk to be removed from the VIO client.

First, we need to determine the logical volumes defined on the physical volume we want to remove. This can be done by running:

# lspv -l hdisk#            
where hdisk# is the virtual scsi disk to be removed.

Example:

# lspv -l hdisk4
hdisk4:
LV NAME          LPs      PPs      DISTRIBUTION          MOUNT POINT
fslv00           2        2        00..02..00..00..00    /test
loglv00          1        1        00..01..00..00..00    N/A
rawlv            30       30       00..30..00..00..00    N/A

If the hdisk name no longer exists, and the disk is identifiable only by its 16-digit PVID (you might see this from the output of lsvg -p <VGname>), substitute the PVID for the disk name. For example:

# lspv -l 00c2b06ef8a9f98a

You may receive the following error:
     0516-320 : Physical volume 00c2b06ef8a9f98a is not assigned to
     a volume group.
If so, run the following command:
# putlvodm -p `getlvodm -v <VGname>` <PVID>
VGname refers to your volume group, PVID refers to the 16-digit physical volume identifier, and the characters around the getlvodm command are grave marks, the backward single quote mark. The lspv -l <PVID> command should now run successfully.  To determine the VGname associated with that physical volume use lspv hdisk#.
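For example, using the PVID from above (the volume group name datavg is only an assumption for illustration; substitute your own):
# putlvodm -p `getlvodm -v datavg` 00c2b06ef8a9f98a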
If another disk in the volume group has space to contain the partitions on this disk, and the virtual scsi disk to be replaced has not completely failed, the migratepv command may be used to move the used PPs on this disk. See the man page for the migratepv command on the steps to do this.
If the partitions cannot be migrated, they must be removed. The output of the lspv -l <hdisk#>, or lspv -l <PVID>, command indicates what logical volumes will be affected. Run the following command on each LV:
# lslv <LVname>
The COPIES field shows if the LV is mirrored. If so, remove the failed copy with:

# rmlvcopy <LVname> 1 <hdisk#>
hdisk# refers to all the disks in the copy that contain the failed disk. A list of drives can be specified with a space between each. Use the lslv -m <LVname> command to see what other disks may need to be listed in the rmlvcopy command. If the disk PVID was previously used with the lspv command, specify that PVID in the list of disks given to the rmlvcopy command.  The unmirrorvg command may be used in lieu of the rmlvcopy command. See the man pages for rmlvcopy and unmirrorvg, for additional information.
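As an illustration, if fslv00 from the earlier lspv -l output were mirrored with one copy on the failing disk (an assumption made just for this example), the sequence would look like:
# lslv -m fslv00
# rmlvcopy fslv00 1 hdisk4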
If the logical volume is not mirrored, the entire logical volume must be removed, even if just one physical partition resides on the drive to be replaced and cannot be migrated to another disk. If the unmirrored logical volume is a JFS or JFS2 file system, unmount the file system and remove it. Enter:
# umount /<FSname>
# rmfs /<FSname>

If the unmirrored logical volume is a paging space, see if it is active. Enter:
# lsps -a

If it is active, set it to be inactive on the next reboot.  Enter:
# chps -a n <LVname>

Then deactivate and remove it by entering:
# swapoff /dev/<LVname>
# rmps <LVname>

Remove any other unmirrored logical volume with the following command:
# rmlv <LVname> 
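In our running example, rawlv from the earlier lspv -l output has no mount point; assuming it is an unmirrored raw logical volume, it would be removed with:
# rmlv rawlv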

2. Remove the physical volume from the volume group.

In the case where the virtual scsi disk to be replaced is the only physical volume in the volume group, remove the volume group:

# exportvg <VGname>

This will deallocate the physical partitions and free up the virtual disk. Then remove the disk definition, as described in steps 3 and 4.

In the case where there is more than one physical volume, use either the PVID or the hdisk name, depending on which was used when running lspv -l in the preceding discussion, and run one of the following:

# reducevg <VGname> <hdisk#>
# reducevg <VGname> <PVID>

If you used the PVID value and if the reducevg command complains that the PVID is not in the device configuration database, run the following command to see if the disk was indeed successfully removed:

# lsvg -p <VGname>

If the PVID or disk is not listed at this point, then ignore the errors from the reducevg command.

3. How to map the virtual scsi disk (on the client partition) to the physical disk (on the server partition)

In the following example, we are going to determine the mapping of the virtual scsi disk hdisk4.

On the VIO client:

The following command shows the location of hdisk4:

# lscfg -vl hdisk4
  hdisk4           U9117.570.102B06E-V1-C7-T1-L810000000000  Virtual SCSI Disk Drive

where V1 is the LPAR ID (in this case 1), C7 is the slot# (in this case 7), and L81 is the LUN ID. 
Take note of these values.

Next, determine the client SCSI adapter name, by ‘grep’ing for the location of hdisk4's parent adapter, in this case, V1-C7-T1:

# lscfg -v|grep V1-C7-T1
  vscsi4           U9117.570.102B06E-V1-C7-T1                Virtual SCSI Client Adapter
        Device Specific.(YL)........U9117.570.102B06E-V1-C7-T1
  hdisk4           U9117.570.102B06E-V1-C7-T1-L810000000000  Virtual SCSI Disk Drive

where vscsi4 is the client SCSI adapter.

On the HMC:

Run the following command to obtain the LPAR name associated with the LPAR ID

# lshwres -r virtualio --rsubtype scsi -m <Managed System Name> --level lpar

To get the managed system name, run
# lssyscfg -r sys -F name

Then, look for the "lpar_id" and "slot_num" noted earlier.  In our case, the VIO client lpar id is 1 and the slot # is 7.

In the following example, the managed system name is Ops-Kern-570.  The VIO client partition name is kern1.
The VIO Server partition name is reg33_test_vios.

# lshwres -r virtualio --rsubtype scsi -m Ops-Kern-570 --level lpar
...
lpar_name=kern1,lpar_id=1,slot_num=7,state=1,is_required=0,adapter_type=client,
remote_lpar_id=11,remote_lpar_name=reg33_test_vios,remote_slot_num=23,backing_devices=none
...
Take note of the remote_lpar_id (11) and the remote_slot_num (23). Then, in the same output, look for the line that corresponds to lpar_id 11 and slot_num 23:
...
lpar_name=reg33_test_vios,lpar_id=11,slot_num=23,state=1,is_required=0,adapter_type=server,
remote_lpar_id=any,remote_lpar_name=,remote_slot_num=any,backing_devices=none
...
So in this case, VIO server reg33_test_vios is serving virtual scsi disk, hdisk4, on the VIO client, kern1.
            
On the VIO Server:

Go to the VIO Server associated with the LPAR ID obtained in the previous step, in our case reg33_test_vios.
As padmin, run the following command to display the mapping, which should match the mapping obtained from the HMC above.

$ lsmap -all|grep <VIO server lpar ID>-<VIOS slot#>

For example,
$ lsmap -all|grep V11-C23
where V11 is the VIO server lpar_id and C23 is the slot #

The command will return something similar to:

vhost21         U9117.570.102B06E-V11-C23                    0x00000001

In this case, vhost21 is the server SCSI adapter mapped to our VIO client lpar id 1 (0x00000001).

Next, list the mapping for the vhost# obtained previously.

$ lsmap -vadapter vhost21
SVSA            Physloc                                      Client Partition ID
--------------- -------------------------------------------- -------------------
vhost21         U9117.570.102B06E-V11-C23                    0x00000001

VTD                  virdisk01                      
LUN                  0x8100000000000000
Backing device clientlv01                     
Physloc               

Take note of the VTD and Backing device name.  In this case, the backing device mapped to virtual scsi disk, hdisk4, is logical volume, clientlv01, and it is associated with Virtual Target Device, virdisk01.

4. Remove the virtual scsi disk definition from the device configuration database on the VIO client

 To remove the vscsi definition, run

# rmdev -dl hdisk#

Ensure you know the backing device associated with the virtual scsi disk being removed prior to issuing the rmdev command.  That information will be needed in order to do clean up on the server partition.  Refer to the section "How to map the virtual scsi disk (on the client partition) to the physical disk (on the server partitions)".
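In our running example, the command and its standard confirmation would be:
# rmdev -dl hdisk4
hdisk4 deleted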

 5. Remove the backing device on the VIO server

The peripheral device types or backing devices currently supported are:
- logical volume
- physical volume
- optical device, starting at v1.2.0.0-FP7 (but not currently supported on System i)

Prior to removing the backing device, the virtual target device must be removed first. To do so, run the following as padmin:

$ rmdev -dev <VTD name>
$ rmlv <LVname>

or you can remove both the VTD and logical volume in one command by running:

$ rmvdev -vtd <VTD name> -rmlv

In the case where the backing device is a physical volume, removing the virtual target device completes the procedure.

If you need to determine the physical device and volume group that the logical volume belongs to, you can issue the following commands prior to running rmlv or rmvdev.
$ lslv -pv <LVname>    List the physical volume that the logical volume specified resides on.
$ lslv <LVname>          Shows the characteristics of the logical volume, including the volume group name, # of mirrored copies, etc.

In our example, the backing device is a logical volume, clientlv01, and it resides on the physical device, hdisk3:

$ lslv -pv clientlv01
clientlv01:N/A
PV                COPIES        IN BAND       DISTRIBUTION 
hdisk3            080:000:000   100%          000:080:000:000:000

$ rmdev -dev virdisk01
virdisk01 deleted

$ rmlv clientlv01
Warning, all data contained on logical volume clientlv01 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume clientlv01 is removed.

Related Documentation

Virtual I/O Server Website
http://www14.software.ibm.com/webapp/set2/sas/f/vios/home.html

Relevant Links in Documentation Tab:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
- IBM System p Advanced POWER Virtualization Best Practices Redbook
- IBM System Hardware Information Center
- VIOS Commands Reference

Sunday, 23 February 2014

How to Change Ethernet Media Speed for AIX

First you need to find out the device name of your Ethernet card. It should be ent0 if the machine has only 1 Ethernet card. Otherwise, it may be a higher number.

You can determine the legal values for the media speed of the card by running the following command (where the value of ent0 may be different if your machine has multiple Ethernet cards).
lsattr -R -l ent0 -a media_speed
If this command results in the following error—
lsattr: 0514-528 The "media_speed" attribute does not exist in the predefined 
 device configuration database.
—then the Ethernet card is a 10 Mbps card that will only do 10 Mbps / half duplex.

If you do have a 100Mbps card, the lsattr command will return something like this:
 10_Half_Duplex
 10_Full_Duplex
 100_Half_Duplex
 100_Full_Duplex
 Auto_Negotiation
 
These are the media speeds the card will understand. To see the card's current media speed setting you can run
lsattr -EH -l ent0 -a media_speed
To change the media speed, run:
chdev -P -l ent0 -a media_speed=100_Half_Duplex
The value for media_speed can be any of the values listed by the lsattr -R command above. The change does not take effect until you reboot the machine.

If you select a value other than Auto_Negotiation the switch port the machine is connected to must have the same configuration. If the switch and the machine do not match you may get no network connectivity or poor performance.
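After the reboot you can check what the adapter is actually running at; for example (the exact field names in the entstat output vary by adapter type):
# entstat -d ent0 | grep -i "media speed"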

Tuesday, 7 January 2014

Rename & Moving AIX Logical Volume

I would like to discuss two scenarios here.



I. Renaming LV in the Same Volume Group:

Steps:

Let us consider that we need to rename testlv_old to testlv_new.
1: Unmount the filesystem associated with testlv_old (let us say /testfs)
 #umount /testfs
2: Rename the logical volume
# chlv -n testlv_new testlv_old
3: Verify that the dev parameter of the filesystem stanza in /etc/filesystems matches the new name of the logical volume (see the example stanza below)
4: Note: if you rename a JFS or JFS2 log, you will be prompted to run chfs on all filesystems that use the renamed log device
5: Remount the filesystem
# mount /testfs
6: Verification
# df -k /testfs
# ls -ltr /testfs
# cd /testfs and create a file there using the touch command
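For step 3, the dev line of the /testfs stanza in /etc/filesystems should end up pointing at the new logical volume name, roughly like this (a sketch based on the stanza format shown later in this post; your stanza may carry additional attributes):
/testfs:
        dev             = /dev/testlv_new
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false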

II. Moving an LV to a Different Volume Group with a Different Name

assumptions:

"testlv_old" logical volume belongs to "testvg1" and mounted on "/testfs"
You need to recreate the same mount point "/testfs" with a different logical volume name "testlv_new" on a different volume group "testvg2".

Steps:

Pre-Checks:
[root@umlapr root]# lsvg -l testvg1
testvg1:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
testlv_old          jfs2       702     702    1    open/syncd    /testfs

[root@umlapr root]# ls -ld /testfs
drwxr-xr-x   8 tadmin  tgroup   4096 Sep 19 09:57 /testfs

[root@umlapr root]#df -g
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/testlv_old   351.00    127.58   64%      143     1% /testfs
[root@umlapr root]
1: Take a backup of /testfs, either locally or with whichever backup tool you use (e.g. TSM / NetBackup)
  If you are taking the backup locally, make sure you use the '-p' option with the 'cp' command so that permissions and timestamps are preserved
  #cp -pr /testfs/*  /backup/testfs_back/
2: Unmount the filesystem associated with testlv_old, here "/testfs"
  #umount /testfs
3: Change the mount point name of the old stanza in /etc/filesystems, from:
/testfs:
        dev             = /dev/testlv_old
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false
      
to
/testfs_old:
        dev             = /dev/testlv_old
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false
4: Create the new LV "testlv_new"
#mklv -y <lvname> -t jfs2 <vg> <number of PPs>
#mklv -y testlv_new -t jfs2 testvg2 702   (702 may change depending upon the PP size of the new vg "testvg2"; here we assumed testvg1 and testvg2 have the same PP size)
5: Create Filesystem with name "/testfs"
#crfs -v jfs2 -d <lv> -m <mountpoint> -A yes
#crfs -v jfs2 -d testlv_new -m /testfs -A yes
The above command creates an entry in /etc/filesystems like the one below:
/testfs:
        dev             = /dev/testlv_new
        vfs             = jfs2
        log             = /dev/loglv00
        mount           = true
        options         = rw
        account         = false
6: Mount the filesystem /testfs
#mount /testfs
7: Change the ownership and permissions on the mount point
  #chown -R tadmin:tgroup /testfs
  #chmod 775 /testfs
8: Restore the backup data taken earlier
If you took it locally:
#cp -pr /backup/testfs_back/* /testfs
9: Validate the LV and filesystem
[root@umlapr root]# lsvg -l testvg2
testvg2:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
loglv00             jfs2log    1       1       1    open/syncd    N/A
testlv_new          jfs2       702     702    1    open/syncd    /testfs

[root@umlapr root]# df -g /testfs
Filesystem    GB blocks      Free %Used    Iused %Iused Mounted on
/dev/testlv_new    351.00    127.58   64%      143     1% /testfs
[root@umlapr root]

That's it. I hope this article helps you.

Tuesday, 14 May 2013

HOWTO: Disk Mapping to client from dual VIOS using VSCSI and MPIO

Pre-Checks:

1) Update the adapter settings:
   # chdev -l fscsi0 -a fc_err_recov=fast_fail
   # chdev -l fscsi0 -a dyntrk=yes
2) Make sure that the LUN exported from the SAN is visible on both VIO Servers; this is the disk that will be mapped to the client LPAR through vscsi.

3) Set reserve_lock to no for the SAN disk on both VIO Servers.

Use the commands below to set it to no (the second one, with -P, records the change in the ODM so it persists across reboots):
   #chdev -l hdisk1 -a reserve_lock=no 
   #chdev -l hdisk1 -a reserve_lock=no  -P
4) Check whether the disk is already in use by another client by mistake:
     #lqueryvg -Atp hdisk1

     root@vio1:/ #lqueryvg -Atp hdisk1
     0516-320 lqueryvg: Physical volume hdisk1 is not
      assigned to a volume group.
     0516-066 lqueryvg: Physical volume is not a volume group member.
     Check the physical volume name specified.
     root@vio1:/ #

Implementation:

5) Map the disk to the client from both VIO Servers:

vioa:

#mkvdev -vdev hdisk1 -vadapter vhost1 -dev myvio1clivtd

viob:

#mkvdev -vdev hdisk2 -vadapter vhost1 -dev myvio2clivtd
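On each VIO Server you can quickly confirm that the new virtual target device shows up under the vhost adapter used above (vhost1 in this example):

#lsmap -vadapter vhost1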

Now in client:


6) Run config manager
#cfgmgr

root@cli1:/ #lspv
hdisk0          0004a256f06e0c6e      rootvg        active
hdisk1          0004a25613b2835f      datavg
hdisk2          0004a25613b287f1      datavg
hdisk3          0004a2561dc9d8bd      datavg
hdisk4          0004a2561dd57f15      datavg
hdisk5          0004a2561da57f16      None
root@cli1:/ #
7) Check the paths
root@orange-lpar:/ #lspath -l hdisk5
Enabled hdisk5 vscsi1
Enabled hdisk5 vscsi2
8) Check the path priorities
 # lspath -AE -l hdisk5 -p vscsi1
   priority 1 Priority True
 # lspath -AE -l hdisk5 -p vscsi2
   priority 1 Priority True
9) Set the priority of the path from viob (vscsi2) to 2, so that the path through vioa is preferred:
 # chpath -l hdisk5 -a priority=1 -p vscsi1
 # chpath -l hdisk5 -a priority=2 -p vscsi2
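You can re-run the earlier check to confirm the change took effect (vscsi2 should now report priority 2):
 # lspath -AE -l hdisk5 -p vscsi1
 # lspath -AE -l hdisk5 -p vscsi2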
It's done now...

Any questions? [email protected]