Tuesday, 24 June 2014

AIX RC Scripts

Some applications should be stopped and started gracefully, without manual intervention, during reboots. To serve this purpose, we use rc scripts on all Unix flavors, including AIX.

So, how do rc scripts work:
  1. Write a single script, put it into /etc/rc.d/init.d, make sure the script accepts a single parameter of start or stop and does the right thing.
  2. In /etc/rc.d/rc2.d create a link (ln -s) to the script in init.d called Sxxname where xx is a number that dictates where in comparison to other scripts in the directory your script will execute (lower number first).
  3. In /etc/rc.d/rc2.d create a link to the script in init.d called Kxxname where xx is a number which dictates when the script is run to stop your app in comparison to other scripts in the directory (lower number first).
Note: It is just a convention to place the scripts in /etc/rc.d/init.d and create the soft links in /etc/rc.d/rc2.d; keeping the scripts in /etc/rc.d/init.d is not mandatory.
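The boot-time behavior these links produce can be sketched as a pair of simplified shell loops (the real logic lives in the AIX run-level processing; the directory argument here is illustrative):

```shell
#!/bin/sh
# Simplified sketch of what the run-level processing does with the links:
# K* links are invoked with "stop", S* links with "start", each in
# lexical order, so the numeric prefix controls sequencing.

run_kill_scripts() {
        for f in "$1"/K*; do
                [ -x "$f" ] && "$f" stop
        done
}

run_start_scripts() {
        for f in "$1"/S*; do
                [ -x "$f" ] && "$f" start
        done
}
```

For example, run_start_scripts /etc/rc.d/rc2.d would execute S20tivoli before S70myapp, because the shell expands the glob in sorted order.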

Example RC Script:


#!/bin/sh
# Note: the start command below is a placeholder; substitute the actual
# command that launches your application.
ulimit -c 0

case "$1" in
start )
        # Start the engine only if it is not already running
        ps -ef | grep -v grep | grep myengine > /dev/null
        ret=$?
        if [ $ret -gt 0 ]; then
                /path/to/myengine &     # placeholder start command
        fi
        ;;
stop )
        # Collect the PIDs of all engine processes, then kill them
        for i in myengine-app1 myengine-app2 myengine-app3 myengine-app4; do
                ps -ef | grep $i | grep -v grep | awk '{print $2}' >> /tmp/myengine.$$
        done
        while read line; do
                kill $line
        done < /tmp/myengine.$$
        rm /tmp/myengine.$$
        ;;
* )
        echo "Usage: $0 (start | stop)"
        exit 1
        ;;
esac

Example Creating Symbolic Links

This is an example of creating the symbolic links for automatic startup of Tivoli. Tivoli should start first (meaning a low Sxx) and stop last (meaning a high Kxx):
umadmin@umserve1:/etc/rc.d/rc2.d>sudo ln -s /etc/rc.d/init.d/rc.tivoli S20tivoli
umadmin@umserve1:/etc/rc.d/rc2.d>sudo ln -s /etc/rc.d/init.d/rc.tivoli K70tivoli

Thursday, 19 June 2014

How to Convert OpenSSH to SSH2 and Vice Versa

The program SSH (Secure Shell) provides an encrypted channel for logging into another computer over a network, executing commands on a remote computer, and moving files from one computer to another. SSH provides strong host-to-host and user authentication as well as secure encrypted communications over the Internet.

SSH2 is a more secure, efficient, and portable version of SSH.

Connecting two servers running different types of SSH can be a daunting task if you do not know how to convert the keys. In this article, we are going to learn how to convert keys between SSH (OpenSSH) and SSH2.

How to Generate an OpenSSH (SSH) Key:

umadm@umixserv1 [/home/umadm/.ssh]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/umadm/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/umadm/.ssh/id_rsa.
Your public key has been saved in /home/umadm/.ssh/id_rsa.pub.
The key fingerprint is:
5b:ac:ea:c3:25:cf:2d:31:a2:aa:83:76:4b:a2:c9:eb umadm@umixserv1
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|         .       |
|        S o      |
|. o   . .+       |
|+o o + oo        |
|Bo.   =.         |
|#Eo..oo.         |
+-----------------+
umadm@umixserv1 [/home/umadm/.ssh]$
Here we get two keys under the $HOME/.ssh directory: a private key (id_rsa) and a public key (id_rsa.pub).
You can generate a dsa key by using the command below.
#ssh-keygen -t dsa

Convert SSH2 to  OpenSSH(SSH):

The command below can be used to convert an SSH2 private key into the OpenSSH format:
ssh-keygen -i -f path/to/private.key > path/to/new/opensshprivate.key
The command below can be used to convert an SSH2 public key into the OpenSSH format:
ssh-keygen -i -f path/to/publicsshkey.pub > path/to/publickey.pub
Here, the -i flag tells ssh-keygen to read an SSH2-format key and convert it to the OpenSSH format.

Convert OpenSSH(SSH) to SSH2:

The reverse process converts an OpenSSH key into the SSH2 format, in the event that a client application requires that format. This can be done using the following commands:

OpenSSH to SSH2 Private key conversion:
ssh-keygen -e -f path/to/opensshprivate.key > path/to/ssh2privatekey/ssh2privatekey
OpenSSH to SSH2 Public key conversion:
ssh-keygen -e -f path/to/publickey.pub > path/to/ssh2privatekey/ssh2publickey.pub
Here, the -e flag tells ssh-keygen to read an OpenSSH key file and convert it to the SSH2 format.
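As a concrete round trip (the key file names here are illustrative, not from the article), you can generate an OpenSSH key pair, export its public key to the SSH2 format with -e, and import it back with -i:

```shell
#!/bin/sh
# Round-trip sketch: OpenSSH public key -> SSH2 (RFC 4716) -> OpenSSH.
# Key file names are examples only.
cd "$(mktemp -d)" || exit 1

# Generate a throwaway OpenSSH RSA key pair with no passphrase
ssh-keygen -q -t rsa -N "" -f demo_key

# OpenSSH -> SSH2: the exported file starts with
# "---- BEGIN SSH2 PUBLIC KEY ----"
ssh-keygen -e -f demo_key.pub > demo_key_ssh2.pub

# SSH2 -> OpenSSH: back to the single-line "ssh-rsa AAAA..." form
ssh-keygen -i -f demo_key_ssh2.pub > demo_key_openssh.pub
```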

Note: If you need passwordless authentication between two hosts running different SSH versions, convert the public key to match the destination server's SSH version and append it to ~/.ssh/authorized_keys or ~/.ssh2/authorized_keys on the destination server.

Sunday, 8 June 2014

How to Remove a Virtual SCSI Disk

This document describes the procedure to remove a virtual disk in a volume group on a Virtual I/O Client, to map the virtual scsi disk to its corresponding backing device, and to remove the backing device from the Virtual I/O Server. Please read the entire document before proceeding.

This document applies to AIX version 5.3 and above.

In a Virtual I/O environment, the physical devices are allocated to the VIO server.  When there is a hardware failure (disk or adapter may go bad) on the VIO server, unless the VIO server has some type of redundancy, that will have an impact on the VIO client whose virtual disks are being served by the failing device.  The impact may be loss of connectivity to the virtual scsi disks, unless there is some type of redundancy (MPIO or LVM mirroring) on the client partition. 

This document does NOT apply to any of the following environments:
1. If the virtual disk is in a shared volume group (e.g. HACMP).
2. If the virtual disk is part of the rootvg volume group.

 Removing a Physical Volume from a Volume Group

 The following steps are needed to remove a virtual disk from the VIO client, and they are later discussed in more detail:

1. Deallocate all the physical partitions associated with the physical volume in the volume group.
2. Remove the physical volume from the volume group
3. Map the virtual scsi disk on the VIO client partition to the backing device on the VIO server.
4. Remove the virtual scsi disk definition from the device configuration database.
5. Remove the backing device.

At this point, if this procedure was carried out because of a hardware failure on the VIO server partition, a new virtual scsi disk can be added to the VIO client in place of the one that was removed.

 1. Deallocating the physical partitions

In the following procedure, we will use hdisk4 as the example virtual scsi disk to be removed from the VIO client.

First, we need to determine the logical volumes defined on the physical volume we want to remove. This can be done by running:

# lspv -l hdisk#            
where hdisk# is the virtual scsi disk to be removed.


# lspv -l hdisk4
LV NAME    LPs   PPs   DISTRIBUTION          MOUNT POINT
fslv00     2     2     00..02..00..00..00    /test
loglv00    1     1     00..01..00..00..00    N/A
rawlv      30    30    00..30..00..00..00    N/A

If the hdisk name no longer exists, and the disk is identifiable only by its 16-digit PVID (you might see this from the output of lsvg -p <VGname>), substitute the PVID for the disk name. For example:

# lspv -l 00c2b06ef8a9f98a

You may receive the following error:
     0516-320 : Physical volume 00c2b06ef8a9f98a is not assigned to
     a volume group.
If so, run the following command:
# putlvodm -p `getlvodm -v <VGname>` <PVID>
VGname refers to your volume group, PVID refers to the 16-digit physical volume identifier, and the characters around the getlvodm command are backquotes (grave accents). The lspv -l <PVID> command should now run successfully. To determine the VGname associated with that physical volume, use lspv hdisk#.
If another disk in the volume group has space to contain the partitions on this disk, and the virtual scsi disk to be replaced has not completely failed, the migratepv command may be used to move the used PPs on this disk. See the man page for the migratepv command on the steps to do this.
If the partitions cannot be migrated, they must be removed. The output of the lspv -l <hdisk#>, or lspv -l <PVID>, command indicates what logical volumes will be affected. Run the following command on each LV:
# lslv <LVname>
The COPIES field shows if the LV is mirrored. If so, remove the failed copy with:

# rmlvcopy <LVname> 1 <hdisk#>
hdisk# refers to all the disks in the copy that contain the failed disk. A list of drives can be specified with a space between each. Use the lslv -m <LVname> command to see what other disks may need to be listed in the rmlvcopy command. If the disk PVID was previously used with the lspv command, specify that PVID in the list of disks given to the rmlvcopy command.  The unmirrorvg command may be used in lieu of the rmlvcopy command. See the man pages for rmlvcopy and unmirrorvg, for additional information.
If the logical volume is not mirrored, the entire logical volume must be removed, even if just one physical partition resides on the drive to be replaced and cannot be migrated to another disk. If the unmirrored logical volume is a JFS or JFS2 file system, unmount the file system and remove it. Enter:
# umount /<FSname>
# rmfs /<FSname>

If the unmirrored logical volume is a paging space, see if it is active. Enter:
# lsps -a

If it is active, set it to be inactive on the next reboot.  Enter:
# chps -a n <LVname>

Then deactivate it and remove it by entering:
# swapoff /dev/<LVname>
# rmps <LVname>

Remove any other unmirrored logical volume with the following command:
# rmlv <LVname> 

2. Remove the physical volume from the volume group.

In the case where the virtual scsi disk to be replaced is the only physical volume in the volume group, remove the volume group via:

# exportvg <VGname>

This will deallocate the physical partitions and will free up the virtual disk.  Then, remove the disk definition, as noted on step 3.

In the case where there is more than one physical volume in the volume group, use either the PVID or the hdisk name, depending on which was used when running lspv -l earlier, and run one of the following:

# reducevg <VGname> <hdisk#>
# reducevg <VGname> <PVID>

If you used the PVID value and if the reducevg command complains that the PVID is not in the device configuration database, run the following command to see if the disk was indeed successfully removed:

# lsvg -p <VGname>

If the PVID or disk is not listed at this point, then ignore the errors from the reducevg command.

3. How to map the virtual scsi disk (on the client partition) to the physical disk (on the server partition)

In the following example, we are going to determine the mapping of the virtual scsi disk hdisk4.

On the VIO client:

The following command shows the location of hdisk4:

# lscfg -vl hdisk4
  hdisk4           U9117.570.102B06E-V1-C7-T1-L810000000000  Virtual SCSI Disk Drive

where V1 is the LPAR ID (in this case 1), C7 is the slot# (in this case 7), and L81 is the LUN ID. 
Take note of these values.
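If you have many disks to check, the LPAR ID and slot number can be pulled out of the location code with a couple of sed expressions (a small helper sketch; the location code below is the example value from above):

```shell
#!/bin/sh
# Extract the LPAR ID (Vn) and slot number (Cn) from a virtual SCSI
# location code such as U9117.570.102B06E-V1-C7-T1.
loc="U9117.570.102B06E-V1-C7-T1"
lpar_id=$(echo "$loc" | sed 's/.*-V\([0-9]*\)-C.*/\1/')
slot=$(echo "$loc" | sed 's/.*-C\([0-9]*\)-T.*/\1/')
echo "lpar_id=$lpar_id slot=$slot"
```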

Next, determine the client SCSI adapter name by grep'ing for the location of hdisk4's parent adapter, in this case V1-C7-T1:

# lscfg -v|grep V1-C7-T1
  vscsi4           U9117.570.102B06E-V1-C7-T1                Virtual SCSI Client Adapter
        Device Specific.(YL)........U9117.570.102B06E-V1-C7-T1
  hdisk4           U9117.570.102B06E-V1-C7-T1-L810000000000  Virtual SCSI Disk Drive

where vscsi4 is the client SCSI adapter.

On the HMC:

Run the following command to obtain the LPAR name associated with the LPAR ID:

# lshwres -r virtualio --rsubtype scsi -m <Managed System Name> --level lpar

To get the managed system name, run:
# lssyscfg -r sys -F name

Then, look for the "lpar_id" and "slot_num" noted earlier.  In our case, the VIO client lpar id is 1 and the slot # is 7.

In the following example, the managed system name is Ops-Kern-570.  The VIO client partition name is kern1.
The VIO Server partition name is reg33_test_vios.

# lshwres -r virtualio --rsubtype scsi -m Ops-Kern-570 --level lpar
Take note of the remote_lpar_id (11) and the remote_slot_num (23). Then, in the same output, look for the line that corresponds to lpar_id 11, slot_num 23.
So in this case, VIO server reg33_test_vios is serving virtual scsi disk, hdisk4, on the VIO client, kern1.
On the VIO Server:

Go to the VIO Server associated with the LPAR ID obtained in the previous step, in our case reg33_test_vios.
As padmin, run the following command to display the mapping; it should match the mapping obtained from the HMC above.

$ lsmap -all|grep <VIO server lpar ID>-<VIOS slot#>

For example,
$ lsmap -all|grep V11-C23
where V11 is the VIO server lpar_id and C23 is the slot #

The command will return something similar to:

vhost21         U9117.570.102B06E-V11-C23                    0x00000001

In this case, vhost21 is the server SCSI adapter mapped to our VIO client lpar id 1 (0x00000001).

Next, list the mapping for the vhost# obtained previously.

$ lsmap -vadapter vhost21
SVSA            Physloc                       Client Partition ID
--------------- ----------------------------- -------------------
vhost21         U9117.570.102B06E-V11-C23     0x00000001

VTD                   virdisk01
LUN                   0x8100000000000000
Backing device        clientlv01

Take note of the VTD and Backing device name.  In this case, the backing device mapped to virtual scsi disk, hdisk4, is logical volume, clientlv01, and it is associated with Virtual Target Device, virdisk01.

4. Remove the virtual scsi disk definition from the device configuration database on the VIO client

To remove the vscsi definition, run:

# rmdev -dl hdisk#

Ensure you know the backing device associated with the virtual scsi disk being removed prior to issuing the rmdev command. That information will be needed in order to clean up on the server partition. Refer to the section "How to map the virtual scsi disk (on the client partition) to the physical disk (on the server partition)".

 5. Remove the backing device on the VIO server

The backing device types currently supported are:
· logical volume
· physical volume
· optical device, starting at v1.2.0.0-FP7 (but not currently supported on System i)

Prior to removing the backing device, the virtual target device must be removed first. To do so, run the following as padmin:

$ rmdev -dev <VTD name>
$ rmlv <LVname>

or you can remove both the VTD and logical volume in one command by running:

$ rmvdev -vtd <VTD name> -rmlv

In the case where the backing device is a physical volume, removing the virtual target device completes this procedure.

If you need to determine the physical device and volume group that the logical volume belongs to, you can issue the following commands prior to running rmlv or rmvdev.
$ lslv -pv <LVname>    List the physical volume that the logical volume specified resides on.
$ lslv <LVname>          Shows the characteristics of the logical volume, including the volume group name, # of mirrored copies, etc.

In our example, the backing device is a logical volume, clientlv01, and it resides on the physical device, hdisk3:

$ lslv -pv clientlv01
PV                COPIES        IN BAND       DISTRIBUTION 
hdisk3            080:000:000   100%          000:080:000:000:000

$ rmdev -dev virdisk01
virdisk01 deleted

$ rmlv clientlv01
Warning, all data contained on logical volume clientlv01 will be destroyed.
rmlv: Do you wish to continue? y(es) n(o)? y
rmlv: Logical volume clientlv01 is removed.

Related Documentation

Virtual I/O Server Website

Relevant Links in Documentation Tab:
· IBM System p Advanced POWER Virtualization Best Practices Redbook
· IBM System Hardware Information Center
· VIOS Commands Reference

Saturday, 7 June 2014

AIX NFS Error and Solution - RPC: 1832-010 Authentication error

[root-umserv1][/]> mount umserv2:/repos /mymnt
mount: 1831-008 giving up on:
vmount: The file access permissions do not allow the specified action.
NFS fsinfo failed for server umserv2: error 7 (RPC: 1832-010 Authentication error)
To fix this issue, check the "portcheck" and "nfs_use_reserved_ports" tunables; if they are set to 0, set them to 1:
[root-umserv1][/]> nfso -a | grep port
portcheck = 0
nfs_use_reserved_ports = 0

[root-umserv1][/]> nfso -po portcheck=1
Setting portcheck to 1
Setting portcheck to 1 in nextboot file

[root-umserv1][/]> nfso -po nfs_use_reserved_ports=1
Setting nfs_use_reserved_ports to 1
Setting nfs_use_reserved_ports to 1 in nextboot file

[root-umserv1][/]> mount umserv2:/repos /mymnt