Tuesday, 21 May 2013

Upgrading the GPFS cluster on AIX


  • Upgrade all the GPFS nodes at the same time.
  • Make sure the application is fully stopped.
  • Unmount all the GPFS file systems before the OS upgrade starts. The file systems cannot be unmounted while application processes are still using them.
  • Stop the GPFS cluster before the OS upgrade starts.
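The checklist above can be tied together in a small wrapper script. This is a minimal sketch, assuming the GPFS admin commands (mmlsmount, mmumount, mmshutdown) are in PATH on a quorum node; the DRY_RUN guard and the run helper are illustrative additions, not part of the original procedure.

```shell
#!/bin/sh
# Pre-upgrade sketch: show what is mounted, unmount everything, stop GPFS.
# DRY_RUN=1 (the default here) only prints each step instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}
run mmlsmount all      # confirm which file systems are mounted, and where
run mmumount all -a    # unmount every GPFS file system on every node
run mmshutdown -a      # stop the GPFS daemons cluster-wide
```

Set DRY_RUN=0 only after the preview shows the expected sequence.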

1) View the cluster information

Complete the steps below before starting the OS upgrade.
# mmlscluster                                  
Example output:
GPFS cluster information
========================
  GPFS cluster name:         HOST.test1-gpfs
  GPFS cluster id:           13882565243868289165
  GPFS UID domain:           HOST.test1-gpfs
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
  Primary server:    test1-gpfs
  Secondary server:  test2-gpfs
Node  Daemon node name            IP address       Admin node name             Designation
-----------------------------------------------------------------------------------------------
   1   test1-gpfs            192.168.199.137  test1-gpfs            quorum
   2   test3-gpfs            192.168.199.138  test3-gpfs            quorum
   3   test2-gpfs            192.168.199.139  test2-gpfs            quorum

2) View all the GPFS file systems

# mmlsfs all                                    
Example output:

File system attributes for /dev/gpfs1001:
=========================================
flag value            description
---- ---------------- -----------------------------------------------------
-f  131072           Minimum fragment size in bytes
-i  512              Inode size in bytes
-I  32768            Indirect block size in bytes
-m  1                Default number of metadata replicas
-M  2                Maximum number of metadata replicas
-r  1                Default number of data replicas
-R  2                Maximum number of data replicas
-j  cluster          Block allocation type
-D  nfs4             File locking semantics in effect
-k  all              ACL semantics in effect
-a  -1               Estimated average file size
-n  64               Estimated number of nodes that will mount file system
-B  4194304          Block size
-Q  user;group;fileset Quotas enforced
     none             Default quotas enabled
-F  1000000          Maximum number of inodes
-V  10.01 (3.2.1.5)  File system version
-u  yes              Support for large LUNs?
-z  no               Is DMAPI enabled?
-L  4194304          Logfile size
-E  yes              Exact mtime mount option
-S  no               Suppress atime mount option
-K  whenpossible     Strict replica allocation option
-P  system           Disk storage pools in file system
-d  gpfs1nsd;gpfs2nsd;gpfs3nsd;gpfs4nsd  Disks in file system
-A  yes              Automatic mount option
-o  none             Additional mount options
-T  /sasmart         Default mount point
File system attributes for /dev/gpfs1002:
=========================================
flag value            description
---- ---------------- -----------------------------------------------------
-f  131072           Minimum fragment size in bytes
-i  512              Inode size in bytes
-I  32768            Indirect block size in bytes
-m  1                Default number of metadata replicas
-M  2                Maximum number of metadata replicas
-r  1                Default number of data replicas
-R  2                Maximum number of data replicas
-j  cluster          Block allocation type
-D  nfs4             File locking semantics in effect
-k  all              ACL semantics in effect
-a  -1               Estimated average file size
-n  64               Estimated number of nodes that will mount file system
-B  4194304          Block size
-Q  user;group;fileset Quotas enforced
     none             Default quotas enabled
-F  1000000          Maximum number of inodes
-V  10.01 (3.2.1.5)  File system version
-u  yes              Support for large LUNs?
-z  no               Is DMAPI enabled?
-L  4194304          Logfile size
-E  yes              Exact mtime mount option
-S  no               Suppress atime mount option
-K  whenpossible     Strict replica allocation option
-P  system           Disk storage pools in file system
-d  gpfs5nsd       Disks in file system
-A  yes              Automatic mount option
-o  none             Additional mount options
-T  /sasplex1        Default mount point
File system attributes for /dev/gpfs1003:
=========================================
flag value            description
---- ---------------- -----------------------------------------------------
-f  131072           Minimum fragment size in bytes
-i  512              Inode size in bytes
-I  32768            Indirect block size in bytes
-m  1                Default number of metadata replicas
-M  2                Maximum number of metadata replicas
-r  1                Default number of data replicas
-R  2                Maximum number of data replicas
-j  scatter          Block allocation type
-D  nfs4             File locking semantics in effect
-k  all              ACL semantics in effect
-a  -1               Estimated average file size
-n  64               Estimated number of nodes that will mount file system
-B  4194304          Block size
-Q  user;group;fileset Quotas enforced
     none             Default quotas enabled
-F  1000000          Maximum number of inodes
-V  10.01 (3.2.1.5)  File system version
-u  yes              Support for large LUNs?
-z  no               Is DMAPI enabled?
-L  4194304          Logfile size
-E  yes              Exact mtime mount option
-S  no               Suppress atime mount option
-K  whenpossible     Strict replica allocation option
-P  system           Disk storage pools in file system
-d  gpfs6nsd;gpfs7nsd;gpfs8nsd;gpfs9nsd;gpfs10nsd;gpfs11nsd;gpfs12nsd;gpfs13nsd;gpfs14nsd;gpfs15nsd;gpfs16nsd;gpfs17nsd;gpfs18nsd;gpfs19nsd;gpfs20nsd;gpfs21nsd;gpfs22nsd;
-d  gpfs23nsd;gpfs24nsd;gpfs25nsd;gpfs26nsd;gpfs27nsd;gpfs28nsd;gpfs29nsd;gpfs30nsd;gpfs31nsd;gpfs32nsd;gpfs33nsd;gpfs34nsd;gpfs35nsd;gpfs36nsd;gpfs37nsd;gpfs38nsd;gpfs39nsd;
-d  gpfs40nsd;gpfs41nsd;gpfs42nsd;gpfs43nsd;gpfs44nsd;gpfs45nsd;gpfs46nsd;gpfs47nsd;gpfs48nsd;gpfs49nsd;gpfs50nsd;gpfs51nsd;gpfs52nsd;gpfs53nsd;gpfs54nsd;gpfs55nsd;gpfs56nsd;
-d  gpfs57nsd;gpfs58nsd;gpfs59nsd;gpfs60nsd;gpfs61nsd;gpfs62nsd;gpfs63nsd;gpfs64nsd;gpfs65nsd;gpfs66nsd;gpfs67nsd;gpfs68nsd;gpfs69nsd  Disks in file system
-A  yes              Automatic mount option
-o  none             Additional mount options
-T  /app1            Default mount point
File system attributes for /dev/gpfs1004:
=========================================
flag value            description
---- ---------------- -----------------------------------------------------
-f  131072           Minimum fragment size in bytes
-i  512              Inode size in bytes
-I  32768            Indirect block size in bytes
-m  1                Default number of metadata replicas
-M  2                Maximum number of metadata replicas
-r  1                Default number of data replicas
-R  2                Maximum number of data replicas
-j  cluster          Block allocation type
-D  nfs4             File locking semantics in effect
-k  all              ACL semantics in effect
-a  -1               Estimated average file size
-n  64               Estimated number of nodes that will mount file system
-B  4194304          Block size
-Q  user;group;fileset Quotas enforced
     none             Default quotas enabled
-F  1000000          Maximum number of inodes
-V  10.01 (3.2.1.5)  File system version
-u  yes              Support for large LUNs?
-z  no               Is DMAPI enabled?
-L  4194304          Logfile size
-E  yes              Exact mtime mount option
-S  no               Suppress atime mount option
-K  whenpossible     Strict replica allocation option
-P  system           Disk storage pools in file system
-d  gpfs70nsd      Disks in file system
-A  yes              Automatic mount option
-o  none             Additional mount options
-T  /sasuserhome     Default mount point

3) View the number of nodes each GPFS file system is mounted on

# mmlsmount all                                          
Example output:
File system gpfs1001 is mounted on 3 nodes.
File system gpfs1002 is mounted on 3 nodes.
File system gpfs1003 is mounted on 3 nodes.
File system gpfs1004 is mounted on 3 nodes.

4) Check the existing GPFS version

# lslpp -l |grep -i gpfs
Example output:
  gpfs.base                 3.2.1.18  APPLIED    GPFS File Manager
  gpfs.msg.en_US            3.2.1.11  APPLIED    GPFS Server Messages - U.S.
  gpfs.base                 3.2.1.18  APPLIED    GPFS File Manager
  gpfs.docs.data             3.2.1.1  APPLIED    GPFS Server Manpages and

5) Unmount all GPFS file systems

# mmumount all -N test1-gpfs,test3-gpfs,test2-gpfs
Example output:
Wed May 11 00:05:35 CDT 2011: 6027-1674 mmumount: Unmounting file systems ...

6) Verify all GPFS file systems are unmounted

# mmlsmount all                            
Example output:
File system gpfs1001 is not mounted.
File system gpfs1002 is not mounted.
File system gpfs1003 is not mounted.
File system gpfs1004 is not mounted.
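A quick scripted version of this check can parse the `mmlsmount all` output and refuse to proceed if anything is still mounted. The here-document below stands in for the live command so the logic can be shown without a running cluster.

```shell
#!/bin/sh
# Parse saved `mmlsmount all` output; any "is mounted on" line means a file
# system is still mounted somewhere and the upgrade must not proceed.
out=$(cat <<'EOF'
File system gpfs1001 is not mounted.
File system gpfs1002 is not mounted.
File system gpfs1003 is not mounted.
File system gpfs1004 is not mounted.
EOF
)
if echo "$out" | grep -q "is mounted on"; then
    status="STILL MOUNTED - do not proceed"
else
    status="ALL UNMOUNTED - safe to proceed"
fi
echo "$status"
```

On a live node, replace the here-document with `out=$(mmlsmount all)`.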

7) Stop the GPFS cluster

# mmshutdown -a                      
Example output:
Wed May 11 00:08:22 CDT 2011: 6027-1341 mmshutdown: Starting force unmount of GPFS file systems
Wed May 11 00:08:27 CDT 2011: 6027-1344 mmshutdown: Shutting down GPFS daemons
test3-gpfs:  Shutting down!
test2-gpfs:  Shutting down!
test3-gpfs:  'shutdown' command about to kill process 516190
test2-gpfs:  'shutdown' command about to kill process 483444
test1-gpfs:  Shutting down!
test1-gpfs:  'shutdown' command about to kill process 524420
test1-gpfs:  Master did not clean up; attempting cleanup now
test1-gpfs:  Wed May 11 00:09:28.423 2011: GPFS: 6027-311 mmfsd64 is shutting down.
test1-gpfs:  Wed May 11 00:09:28.424 2011: Reason for shutdown: mmfsadm shutdown command timed out
test1-gpfs:  Wed May 11 00:09:28 CDT 2011: mmcommon mmfsdown invoked.  Subsystem: mmfs  Status: down
test1-gpfs:  Wed May 11 00:09:28 CDT 2011: 6027-1674 mmcommon: Unmounting file systems ...
Wed May 11 00:09:33 CDT 2011: 6027-1345 mmshutdown: Finished

8) Verify no GPFS processes are still running

# ps -ef|grep -i gpfs                      
After the GPFS cluster is stopped, proceed with the OS patching/upgrade.
Once the OS patching/upgrade is complete, upgrade GPFS. Make sure the GPFS file systems are not mounted.
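Before handing the node over for patching, a hedged guard like the following can confirm the GPFS daemon is really down. The bracketed grep pattern is a common trick to keep the grep process itself out of the match.

```shell
#!/bin/sh
# Refuse to continue if the GPFS daemon is still running on this node.
# The pattern '[m]mfsd' matches the daemon name but not this grep command.
daemon='[m]mfsd'
if ps -ef | grep "$daemon" >/dev/null; then
    gpfs_state="GPFS daemon still running - run mmshutdown first"
else
    gpfs_state="GPFS daemon is down - OK to patch"
fi
echo "$gpfs_state"
```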

9) Mount the NIM directory on /mnt

# mount wydainim010:/export/ibm_lpp /mnt

10) Change directory to the location of the new GPFS version filesets

# cd /mnt/gpfs/3.3/3.3.0.12

11) Use smitty update_all to update the filesets, running a preview first

# smitty update_all
Example output:
  INPUT device / directory for software               .
* SOFTWARE to update                                  _update_all
  PREVIEW only? (update operation will NOT occur)     yes       =====> Select yes for preview
  COMMIT software updates?                            no        =====> Select no so the updates stay in the APPLIED state
  SAVE replaced files?                                yes       =====> Select yes here
  AUTOMATICALLY install requisite software?           yes
  EXTEND file systems if space needed?                yes
  VERIFY install and check file sizes?                no
  DETAILED output?                                    no
  Process multiple volumes?                           yes
  ACCEPT new license agreements?                      yes       =====> Accept the new license agreements
  Preview new LICENSE agreements?                     no
If everything is fine at the PREVIEW stage, proceed with upgrading the GPFS filesets by setting PREVIEW only? to no.
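For scripted upgrades, a non-interactive equivalent of the smitty panel can be built with installp; this is a sketch, so verify the flags against your AIX level before using it (-a apply, -g pull in requisites, -X extend file systems, -Y accept licenses, -p preview only, -d source directory).

```shell
#!/bin/sh
# Build the installp command matching the smitty update_all choices above.
# PREVIEW=1 (the default here) adds -p so nothing is actually installed.
SRC=/mnt/gpfs/3.3/3.3.0.12
PREVIEW=${PREVIEW:-1}
if [ "$PREVIEW" = 1 ]; then
    CMD="installp -agXY -p -d $SRC all"
else
    CMD="installp -agXY -d $SRC all"
fi
echo "$CMD"   # run the echoed command once the preview output is clean
```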

12) Verify the new GPFS fileset versions

# lslpp -l |grep -i gpfs              
Example output:
  gpfs.base                  3.3.0.8  APPLIED    GPFS File Manager
  gpfs.msg.en_US             3.3.0.5  APPLIED    GPFS Server Messages - U.S.
  gpfs.base                  3.3.0.8  APPLIED    GPFS File Manager
  gpfs.docs.data             3.3.0.1  APPLIED    GPFS Server Manpages and
Then continue with the EMC (storage) upgrade. Once the EMC upgrade is done, make sure all the PVs are available on all nodes, then start the GPFS cluster.
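The post-EMC-upgrade disk check can be sketched as below: every hdisk should be in the Available state on every node. The here-document stands in for live `lsdev -Cc disk` output, and the hdisk names in it are made up for illustration.

```shell
#!/bin/sh
# Flag any disk that is not in the Available state after the storage upgrade.
# Field 2 of `lsdev -Cc disk` output is the device state.
out=$(cat <<'EOF'
hdisk0 Available 00-08-00 SAS Disk Drive
hdisk1 Available 01-08-01 EMC Symmetrix FCP Disk
hdisk2 Defined   01-08-02 EMC Symmetrix FCP Disk
EOF
)
bad=$(echo "$out" | awk '$2 != "Available" {print $1}')
if [ -n "$bad" ]; then
    disk_check="NOT AVAILABLE: $bad"
else
    disk_check="All disks available"
fi
echo "$disk_check"
```

On a live node, replace the here-document with `out=$(lsdev -Cc disk)`.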

13) Start the GPFS cluster

# mmstartup -a                                            
Example output:
Wed May 11 06:09:32 CDT 2011: 6027-1642 mmstartup: Starting GPFS ...

14) Check the GPFS cluster

# mmlscluster                                
Example output:
GPFS cluster information
========================
  GPFS cluster name:         HOST.test1-gpfs
  GPFS cluster id:           13882565243868289165
  GPFS UID domain:           HOST.test1-gpfs
  Remote shell command:      /usr/bin/ssh
  Remote file copy command:  /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
  Primary server:    test1-gpfs
  Secondary server:  test2-gpfs
Node  Daemon node name            IP address       Admin node name             Designation
-----------------------------------------------------------------------------------------------
   1   test1-gpfs            192.168.199.137  test1-gpfs            quorum
   2   test3-gpfs            192.168.199.138  test3-gpfs            quorum
   3   test2-gpfs            192.168.199.139  test2-gpfs            quorum

15) Check the GPFS cluster state on all nodes

# mmgetstate -a                                            
Example output:
Node number  Node name        GPFS state
------------------------------------------
       1      test1-gpfs active
       2      test3-gpfs active
       3      test2-gpfs active

16) Check all the file systems

# mmlsfs all                                       

17) Mount all the GPFS file systems

# mmmount all -a                       
Wed May 11 06:13:16 CDT 2011: 6027-1623 mmmount: Mounting file systems ...

18) Check that the file systems are mounted on all nodes

# mmlsmount all                        
Example output:
File system gpfs1001 is mounted on 3 nodes.
File system gpfs1002 is mounted on 3 nodes.
File system gpfs1003 is mounted on 3 nodes.
File system gpfs1004 is mounted on 3 nodes.
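As a final check, the `mmlsmount all` output can be parsed to confirm every file system came back on the expected number of nodes (3 in this cluster). The here-document stands in for the live command.

```shell
#!/bin/sh
# Field 3 of each mmlsmount line is the file system name, field 7 the node
# count; report any file system mounted on fewer nodes than expected.
EXPECTED=3
out=$(cat <<'EOF'
File system gpfs1001 is mounted on 3 nodes.
File system gpfs1002 is mounted on 3 nodes.
File system gpfs1003 is mounted on 3 nodes.
File system gpfs1004 is mounted on 3 nodes.
EOF
)
short=$(echo "$out" | awk -v n="$EXPECTED" '$7 != n {print $3}')
if [ -n "$short" ]; then
    mount_check="CHECK: $short"
else
    mount_check="All file systems mounted on $EXPECTED nodes"
fi
echo "$mount_check"
```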

19) Verify the GPFS cluster configuration information

# mmlsconfig                                
Example output:
Configuration data for cluster HOST.test1-gpfs:
----------------------------------------------------------
clusterName HOST.test1-gpfs
clusterId 13882565243868289165
clusterType lc
autoload yes
minReleaseLevel 3.2.1.5
dmapiFileHandleSize 32
maxblocksize 4096K
pagepool 1024M
maxFilesToCache 5000
maxStatCache 40000
maxMBpS 3200
prefetchPct 60
seqDiscardThreshhold 10240000000
worker1Threads 400
prefetchThreads 145
adminMode allToAll
File systems in cluster HOST.test1-gpfs:
---------------------------------------------------
/dev/gpfs1001
/dev/gpfs1002
/dev/gpfs1003
/dev/gpfs1004
