Question & Answer
Question
How do you validate the DB2 pureScale feature? Are there any tips and hints for verifying and validating the DB2 pureScale feature for Enterprise Server Edition? The Answer section below walks through the validation steps for a pureScale environment.
Cause
Because pureScale instance configuration involves multiple steps on multiple hosts, you must verify, on every host, each task completed as part of the instance configuration.
Answer
The following validation steps can be performed on a DB2 pureScale environment. Each step gives a description, the command to run, and sample output.
Pre-installation validation
Before installation, perform the following steps to validate the deployment environment.
1. Verify that the required OS level and service pack are installed on all the machines that will be part of the cluster. The minimum required level is AIX Version 6.1 Technology Level (TL) 3 Service Pack (SP) 2.
# oslevel -s
6100-03-02-0939
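Several of the pre-installation checks reduce to comparing a reported level against a minimum. As a minimal sketch (the sample value is copied from the output above; the comparison logic is an illustration, not a DB2 utility), the oslevel check can be scripted like this:

```shell
#!/bin/sh
# Sketch: compare an `oslevel -s` string against the minimum required
# level (AIX 6.1 TL3 SP2 = 6100-03-02). The sample value below is copied
# from the output above; on a real host, use: current=$(oslevel -s)
current="6100-03-02-0939"
minimum="6100-03-02"

# The oslevel -s fields are zero-padded, so the TL/SP portion can be
# compared as plain strings after trimming the trailing build field.
current_tlsp=$(echo "$current" | cut -d- -f1-3)
lowest=$(printf '%s\n%s\n' "$minimum" "$current_tlsp" | sort | head -1)
if [ "$lowest" = "$minimum" ]; then
    status="OK"
else
    status="TOO_LOW"
fi
echo "OS level $current: $status"
```

Run this on each host (for example, over ssh) and confirm every host reports OK.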
2. Ensure the required System Firmware is installed. The minimum required System Firmware level is 3.4.5 or 3.5. The following is the command to verify the System Firmware level, followed by a sample output:
# lsmcode -A
sys0!system:EH350_038 (t) EH350_028 (p) EH350_038 (t)
3. Verify that the user-level Direct Access Programming Library (uDAPL) is installed, set up, and configured. The minimum required uDAPL level is 6.1.0.1. The following is the command to verify uDAPL is installed, followed by a sample output:
# lslpp -l bos.mp64 devices.chrp.IBM.lhca.rte devices.common.IBM.ib.rte udapl.rte
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
bos.mp64 6.1.4.1 COMMITTED Base Operating System 64-bit
Multiprocessor Runtime
devices.chrp.IBM.lhca.rte 6.1.4.1 COMMITTED Infiniband Logical HCA Runtime
Environment
devices.common.IBM.ib.rte 6.1.4.1 COMMITTED Infiniband Common Runtime
Environment
udapl.rte 6.1.0.1 APPLIED uDAPL
Path: /etc/objrepos
bos.mp64 6.1.4.1 COMMITTED Base Operating System 64-bit
Multiprocessor Runtime
devices.chrp.IBM.lhca.rte 6.1.4.1 COMMITTED Infiniband Logical HCA Runtime
Environment
devices.common.IBM.ib.rte 6.1.4.1 COMMITTED Infiniband Common Runtime
Environment
udapl.rte 6.1.0.1 APPLIED uDAPL
4. The minimum required C++ runtime level is xlC.rte 9.0.0.8. The following is the command to verify it, followed by a sample output:
# lslpp -l xlC.rte
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
xlC.rte 9.0.0.8 COMMITTED XL C/C++ Runtime
5. Ensure that OpenSSH is installed and password-less access for the root user is configured on each host. The minimum required OpenSSH level is 4.5.0.5302. The following is the command to verify OpenSSH is installed, followed by a sample output:
# lslpp -la "openssh.*"
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
openssh.base.client 4.5.0.5302 COMMITTED Open Secure Shell Commands
openssh.base.server 4.5.0.5302 COMMITTED Open Secure Shell Server
openssh.license 4.5.0.5302 COMMITTED Open Secure Shell License
openssh.man.en_US 4.5.0.5302 COMMITTED Open Secure Shell
Documentation - U.S. English
openssh.msg.EN_US 4.5.0.5302 COMMITTED Open Secure Shell Messages -
U.S. English (UTF)
openssh.msg.en_US 4.5.0.5302 COMMITTED Open Secure Shell Messages -
U.S. English
Path: /etc/objrepos
openssh.base.client 4.5.0.5302 COMMITTED Open Secure Shell Commands
openssh.base.server 4.5.0.5302 COMMITTED Open Secure Shell Server
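The fileset-level checks in steps 3 through 5 can be scripted the same way. Below is a hedged sketch of a dotted-version comparison; the sample line is copied from the lslpp output above, and the meets_minimum helper is one illustrative way to compare levels, not an AIX or DB2 utility:

```shell
#!/bin/sh
# Sketch: check that a fileset level from lslpp-style output meets a
# minimum. The sample line is copied from the output above; on a real
# host, extract the level (column 2) from `lslpp -l <fileset>` instead.
sample_line="openssh.base.client 4.5.0.5302 COMMITTED Open Secure Shell Commands"
level=$(echo "$sample_line" | awk '{print $2}')
minimum="4.5.0.5302"

# Compare dotted levels field by field, numerically.
meets_minimum() {
    lowest=$(printf '%s\n%s\n' "$1" "$2" \
        | sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -1)
    [ "$lowest" = "$2" ]
}

if meets_minimum "$level" "$minimum"; then
    echo "OpenSSH level OK: $level"
else
    echo "OpenSSH level too low: $level (need $minimum)"
fi
```

The same helper applies to the uDAPL (6.1.0.1) and xlC.rte (9.0.0.8) minimums.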
6. Ensure that the shared disks accessed by all hosts have the same physical volume identifier (PVID) configured. Compare the lspv results across every host in the DB2 pureScale instance:
# lspv
hdisk0 000384291e908414 rootvg active
hdisk1 000384297fb6f4e0 None
hdisk2 000384297fb725c0 None
hdisk3 000384297fb746a1 None
hdisk4 000384297fb771e7 None
hdisk5 000384297fb78c49 None
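A sketch of the cross-host PVID comparison follows. The two here-strings stand in for lspv output collected from two hosts (for example, over ssh); the disk names and PVIDs are taken from the sample above:

```shell
#!/bin/sh
# Sketch: confirm each shared hdisk carries the same PVID on two hosts.
# The variables below stand in for `lspv` output gathered from each
# host; disk names and PVIDs are copied from the sample above.
host_a_lspv="hdisk1 000384297fb6f4e0 None
hdisk2 000384297fb725c0 None"
host_b_lspv="hdisk1 000384297fb6f4e0 None
hdisk2 000384297fb725c0 None"

mismatches=0
# For each disk on host A (column 1), look up the PVID (column 2)
# for the same disk name on host B and compare.
while read -r disk pvid _; do
    other=$(echo "$host_b_lspv" | awk -v d="$disk" '$1 == d {print $2}')
    if [ "$pvid" != "$other" ]; then
        echo "PVID mismatch on $disk: $pvid vs $other"
        mismatches=$((mismatches + 1))
    fi
done <<EOF
$host_a_lspv
EOF
echo "mismatches: $mismatches"
```

Any nonzero mismatch count means the shared disks are not configured consistently and must be corrected before instance creation.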
7. Ensure that the following Virtual Shared Disk (VSD) packages are installed on the system:
# lslpp -l rsct* | grep vsd
rsct.vsd.cmds 4.2.0.0 COMMITTED VSD Commands
rsct.vsd.rvsd 4.2.0.0 COMMITTED Recoverable VSD
rsct.vsd.vsdd 4.2.0.0 COMMITTED VSD Device Driver
rsct.vsd.vsdrm 4.2.0.0 COMMITTED VSD Resource Manager
rsct.vsd.cmds 4.2.0.0 COMMITTED VSD Commands
rsct.vsd.rvsd 4.2.0.0 COMMITTED Recoverable VSD
rsct.vsd.vsdd 4.2.0.0 COMMITTED VSD Device Driver
rsct.vsd.vsdrm 4.2.0.0 COMMITTED VSD Resource Manager
Note: The examples that follow use four AIX 6.1 LPARs, of which two are members and two are cluster caching facilities (CFs), to demonstrate the basic validation steps.
Installation (binaries) validation
The following basic validation steps can be performed after the binaries are installed on all the hosts that are part of the cluster.
8. Verify that db2_install or db2setup displays the success message.
Note: No log file option was used here, so the default log file is created. You can choose your own log file by passing the "-l" option to the install utilities.
The execution completed successfully.
For more information see the DB2 installation log at
"/tmp/db2_install.log.364788".
9. Scan the installation log file for any error messages, and also check for success messages. Every activity performed by the install utilities is logged, so the log file is the key file through which most of the validation can be done. Here is a part of the log file:
Checking license agreement acceptance:...Success
Installing BASE_CLIENT_R
Installing DB2_PRODUCT_MESSAGES_EN
Installing BASE_CLIENT
Installing JAVA_RUNTIME_SUPPORT
Installing DB2_JAVA_HELP_EN
Note: The remaining components in the list are omitted here.
The following two lines appear in the log if no errors occurred during installation:
Binary installation succeeded on the following hosts: "coralpib148, coralpib149,
coralpib150".
Installing DB2 file sets:.......Success
Note: Here coralpib148, coralpib149, and coralpib150 are remote hosts. This line is present only for GUI/silent db2setup installations, since db2_install installs binaries on a single machine.
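A minimal sketch of the log scan described in step 9 is shown below. It creates a stand-in log for illustration; on a real system you would point the script at the log file reported by the installer:

```shell
#!/bin/sh
# Sketch: scan an install log for error lines and confirm the success
# marker is present. A temporary stand-in log is created here for
# illustration only; substitute the installer's actual log path.
log=$(mktemp)
cat > "$log" <<'EOF'
Checking license agreement acceptance:...Success
Installing DB2 file sets:.......Success
The execution completed successfully.
EOF

# Count lines mentioning "error" (case-insensitive) and require the
# final success marker.
errors=$(grep -ci "error" "$log")
if [ "$errors" -eq 0 ] && grep -q "The execution completed successfully." "$log"; then
    verdict="clean"
else
    verdict="needs review"
fi
echo "install log: $verdict ($errors error line(s))"
rm -f "$log"
```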
10. Verify that your DB2 database product is installed in the path given at the time of installation by running the /usr/local/bin/db2ls command:
# /usr/local/bin/db2ls
Install Path Level Fix Pack Special Install Number Install Date Installer UID
---------------------------------------------------------------------------------------------------------------------
/opt/IBM/db2/V9.8 9.8.0.2 2 Thu Jul 1 09:17:51 2010 EDT 0
The plain db2ls output confirms that the installation path is "/opt/IBM/db2/V9.8" and the VRMF is "9.8.0.2", of which the fix pack is "2".
The name of the installed product can be identified with the following command. For example, ENTERPRISE_SERVER_EDITION_DSF is installed in /opt/IBM/db2/V9.8:
# /usr/local/bin/db2ls -p -q -b /opt/IBM/db2/V9.8
Install Path: /opt/IBM/db2/V9.8
Product Response File ID Level Fix Pack Product Description
---------------------------------------------------------------------------------------------------------------------
ENTERPRISE_SERVER_EDITION_DSF 9.8.0.2 2 DB2 Enterprise Server Edition with the pureScale Feature
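If you need the path, level, and fix pack programmatically, the db2ls data line can be parsed with awk. The sample line below is copied from the output above; the field positions are an assumption based on that sample:

```shell
#!/bin/sh
# Sketch: extract install path, level, and fix pack from a db2ls data
# line. The line is copied from the sample output above; on a real host,
# feed the output of /usr/local/bin/db2ls (skipping the header rows).
line="/opt/IBM/db2/V9.8   9.8.0.2   2   Thu Jul 1 09:17:51 2010 EDT   0"

path=$(echo "$line" | awk '{print $1}')     # first whitespace field
level=$(echo "$line" | awk '{print $2}')    # VRMF
fixpack=$(echo "$line" | awk '{print $3}')  # fix pack number
echo "path=$path level=$level fixpack=$fixpack"
```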
Validation of list of components installed:
11. The list of all components under a particular installation path can be generated with the following command:
# /usr/local/bin/db2ls -q -a -b /opt/IBM/db2/V9.8
Install Path : /opt/IBM/db2/V9.8
Feature Response File ID Level Fix Pack Feature Description
---------------------------------------------------------------------------------------------------------------------
BASE_CLIENT_R 9.8.0.2 2 Base Client Support for installation with root privileges
DB2_PRODUCT_MESSAGES_EN 9.8.0.2 2 Product Messages - English
BASE_CLIENT 9.8.0.2 2 Base client support
JAVA_RUNTIME_SUPPORT 9.8.0.2 2 Java Runtime Support
DB2_JAVA_HELP_EN 9.8.0.2 2 Java Help (HTML) - English
BASE_DB2_ENGINE_R 9.8.0.2 2 Base server support for installation with root privileges
ACS 9.8.0.2 2 Integrated Flash Copy Support
GSK 9.8.0.2 2 Global Secure ToolKit
JAVA_SUPPORT 9.8.0.2 2 Java support
SQL_PROCEDURES 9.8.0.2 2 SQL procedures
ICU_SUP 9.8.0.2 2 ICU Utilities
JAVA_COMMON_FILES 9.8.0.2 2 Java Common files
BASE_DB2_ENGINE 9.8.0.2 2 Base server support
JDK 9.8.0.2 2 IBM Software Development Kit (SDK) for Java(TM)
CONNECT_SUPPORT 9.8.0.2 2 Connect support
RELATIONAL_WRAPPERS_COMMON 9.8.0.2 2 Relational wrappers common
DB2_DATA_SOURCE_SUPPORT 9.8.0.2 2 DB2 data source support
LDAP_EXPLOITATION 9.8.0.2 2 DB2 LDAP support
INSTANCE_SETUP_SUPPORT 9.8.0.2 2 DB2 Instance Setup wizard
CONTROL_SERVER 9.8.0.2 2 Control Server
SPATIAL_EXTENDER_CLIENT_SUPPORT 9.8.0.2 2 Spatial Extender client
COMMUNICATION_SUPPORT_TCPIP 9.8.0.2 2 Communication support - TCP/IP
APPLICATION_DEVELOPMENT_TOOLS 9.8.0.2 2 Base application development tools
ESE_DSF_COMMON 9.8.0.2 2 ese dsf common
DB2_UPDATE_SERVICE 9.8.0.2 2 DB2 Update Service
DATABASE_PARTITIONING_SUPPORT 9.8.0.2 2 Parallel Extension
EDB 9.8.0.2 2 EnterpriseDB code
REPL_CLIENT 9.8.0.2 2 Replication tools
CF 9.8.0.2 2 PowerHA pureScale
DB2_SAMPLE_DATABASE 9.8.0.2 2 Sample database source
INFORMIX_DATA_SOURCE_SUPPORT 9.8.0.2 2 Informix data source support
DSF_PRODUCT_SIGNATURE 9.8.0.2 2 Product Signature for DB2 Enterprise Server Edition with the pureScale Feature
Tivoli SA MP 3.1.5.7
12. Verify that the expected files and directories exist under the installation path reported by db2ls. Here is a sample of the files and directories under the installation path:
# ls /opt/IBM/db2/V9.8
.licbkup acs bin cfg dasfcn function include install java license misc samples security64
.metadata adm bnd conv doc gskit include32 instance lib32 logs msg sd tivready
Readme adsm cf das dsdriver ha infopop itma lib64 map properties security32 tools
13. Verify that Tivoli SA MP is installed on the system. First check whether the log file contains the entry for the TSA MP installation. The following information is logged if the TSA MP installation succeeded:
Installing or updating DB2 HA scripts for Tivoli SA MP :.......Success
After this, run the following command to confirm it. The following two file sets should appear in the lslpp output:
# lslpp -l | grep -i "sam\."
sam.adapter 3.1.0.7 COMMITTED SAM adapter for end-to-end
sam.core.rte 3.1.5.7 COMMITTED SA CHARM Runtime Commands
14. Verify that GPFS is installed on the system. First check whether the log file contains the entry for the GPFS installation. The following information is present in the log if the GPFS installation succeeded:
Installing or updating DB2 Cluster Scripts for GPFS :.......Success
After this, run the following command to confirm it:
# lslpp -l | grep -i "gpfs\."
gpfs.base 3.3.0.5 COMMITTED GPFS File Manager
gpfs.msg.en_US 3.3.0.3 COMMITTED GPFS Server Messages - U.S.
gpfs.base 3.3.0.5 COMMITTED GPFS File Manager
gpfs.docs.data 3.3.0.1 COMMITTED GPFS Server Manpages and
15. Ensure that I/O completion ports (IOCP) support is installed and available by running the following commands.
Validation of iocp:
# lslpp -l bos.iocp.rte
Fileset Level State Description
----------------------------------------------------------------------------
Path: /usr/lib/objrepos
bos.iocp.rte 6.1.3.1 COMMITTED I/O Completion Ports API
Path: /etc/objrepos
bos.iocp.rte 6.1.0.0 COMMITTED I/O Completion Ports API
# lsdev -Cc iocp
iocp0 Available I/O Completion Ports
Instance validation
After the binaries are installed, the next step of pureScale deployment is DSF instance creation. This involves multiple steps, such as TSA peer domain creation, GPFS cluster creation, file system creation, and resource creation. Each of these steps is validated briefly below.
16. Validation of the global.reg/db2greg output:
After the binaries installation and instance creation, the db2greg output should look as below. It contains three "S" entries (assuming the TSA MP, GPFS, and DB2 binaries were not installed previously); six "V" entries, including those for GPFS_CLUSTER, PEER_DOMAIN, INSTPROF, and DEFAULT_INSTPROF; and one "I" entry for the DSF instance. From the db2greg output, the following points can be summarized:
Filesystem mounted on: /db2sd_20100701092048
DB2 binaries installation path: /opt/IBM/db2/V9.8
GPFS binaries installation path: /usr/lpp/mmfs
TSA binaries installation path: /opt/IBM/tsamp
GPFS cluster: db2cluster_20100701091954.torolab.ibm.com
TSA peer domain: db2domain_20100701091919
# /opt/IBM/db2/V9.8/bin/db2greg -dump
S,GPFS,3.3.0.5,/usr/lpp/mmfs,-,-,0,0,-,1277990007,0
S,TSA,3.1.5.7,/opt/IBM/tsamp,-,-,0,0,-,1277990007,0
S,DB2,9.8.0.2,/opt/IBM/db2/V9.8,,,2,0,,1277990271,0
V,DB2GPRF,DB2SYSTEM,coralpib147,/opt/IBM/db2/V9.8,
V,INSTPROF,db2sdin1,/db2sd_20100701092048,-,-
V,DEFAULT_INSTPROF,DEFAULT,/db2sd_20100701092048,-,-
I,DB2,9.8.0.2,db2sdin1,/home/db2sdin1/sqllib,,1,0,/opt/IBM/db2/V9.8,,
V,DB2GPRF,DB2INSTDEF,db2sdin1,/opt/IBM/db2/V9.8,
V,GPFS_CLUSTER,NAME,db2cluster_20100701091954.torolab.ibm.com,-,DB2_CREATED
V,PEER_DOMAIN,NAME,db2domain_20100701091919,-,DB2_CREATED
17. Validation of instance name:
#/opt/IBM/db2/V9.8/instance/db2ilist
db2sdin1
18. Validation of db2nodes.cfg (located under the instance home directory, in sqllib):
sqllib/db2nodes.cfg has five fields. The first field is the member or CF identifier: members are numbered starting at "0" and CFs starting at "128". The second field is the hostname, the third the logical port number, the fourth the cluster interconnect netname, and the fifth a tag identifying whether the entry is a member or a CF.
A member entry is tagged with the identifier "MEMBER" at the end, while a CF entry is tagged with "CF". Failure to update the db2nodes.cfg file is considered a major error, and DB2 rolls back the instance creation in that case. The following is a sample db2nodes.cfg:
# cat <instance_home>/sqllib/db2nodes.cfg
0 coralpib147.torolab.ibm.com 0 coralpib147-ib0 - MEMBER
1 coralpib148.torolab.ibm.com 0 coralpib148-ib0 - MEMBER
128 coralpib150.torolab.ibm.com 0 coralpib150-ib0 - CF
129 coralpib149.torolab.ibm.com 0 coralpib149-ib0 - CF
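The structural rules described above (member IDs starting at 0, CF IDs starting at 128, a MEMBER or CF tag on every line) can be sanity-checked with a short script. This is a sketch using the sample file contents above, not a DB2-supplied utility:

```shell
#!/bin/sh
# Sketch: sanity-check a pureScale db2nodes.cfg. Members must have IDs
# below 128 and the MEMBER tag; CFs must have IDs of 128 or above and
# the CF tag. The here-document reproduces the sample file above.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
0 coralpib147.torolab.ibm.com 0 coralpib147-ib0 - MEMBER
1 coralpib148.torolab.ibm.com 0 coralpib148-ib0 - MEMBER
128 coralpib150.torolab.ibm.com 0 coralpib150-ib0 - CF
129 coralpib149.torolab.ibm.com 0 coralpib149-ib0 - CF
EOF

members=$(awk '$NF == "MEMBER" && $1 < 128' "$cfg" | wc -l)
cfs=$(awk '$NF == "CF" && $1 >= 128' "$cfg" | wc -l)
untagged=$(awk '$NF != "MEMBER" && $NF != "CF"' "$cfg" | wc -l)
echo "members=$members cfs=$cfs untagged=$untagged"
rm -f "$cfg"
```

In this example the expected result is two members, two CFs, and no untagged lines.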
19. Instance start/stop:
The following command reports the level of the installed image. It is rarely needed by the user, but is mentioned here for completeness.
Switch to instance user and run db2level:
$ db2level
DB21085I Instance "db2sdin1" uses "64" bits and DB2 code release "SQL09082"
with level identifier "09030107".
Informational tokens are "DB2 v9.8.0.2", "s100629", "U832207", and Fix Pack
"2".
Product is installed at "/opt/IBM/db2/V9.8".
db2stop/db2start display one timestamped message per member, followed by a summary line. Here there are 2 members and 2 CFs, hence two member messages are displayed.
Switch to instance user and run db2start/db2stop:
$ db2stop
07/01/2010 10:28:09 0 0 SQL1064N DB2STOP processing was successful.
07/01/2010 10:28:10 1 0 SQL1064N DB2STOP processing was successful.
SQL1064N DB2STOP processing was successful.
$ db2start
07/01/2010 10:29:27 0 0 SQL1063N DB2START processing was successful.
07/01/2010 10:29:27 1 0 SQL1063N DB2START processing was successful.
SQL1063N DB2START processing was successful.
20. Resource validation:
Once the resources are created, they must be online; this can be verified with the lssam output. A sample output is below. (An Offline entry for a floating resource on its non-home host is expected, as long as the resource group itself is Online.)
# lssam
Online IBM.ResourceGroup:ca_db2sdin1_0-rg Nominal=Online
'- Online IBM.Application:ca_db2sdin1_0-rs
|- Online IBM.Application:ca_db2sdin1_0-rs:coralpib149
'- Online IBM.Application:ca_db2sdin1_0-rs:coralpib150
Online IBM.ResourceGroup:db2_db2sdin1_0-rg Nominal=Online
'- Online IBM.Application:db2_db2sdin1_0-rs
|- Online IBM.Application:db2_db2sdin1_0-rs:coralpib147
'- Offline IBM.Application:db2_db2sdin1_0-rs:coralpib148
Online IBM.ResourceGroup:db2_db2sdin1_1-rg Nominal=Online
'- Online IBM.Application:db2_db2sdin1_1-rs
|- Offline IBM.Application:db2_db2sdin1_1-rs:coralpib147
'- Online IBM.Application:db2_db2sdin1_1-rs:coralpib148
Online IBM.ResourceGroup:db2mnt-db2sd_20100701092048-rg Nominal=Online
'- Online IBM.Application:db2mnt-db2sd_20100701092048-rs
|- Online IBM.Application:db2mnt-db2sd_20100701092048-rs:coralpib147
|- Online IBM.Application:db2mnt-db2sd_20100701092048-rs:coralpib148
|- Online IBM.Application:db2mnt-db2sd_20100701092048-rs:coralpib149
'- Online IBM.Application:db2mnt-db2sd_20100701092048-rs:coralpib150
Online IBM.ResourceGroup:idle_db2sdin1_997_coralpib147-rg Nominal=Online
'- Online IBM.Application:idle_db2sdin1_997_coralpib147-rs
'- Online IBM.Application:idle_db2sdin1_997_coralpib147-rs:coralpib147
Online IBM.ResourceGroup:idle_db2sdin1_997_coralpib148-rg Nominal=Online
'- Online IBM.Application:idle_db2sdin1_997_coralpib148-rs
'- Online IBM.Application:idle_db2sdin1_997_coralpib148-rs:coralpib148
Online IBM.ResourceGroup:idle_db2sdin1_998_coralpib147-rg Nominal=Online
'- Online IBM.Application:idle_db2sdin1_998_coralpib147-rs
'- Online IBM.Application:idle_db2sdin1_998_coralpib147-rs:coralpib147
Online IBM.ResourceGroup:idle_db2sdin1_998_coralpib148-rg Nominal=Online
'- Online IBM.Application:idle_db2sdin1_998_coralpib148-rs
'- Online IBM.Application:idle_db2sdin1_998_coralpib148-rs:coralpib148
Online IBM.ResourceGroup:idle_db2sdin1_999_coralpib147-rg Nominal=Online
'- Online IBM.Application:idle_db2sdin1_999_coralpib147-rs
'- Online IBM.Application:idle_db2sdin1_999_coralpib147-rs:coralpib147
Online IBM.ResourceGroup:idle_db2sdin1_999_coralpib148-rg Nominal=Online
'- Online IBM.Application:idle_db2sdin1_999_coralpib148-rs
'- Online IBM.Application:idle_db2sdin1_999_coralpib148-rs:coralpib148
Online IBM.ResourceGroup:primary_db2sdin1_900-rg Nominal=Online
'- Online IBM.Application:primary_db2sdin1_900-rs
|- Offline IBM.Application:primary_db2sdin1_900-rs:coralpib149
'- Online IBM.Application:primary_db2sdin1_900-rs:coralpib150
Online IBM.Equivalency:ca_db2sdin1_0-rg_group-equ
|- Online IBM.PeerNode:coralpib150:coralpib150
'- Online IBM.PeerNode:coralpib149:coralpib149
Online IBM.Equivalency:cacontrol_db2sdin1_equ
|- Online IBM.Application:cacontrol_db2sdin1_128_coralpib150:coralpib150
'- Online IBM.Application:cacontrol_db2sdin1_129_coralpib149:coralpib149
Online IBM.Equivalency:db2_db2sdin1_0-rg_group-equ
|- Online IBM.PeerNode:coralpib147:coralpib147
'- Online IBM.PeerNode:coralpib148:coralpib148
Online IBM.Equivalency:db2_db2sdin1_1-rg_group-equ
|- Online IBM.PeerNode:coralpib148:coralpib148
'- Online IBM.PeerNode:coralpib147:coralpib147
Online IBM.Equivalency:db2_private_network_db2sdin1_0
|- Online IBM.NetworkInterface:ib0:coralpib147
|- Online IBM.NetworkInterface:ib0:coralpib148
|- Online IBM.NetworkInterface:ib0:coralpib150
'- Online IBM.NetworkInterface:ib0:coralpib149
Online IBM.Equivalency:db2_public_network_db2sdin1_0
|- Online IBM.NetworkInterface:en0:coralpib147
'- Online IBM.NetworkInterface:en0:coralpib148
Online IBM.Equivalency:db2mnt-db2sd_20100701092048-rg_group-equ
|- Online IBM.PeerNode:coralpib150:coralpib150
|- Online IBM.PeerNode:coralpib149:coralpib149
|- Online IBM.PeerNode:coralpib148:coralpib148
'- Online IBM.PeerNode:coralpib147:coralpib147
Online IBM.Equivalency:idle_db2sdin1_997_coralpib147-rg_group-equ
'- Online IBM.PeerNode:coralpib147:coralpib147
Online IBM.Equivalency:idle_db2sdin1_997_coralpib148-rg_group-equ
'- Online IBM.PeerNode:coralpib148:coralpib148
Online IBM.Equivalency:idle_db2sdin1_998_coralpib147-rg_group-equ
'- Online IBM.PeerNode:coralpib147:coralpib147
Online IBM.Equivalency:idle_db2sdin1_998_coralpib148-rg_group-equ
'- Online IBM.PeerNode:coralpib148:coralpib148
Online IBM.Equivalency:idle_db2sdin1_999_coralpib147-rg_group-equ
'- Online IBM.PeerNode:coralpib147:coralpib147
Online IBM.Equivalency:idle_db2sdin1_999_coralpib148-rg_group-equ
'- Online IBM.PeerNode:coralpib148:coralpib148
Online IBM.Equivalency:instancehost_db2sdin1-equ
|- Online IBM.Application:instancehost_db2sdin1_coralpib150:coralpib150
|- Online IBM.Application:instancehost_db2sdin1_coralpib148:coralpib148
|- Online IBM.Application:instancehost_db2sdin1_coralpib147:coralpib147
'- Online IBM.Application:instancehost_db2sdin1_coralpib149:coralpib149
Online IBM.Equivalency:primary_db2sdin1_900-rg_group-equ
|- Online IBM.PeerNode:coralpib150:coralpib150
'- Online IBM.PeerNode:coralpib149:coralpib149
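A quick way to spot trouble in lssam output is to look for resource groups whose observed state differs from their nominal state. The following sketch uses a trimmed copy of the sample output above; in practice you would capture the output of lssam instead:

```shell
#!/bin/sh
# Sketch: flag any resource group whose observed state (column 1) is not
# Online while its nominal state is Online. The here-string is a trimmed
# stand-in for `lssam` output; on a real host, use: lssam_out=$(lssam)
lssam_out='Online IBM.ResourceGroup:db2_db2sdin1_0-rg Nominal=Online
Online IBM.ResourceGroup:db2mnt-db2sd_20100701092048-rg Nominal=Online
Online IBM.ResourceGroup:primary_db2sdin1_900-rg Nominal=Online'

bad=$(echo "$lssam_out" \
    | awk '/IBM.ResourceGroup/ && /Nominal=Online/ && $1 != "Online"' \
    | wc -l)
echo "resource groups not online: $bad"
```

A nonzero count warrants a closer look at the full lssam tree for the affected group.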
21. Validation of PEER DOMAIN:
Following command lists the TSA peer domain created as a part of instance creation.
# /opt/IBM/db2/V9.8/bin/db2cluster -list -cm -domain
Domain Name: db2domain_20100701091919
22. Validation of Hosts part of DOMAIN:
# /opt/IBM/db2/V9.8/bin/db2cluster -cm -list -host
HOSTNAME
------------------------
coralpib150
coralpib149
coralpib148
coralpib147
23. Validation of GPFS cluster:
# /opt/IBM/db2/V9.8/bin/db2cluster -cfs -list -domain
Domain Name: db2cluster_20100701091954.torolab.ibm.com
24. Validation of Hosts part of Cluster:
# /opt/IBM/db2/V9.8/bin/db2cluster -cfs -list -host
HOSTNAME
------------------------
coralpib147
coralpib148
coralpib149
coralpib150
25. Validation of filesystem:
# /opt/IBM/db2/V9.8/bin/db2cluster -cfs -list -filesystem
FILE SYSTEM NAME MOUNT_POINT
--------------------------------- -------------------------
db2fs1 /db2sd_20100701092048
26. Validation of PVID:
# lspv
hdisk16 00cc14d2002225f5 gpfs1nsd
hdisk17 00cc14d200222bfa None
27. Validation of tiebreaker
# /opt/IBM/db2/V9.8/bin/db2cluster -cm -list -tiebreaker
The current quorum device is of type Disk with the following specifics: PVID=00cc14d200222bfa.
28. Validation of host if it is in maintenance mode:
# /opt/IBM/db2/V9.8/bin/db2cluster -cm -verify -maintenance
Host 'coralpib147' is currently not in maintenance mode.
A diagnostic log has been saved to '/tmp/ibm.db2.cluster.-_3iEa'.
29. Validation of Filesystem configuration:
# /opt/IBM/db2/V9.8/bin/db2cluster -cfs -verify -configuration -filesystem db2fs1
File system 'db2fs1' is valid for usage with DB2.
30. Validation of /etc/services file:
Instance db2sdin1 has four FCM port entries (here, 60000 to 60003), two ports related to the CF (here, 56000 and 56001), and one client connection port, db2c_db2sdin1 (here, 50000):
DB2_db2sdin1 60000/tcp
DB2_db2sdin1_1 60001/tcp
DB2_db2sdin1_2 60002/tcp
DB2_db2sdin1_END 60003/tcp
DB2CF_db2sdin1 56000/tcp
DB2CF_db2sdin1_MGMT 56001/tcp
db2c_db2sdin1 50000/tcp
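The presence of the expected service entries can be checked with grep. The here-string below stands in for /etc/services; on a real host, grep the file directly. The service names follow the instance name, db2sdin1, used throughout this example:

```shell
#!/bin/sh
# Sketch: verify that the expected DB2 service entries exist for the
# instance db2sdin1. The here-string stands in for /etc/services and is
# copied from the sample above; on a real host, grep /etc/services.
services='DB2_db2sdin1 60000/tcp
DB2_db2sdin1_1 60001/tcp
DB2_db2sdin1_2 60002/tcp
DB2_db2sdin1_END 60003/tcp
DB2CF_db2sdin1 56000/tcp
DB2CF_db2sdin1_MGMT 56001/tcp
db2c_db2sdin1 50000/tcp'

missing=0
for svc in DB2_db2sdin1 DB2_db2sdin1_END DB2CF_db2sdin1 \
           DB2CF_db2sdin1_MGMT db2c_db2sdin1; do
    # Anchor the match so DB2_db2sdin1 does not also match DB2_db2sdin1_1.
    echo "$services" | grep -q "^$svc " \
        || { echo "missing service entry: $svc"; missing=$((missing + 1)); }
done
echo "missing entries: $missing"
```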
31. Validation of all the hosts involved in pureScale setup:
The current states of the members and CFs are always critical. Use the following command to retrieve them; sample output is listed below.
In the output, all members must be in the STARTED state; one CF should be PRIMARY and the other in the CATCHUP state.
There should not be any alerts; in other words, the ALERT column should show "NO".
All the hosts that are part of the cluster must be ACTIVE.
# /opt/IBM/db2/V9.8/bin/db2instance -list
(Run this as the instance user; if run as root, provide the instance name, for example -instance db2sdin1.)
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
0 MEMBER STARTED coralpib147 coralpib147 NO 0 0 coralpib147-ib0
1 MEMBER STARTED coralpib148 coralpib148 NO 0 0 coralpib148-ib0
128 CF PRIMARY coralpib150 coralpib150 NO - 0 coralpib150-ib0
129 CF CATCHUP coralpib149 coralpib149 NO - 0 coralpib149-ib0
HOSTNAME STATE INSTANCE_STOPPED ALERT
-------- ----- ---------------- -----
coralpib149 ACTIVE NO NO
coralpib150 ACTIVE NO NO
coralpib148 ACTIVE NO NO
coralpib147 ACTIVE NO NO
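The health rules above (members STARTED, exactly one PRIMARY CF, no alerts) can be checked mechanically. The following sketch parses a copy of the sample member/CF rows above; the column positions are assumptions based on that sample:

```shell
#!/bin/sh
# Sketch: check a db2instance -list member/CF section. Every MEMBER must
# be STARTED, exactly one CF must be PRIMARY, and the ALERT column
# (field 6) must be NO on every row. The here-string reproduces the
# sample data rows above; on a real host, capture db2instance -list.
listing='0 MEMBER STARTED coralpib147 coralpib147 NO 0 0 coralpib147-ib0
1 MEMBER STARTED coralpib148 coralpib148 NO 0 0 coralpib148-ib0
128 CF PRIMARY coralpib150 coralpib150 NO - 0 coralpib150-ib0
129 CF CATCHUP coralpib149 coralpib149 NO - 0 coralpib149-ib0'

bad_members=$(echo "$listing" | awk '$2 == "MEMBER" && $3 != "STARTED"' | wc -l)
primaries=$(echo "$listing" | awk '$2 == "CF" && $3 == "PRIMARY"' | wc -l)
alerts=$(echo "$listing" | awk '$6 != "NO"' | wc -l)
echo "bad_members=$bad_members primaries=$primaries alerts=$alerts"
```

A healthy instance reports zero bad members, exactly one primary CF, and zero alerts.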
32. Validation of member hosts:
Ensure that the STATE of all the members is STARTED and that the ALERT column shows "NO". Here is the sample output:
# /opt/IBM/db2/V9.8/bin/db2instance -list -member -instance db2sdin1
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
0 MEMBER STARTED coralpib147 coralpib147 NO 0 0 coralpib147-ib0
1 MEMBER STARTED coralpib148 coralpib148 NO 0 0 coralpib148-ib0
33. Validation of CF hosts:
Ensure that one CF is in the PRIMARY state and the other in the CATCHUP state, and that neither has any alerts; in other words, the ALERT column of the following output should show "NO":
#/opt/IBM/db2/V9.8/bin/db2instance -list -cf -instance db2sdin1
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
128 CF PRIMARY coralpib150 coralpib150 NO - 0 coralpib150-ib0
129 CF CATCHUP coralpib149 coralpib149 NO - 0 coralpib149-ib0
34. Host by host validation:
Two samples are given below: one for a member and one for the secondary CF (which is in the CATCHUP state).
# /opt/IBM/db2/V9.8/bin/db2instance -list -host coralpib147 -instance db2sdin1
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
0 MEMBER STARTED coralpib147 coralpib147 NO 0 0 coralpib147-ib0
HOSTNAME STATE INSTANCE_STOPPED ALERT
-------- ----- ---------------- -----
coralpib147 ACTIVE NO NO
#/opt/IBM/db2/V9.8/bin/db2instance -list -host coralpib149 -instance db2sdin1
ID TYPE STATE HOME_HOST CURRENT_HOST ALERT PARTITION_NUMBER LOGICAL_PORT NETNAME
-- ---- ----- --------- ------------ ----- ---------------- ------------ -------
129 CF CATCHUP coralpib149 coralpib149 NO - 0 coralpib149-ib0
HOSTNAME STATE INSTANCE_STOPPED ALERT
-------- ----- ---------------- -----
coralpib149 ACTIVE NO NO
35. GPFS and TSAMP level validation:
#/opt/IBM/db2/V9.8/install/gpfs/db2ckgpfs -v install
3.3.0.5
#/opt/IBM/db2/V9.8/install/tsamp/db2cktsa -v install
3.1.5.7
36. Validation of Alerts:
#/opt/IBM/db2/V9.8/bin/db2cluster -cm -list -alert -instance db2sdin1
There are no alerts
37. Validation of "SAMPLE" database creation:
Switch to the instance user on any member of the pureScale instance and run:
$ db2sampl
Creating database "SAMPLE"...
Connecting to database "SAMPLE"...
Creating tables and data in schema "DB2SDIN1"...
'db2sampl' processing complete.
$ db2 list db directory
System Database Directory
Number of entries in the directory = 1
Database 1 entry:
Database alias = SAMPLE
Database name = SAMPLE
Local database directory = /db2sd_20100701092048/db2sdin1
Database release level = e.00
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =
$ db2 connect to sample
Database Connection Information
Database server = DB2/AIX64 9.8.2
SQL authorization ID = DB2SDIN1
Local database alias = SAMPLE
38. Validate whether a non-pureScale database can be converted to a DB2 pureScale environment.
(For database upgrade validation use db2ckupgrade; db2checkSD is shown here.)
Sample output is given below. Run the command as the instance user:
$ /opt/IBM/db2/V9.8/bin/db2checkSD SAMPLE -l /tmp/check.log
DBT5000I The db2checkSD utility completed successfully. The specified database can be upgraded to a DB2 pureScale environment. The output log file is named "/tmp/check.log".
Log of db2checkSD:
Version of DB2CHECKSD being run: VERSION 9.8.
Database: SAMPLE
DBT5000I The db2checkSD utility completed successfully. The specified database can be upgraded to a DB2 pureScale environment. The output log file is named "/tmp/check.log".
39. The following are a few GPFS/TSA native commands to validate the GPFS cluster and peer domain:
#/usr/lpp/mmfs/bin/mmlscluster
GPFS cluster information
========================
GPFS cluster name: db2cluster_20100701091954.torolab.ibm.com
GPFS cluster id: 656037675713467899
GPFS UID domain: db2cluster_20100701091954.torolab.ibm.com
Remote shell command: /usr/bin/ssh
Remote file copy command: /usr/bin/scp
GPFS cluster configuration servers:
-----------------------------------
Primary server: coralpib147.torolab.ibm.com
Secondary server: coralpib148.torolab.ibm.com
Node Daemon node name IP address Admin node name Designation
-----------------------------------------------------------------------------------------------
1 coralpib147.torolab.ibm.com 9.26.182.202 coralpib147.torolab.ibm.com quorum-manager
2 coralpib148.torolab.ibm.com 9.26.182.203 coralpib148.torolab.ibm.com quorum-manager
3 coralpib149.torolab.ibm.com 9.26.182.204 coralpib149.torolab.ibm.com quorum-manager
4 coralpib150.torolab.ibm.com 9.26.182.205 coralpib150.torolab.ibm.com quorum-manager
# lsrpdomain
Name OpState RSCTActiveVersion MixedVersions TSPort GSPort
db2domain_20100701091919 Online 2.5.5.2 No 12347 12348
# lsrpnode
Name OpState RSCTVersion
coralpib150 Online 2.5.5.2
coralpib149 Online 2.5.5.2
coralpib148 Online 2.5.5.2
coralpib147 Online 2.5.5.2
# /usr/lpp/mmfs/bin/mmlsnsd
File system Disk name NSD servers
---------------------------------------------------------------------------
db2fs1 gpfs1nsd (directly attached)
#/usr/lpp/mmfs/bin/mmgetstate
Node number Node name GPFS state
------------------------------------------
1 coralpib147 active
#/usr/lpp/mmfs/bin/mmgetstate -a
Node number Node name GPFS state
------------------------------------------
1 coralpib147 active
2 coralpib148 active
3 coralpib149 active
4 coralpib150 active
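To confirm programmatically that every node is active, the data rows of mmgetstate -a can be filtered. This sketch uses a copy of the sample rows above; on a real host, capture the command output and skip the header lines:

```shell
#!/bin/sh
# Sketch: confirm every node in mmgetstate -a output reports "active".
# The here-string reproduces the sample data rows above (header omitted).
state_rows='1 coralpib147 active
2 coralpib148 active
3 coralpib149 active
4 coralpib150 active'

not_active=$(echo "$state_rows" | awk '$3 != "active"' | wc -l)
echo "nodes not active: $not_active"
```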
The following command confirms that the file system is laid out on the physical disk (for example, /dev/hdisk16); that is, it lists the physical disks across which the file system extends.
#/opt/IBM/db2/V9.8/bin/db2cluster -list -filesystem db2fs1 -disk
PATH ON LOCAL HOST OTHER KNOWN PATHS
--------------------------------- -------------------------
(*) /dev/hdisk16
40. Validation of file system configuration:
# /opt/IBM/db2/V9.8/bin/db2cluster -cfs -list -filesystem db2fs1 -configuration
db2fs1 options.
OPTION VALUE
minFragmentSize 32768
inodeSize 512
indirectBlockSize 32768
defaultMetadataReplicas 1
maxMetadataReplicas 2
defaultDataReplicas 1
maxDataReplicas 2
blockAllocationType cluster
fileLockingSemantics nfs4
ACLSemantics all
estimatedAverageFilesize 1048576
numNodes 255
blockSize 1048576
quotasEnforced none
defaultQuotasEnabled none
maxNumberOfInodes 98304
filesystemVersion 11.05 (3.3.0.2)
filesystemVersionLocal 11.05 (3.3.0.2)
filesystemVersionManager 11.05 (3.3.0.2)
filesystemVersionOriginal 11.05 (3.3.0.2)
filesystemHighestSupported 11.05 (3.3.0.2)
supportForLargeLUNs yes
DMAPIEnabled no
logfileSize 4194304
exactMtime yes
suppressAtime no
strictReplication whenpossible
storagePools system
disks gpfs1nsd
UID 091AB6CA
Maximum Snapshot Id 0
automaticMountOption yes
additionalMountOptions none
defaultMountPoint /db2sd_20100701092048
Document Information
Modified date:
16 June 2018
UID
swg21442415