How do you set up HADR in a DB2 pureScale environment?
HADR is supported in a DB2 pureScale environment starting in DB2 V10.5. The following sample environment is used to demonstrate HADR setup on pureScale:
Primary site (site A): pureScale cluster hosted on machines p0 to p3, one member per machine.
Standby site (site B): pureScale cluster hosted on machines s0 to s3, one member per machine.
On all hosts, the DB2 instance owner is "db2inst".
On all hosts, TCP port 4000 is used for HADR primary/standby communication.
On all hosts, TCP port 8000 is used for SQL client/server communication.
The database name is myDb.
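The two ports are typically registered in /etc/services on every host, although numeric port values can also be used directly in the DB2 configuration. A sketch, with illustrative service names:

```
db2_hadr_myDb   4000/tcp   # HADR primary/standby log shipping
db2c_db2inst    8000/tcp   # SQL client/server communication (SVCENAME)
```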
Setting up HADR:
1) An online backup (no downtime needed) is taken on the primary database. The image is transported to the standby site and restored there.
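A minimal sketch of this step, assuming /backup is an illustrative path available on both sites:

```shell
# On the primary (site A): online backup, no downtime
db2 "backup db myDb online to /backup"

# Transport the image to site B, then on standby member 0:
db2 "restore db myDb from /backup"
```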
2) Configure the primary database:
Set at the cluster level:
HADR_TARGET_LIST  {s0:4000|s1:4000|s2:4000|s3:4000}
HADR_REMOTE_HOST  {s0:4000|s1:4000|s2:4000|s3:4000}
HADR_REMOTE_INST  db2inst
Set for each member (p0 is shown below; use p1, p2, p3 on the other members):
HADR_LOCAL_HOST  p0
HADR_LOCAL_SVC   4000
The following remains at its default:
HADR_REMOTE_SVC  NULL
Notes: HADR_TARGET_LIST need not list all 4 remote members, but for a small cluster like this it is recommended to list all of them. Setting HADR_REMOTE_HOST and HADR_REMOTE_INST is recommended but optional; if not set, they will be auto-configured after HADR starts. Because there is only one standby, HADR_TARGET_LIST and HADR_REMOTE_HOST are set to the same value. HADR_REMOTE_SVC remains NULL.
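The cluster-level parameters can be set from any member with UPDATE DB CFG; the brace-and-pipe value groups all members of the remote pureScale cluster into a single target (quote it to protect the special characters from the shell):

```shell
db2 update db cfg for myDb using HADR_TARGET_LIST "{s0:4000|s1:4000|s2:4000|s3:4000}"
db2 update db cfg for myDb using HADR_REMOTE_HOST "{s0:4000|s1:4000|s2:4000|s3:4000}"
db2 update db cfg for myDb using HADR_REMOTE_INST db2inst
```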
The following command could be used to set HADR_LOCAL_HOST on all members:
db2_all 'db2 update db cfg for myDb member $DB2NODE using hadr_local_host `hostname`'
The above command assumes that p0-p3 are the canonical host names, and therefore what the "hostname" command returns. If this is not the case, you can use a custom script that returns the host name or IP address used for HADR log shipping. Since HADR_LOCAL_SVC is the same on all members, the single command below sets it for all members:
db2 update db cfg for myDb using hadr_local_svc 4000
3) Configure the standby database:
Set at the cluster level:
HADR_TARGET_LIST  {p0:4000|p1:4000|p2:4000|p3:4000}
HADR_REMOTE_HOST  {p0:4000|p1:4000|p2:4000|p3:4000}
HADR_REMOTE_INST  db2inst
Set for each member (s0 is shown below; use s1, s2, s3 on the other members):
HADR_LOCAL_HOST  s0
HADR_LOCAL_SVC   4000
The following remains at its default:
HADR_REMOTE_SVC  NULL
See the notes in the "Configure the primary database" section above.
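Assuming s0-s3 are the canonical host names on site B, the standby can be configured with the same command pattern as the primary:

```shell
db2 update db cfg for myDb using HADR_TARGET_LIST "{p0:4000|p1:4000|p2:4000|p3:4000}"
db2 update db cfg for myDb using HADR_REMOTE_HOST "{p0:4000|p1:4000|p2:4000|p3:4000}"
db2 update db cfg for myDb using HADR_REMOTE_INST db2inst
db2_all 'db2 update db cfg for myDb member $DB2NODE using hadr_local_host `hostname`'
db2 update db cfg for myDb using hadr_local_svc 4000
```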
4) Execute the following command on standby member 0 (to designate member 0 as the preferred replay member):
start hadr on db myDb as standby
5) Execute the following command on primary member 0 (to designate member 0 as the preferred replay member, although the designation won't be used until the db becomes a standby):
start hadr on db myDb as primary
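From an operating-system shell, the two START HADR statements above are issued through the db2 command-line processor; the standby is started first so it is ready to accept the primary's connection:

```shell
# On standby member 0 (site B) -- start the standby first
db2 "start hadr on db myDb as standby"

# Then on primary member 0 (site A)
db2 "start hadr on db myDb as primary"
```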
6) Monitor HADR from primary. Sample monitoring command and output:
select LOG_STREAM_ID, PRIMARY_MEMBER, STANDBY_MEMBER, HADR_STATE
from table (mon_get_hadr(-2))
LOG_STREAM_ID PRIMARY_MEMBER STANDBY_MEMBER HADR_STATE
------------- -------------- -------------- ---------------
            0              0              0 REMOTE_CATCHUP
            1              1              0 REMOTE_CATCHUP
            2              2              0 REMOTE_CATCHUP
            3              3              0 REMOTE_CATCHUP
STANDBY_MEMBER is 0 on all streams because all primary members connect to the standby replay member, which in this case is member 0. The streams are in REMOTE_CATCHUP state: logs generated since the backup are being shipped. Once shipping reaches the end of the logs, the streams switch to PEER state. Once in PEER state, the primary is fully protected and the standby is a real-time copy of the primary.
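To verify that protection is in place, a variation of the query above can list only the log streams that have not yet reached PEER state; it returns no rows once the primary is fully protected:

```sql
select LOG_STREAM_ID, HADR_STATE, HADR_CONNECT_STATUS
  from table (mon_get_hadr(-2))
  where HADR_STATE <> 'PEER'
```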
"db2pd -db <db name> -hadr" can also be used to monitor HADR. Note that db2pd reports only the streams processed on the local member; on the primary, the "-allmembers" option must therefore be used to see all streams.
For the example above:
db2pd -hadr -db myDb -allmembers
On the standby, if the current replay member is not known, execute "db2pd -hadr -db myDb -allmembers". In the output, the non-replay members will display "Database myDb not activated on database member X", while the replay member will display HADR information. Alternatively, the standby replay member can be determined from the STANDBY_MEMBER field of the primary's "db2pd -db <dbname> -hadr" or MON_GET_HADR table function output.