Before you begin
Note: IBM support for
a geographically dispersed DB2 pureScale cluster (GDPC) implementation
requires engagement of IBM Lab
Services for separately charged initial installation services. Contact
your IBM sales
representative for details.
Ensure that these prerequisites are met:
- Three sites that can communicate with each other through reliable TCP/IP
links.
- All DB2 pureScale Feature installation
prerequisites have been satisfied across all hosts to be used in the
cluster.
- Two sites are connected via a WAN or dark fiber with distance
range extenders, with a single high-speed interconnect subnet configured
across the sites.
- The two sites each have a local SAN controller, and the SAN is
zoned such that LUNs used for the DB2 pureScale instance
are directly accessible from both sites. A one-to-one mapping between
LUNs is required across sites, so that each LUN on the first site has a
corresponding, equally sized LUN on the second site.
Storage
requirements on the three sites (Site A, B, and C):
- Site A and B:
- Provision equal sized LUNs on each site.
- All LUNs are accessible by all hosts in both sites.
- User data is replicated in these LUNs.
- Site C:
- The storage requirement is to assign a device (in /dev)
to each shared file system in the cluster. In the following example,
where three shared file systems are to be created, three
separate devices in /dev are required on host T.
- No user data is replicated to the storage devices on host T. These
devices are used only to store replication metadata for recovery and
to maintain file system consistency. As such, the size requirement
for these devices is minimal (at least 50 MB).
- These devices do not need to be provisioned from the same SAN
as in sites A and B. To keep the tiebreaker site as isolated as possible,
use physical storage that is local to the host.
- On AIX® operating systems,
logical volumes can be used instead of hdisk devices,
with the following guidelines:
- Use as many logical volumes as there are shared
file systems in the cluster (each logical volume must be at least 50 MB).
- Create the logical volumes within the same volume group.
- Assign at least one physical hdisk to the
volume group. The actual number depends on the number of logical volumes
that are required and the number of empty slots in the host for physical
hard disks. If possible, use two physical volumes for redundancy.
- If more than one physical volume is assigned, disable the quorum check
on the volume group.
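The logical volume guidelines above can be sketched with standard AIX LVM commands. This is a minimal sketch only: the volume group name (tbvg), logical volume names (tblv1 through tblv3), and hdisk numbers are hypothetical examples, and the logical partition count must be checked against your volume group's physical partition size.

```shell
# Sketch of tiebreaker device setup on AIX (all names are examples).
# Create a volume group over two local physical disks (two for redundancy).
mkvg -y tbvg hdisk1 hdisk2

# More than one physical volume is assigned, so disable the quorum check.
chvg -Q n tbvg

# Create one logical volume per shared file system in the cluster
# (three here: instance, log, and data file systems). The final argument
# is the number of logical partitions; one partition is usually at least
# 50 MB with the default partition size, but verify with: lsvg tbvg
mklv -y tblv1 tbvg 1
mklv -y tblv2 tbvg 1
mklv -y tblv3 tbvg 1
```

After creation, the devices appear as /dev/tblv1, /dev/tblv2, and /dev/tblv3 and can be assigned to the shared file systems.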
About this task
For this example, the following hardware configuration
is used. There are three sites:
- Site A: Hosts hostA1, hostA2, hostA3
- Site B: Hosts hostB1, hostB2, hostB3
- Site C: Host T
Equal sized LUNs have been provisioned on
storage at two sites (sites A and B), and all LUNs are accessible by all hosts at
both sites.
On site A,
/dev/hdiskA1 is used
for the instance shared file system;
/dev/hdiskA2 is
used for the database log file system; and
/dev/hdiskA3,
/dev/hdiskA4,
/dev/hdiskA5,
/dev/hdiskA6,
and
/dev/hdiskA7 are used for the database data file
system.
On site B,
/dev/hdiskB1 is used
for the instance shared file system;
/dev/hdiskB2 is
used for the database log file system; and
/dev/hdiskB3,
/dev/hdiskB4,
/dev/hdiskB5,
/dev/hdiskB6,
and
/dev/hdiskB7 are used for the database data file
system.
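Because every LUN must be directly accessible from every host at both sites, it can help to verify visibility before creating the instance. The following sketch loops over the example hosts and devices from this topic; it assumes passwordless ssh as a privileged user, and the host and device names must be adjusted for your environment.

```shell
# Sketch: check from one host that each shared LUN is visible on every
# host at sites A and B. Host names and hdisk names are from this
# example topic; adjust both lists for your environment.
for h in hostA1 hostA2 hostA3 hostB1 hostB2 hostB3; do
  for d in hdiskA1 hdiskA2 hdiskA3 hdiskA4 hdiskA5 hdiskA6 hdiskA7 \
           hdiskB1 hdiskB2 hdiskB3 hdiskB4 hdiskB5 hdiskB6 hdiskB7; do
    # Report any host that cannot see a device.
    ssh "$h" "ls /dev/$d" >/dev/null 2>&1 || echo "MISSING: $d on $h"
  done
done
```

No output means every device is visible from every host.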
In this scenario, the geographically dispersed
DB2 pureScale cluster
(GDPC) is set up as follows:
- Database MYDB is to be created on instance db2inst1.
- db2inst1 has three file systems:
- logfs for transaction logs and database metadata
for MYDB.
- datafs for database containers for MYDB.
- db2fs1 as the shared file system for the instance.
The command syntax in the examples uses this
format:
uid@host> command
where
uid is
the user ID that runs the command,
host is
the host on which the command is run, and
command is
the command to run.
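For example, a step that is run as the instance owner db2inst1 on hostA1 would appear as follows (the db2start command here only illustrates the notation):

```
db2inst1@hostA1> db2start
```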
What to do next
After you have configured your GDPC environment, validate the GDPC through testing.