DB2 10.5 for Linux, UNIX, and Windows

Components of the DB2 pureScale Feature

The IBM® DB2® pureScale® Feature combines several tightly integrated software components in a highly available database solution. These software components are installed and configured automatically when you deploy the DB2 pureScale Feature.

Figure 1. A view of the major components in a DB2 pureScale environment, shown with DB2 clients connected to the data server. DB2 members are processing database requests, and cluster caching facilities (CFs) provide required infrastructure services. Data is stored on shared disk storage, accessible to all members.

The following sections provide an overview of the key components of a DB2 pureScale environment.

DB2 members

When a DB2 client connects to a database, the connection is routed to a member, which then processes the request. The workload of members is balanced automatically by directing requests from DB2 clients to the member with the lowest workload. The granularity of this balancing depends on whether you use connection-level workload balancing, where a member is chosen when the connection is established, or transaction-level workload balancing, where the choice is re-evaluated at transaction boundaries. All members read from and write to the same database on shared disk; the full set of data is shared among them. Each member runs its own db2sysc process and threads, and each member includes its own buffer pools, memory regions, and log files.
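The routing behavior described above can be illustrated with a minimal sketch. The member names, load values, and the route helper are invented for illustration; real balancing is driven by server load information that the data server returns to DB2 clients.

```python
# Illustrative sketch of workload balancing across DB2 members.
# Member names and load numbers are hypothetical.

def route(members):
    """Route a request to the member with the lowest current workload."""
    return min(members, key=members.get)

members = {"member0": 12, "member1": 5, "member2": 9}

# Connection-level balancing: the member is chosen once, when the
# connection is established, and reused for every transaction.
conn_member = route(members)

# Transaction-level balancing: the choice is re-evaluated at each
# transaction boundary, so later transactions can move to a member
# whose load has dropped in the meantime.
members["member0"] = 2          # member0's load drops
txn_member = route(members)

print(conn_member)  # member1
print(txn_member)   # member0
```

The difference is only how often the routing decision is made: once per connection, or once per transaction.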

The recommended configuration is one member per host. A host can be either a computer or a logical partition (LPAR). To take advantage of the design for a continuously available environment and to help provide optimum performance, create a minimum of two DB2 members, each on its own computer. The DB2 pureScale Feature supports up to 128 members. Although all your DB2 members might initially use identical hardware specifications, hardware homogeneity is not required. Host computers that you add as you scale your instance can have different specifications.

You should not use DB2 member hosts for any other purpose.

Cluster caching facility (CF)

The DB2 pureScale Feature includes a cluster caching facility, also known as the CF component in a DB2 pureScale environment. This facility is used to coordinate locking through a global lock manager to prevent conflicting access to the same table data by different members. The cluster caching facility is also used to keep page caching consistent across all members through a shared group buffer pool. The group buffer pool coordinates copies of pages that might exist across the (local) buffer pools of members.
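The page-coherence role of the group buffer pool can be sketched as follows. The classes and method names here are invented for illustration and are not DB2 interfaces; the sketch only shows the idea that the CF tracks which members hold a copy of a page and invalidates stale copies when one member writes a new version.

```python
# Minimal sketch of group-buffer-pool page coherence. All names
# are illustrative, not DB2 internals.

class ClusterCachingFacility:
    def __init__(self):
        self.registrations = {}   # page id -> set of member names

    def register(self, page, member):
        self.registrations.setdefault(page, set()).add(member)

    def page_written(self, page, writer, members):
        # Cross-invalidate every other registered copy of the page.
        for name in self.registrations.get(page, set()) - {writer}:
            members[name].local_pages.discard(page)
        self.registrations[page] = {writer}

class Member:
    def __init__(self, name, cf):
        self.name, self.cf = name, cf
        self.local_pages = set()  # pages cached in the local buffer pool

    def read_page(self, page):
        self.local_pages.add(page)
        self.cf.register(page, self.name)

    def write_page(self, page, members):
        self.local_pages.add(page)
        self.cf.page_written(page, self.name, members)

cf = ClusterCachingFacility()
members = {n: Member(n, cf) for n in ("m0", "m1")}
members["m0"].read_page("P1")
members["m1"].read_page("P1")
members["m1"].write_page("P1", members)
print("P1" in members["m0"].local_pages)  # False: m0's copy was invalidated
```

In this way no member can continue working from a stale copy of a page that another member has changed.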

The cluster caching facility also provides a shared communication area (SCA). Members can use this shared communication area to emulate cluster-wide shared memory.

At least one cluster caching facility must be online for a database to be available while DB2 members are online. To take advantage of the design for a continuously available environment, use multiple cluster caching facilities. Duplexing both metadata and database data to a secondary cluster caching facility keeps the secondary, while it is active, in peer state with the primary CF. If the primary CF fails, a secondary CF can take over to maintain database availability.
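The duplexing and takeover behavior can be sketched in a few lines. This is a toy model under illustrative assumptions: every update is applied to both CFs while the secondary is in peer state, so the secondary's contents match the primary's and it can take over immediately.

```python
# Toy sketch of CF duplexing and failover. All names are illustrative,
# not DB2 internals.

class CF:
    def __init__(self, name):
        self.name, self.state, self.online = name, {}, True

def duplex_write(primary, secondary, key, value):
    # While the secondary is in peer state, every update is applied
    # to both CFs, keeping their contents identical.
    for cf in (primary, secondary):
        cf.state[key] = value

def active_cf(primary, secondary):
    # Failover: the secondary takes over when the primary goes offline.
    return primary if primary.online else secondary

primary, secondary = CF("cf-primary"), CF("cf-secondary")
duplex_write(primary, secondary, "lock:T1", "member0")
primary.online = False
print(active_cf(primary, secondary).state["lock:T1"])  # member0
```

Because the secondary already holds the duplexed data, takeover does not require rebuilding state from the members.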

CFs can run on their own computers or they can share hosts with members by running on their own logical partitions (LPARs). You should not use the cluster caching facility hosts for anything other than the DB2 pureScale Feature. If you must run other software on the cluster caching facility hosts, additional manual tuning of your database configuration might be required.

DB2 cluster services

DB2 cluster services is software that provides automatic heartbeat failure detection and automatically initiates the necessary recovery operations after a failure is detected. It also provides the cluster file system that gives each host in a DB2 pureScale instance access to a common file system. DB2 cluster services includes technology from IBM Tivoli® System Automation for Multiplatforms (Tivoli SA MP) software, IBM Reliable Scalable Clustering Technology (RSCT) software, and IBM General Parallel File System (GPFS™) software. This technology is packaged as an integral part of the DB2 pureScale Feature.

If a component in your DB2 pureScale environment fails to respond to the heartbeat detection protocol, DB2 cluster services alerts the members and cluster caching facilities, fences the failed component from shared storage (if necessary) and initiates a component restart. This restart process is designed to be automatic and does not require your intervention. While the recovery of the failed component is underway, the rest of the instance remains available and can continue to process incoming database requests. Applications that are connected to a member that fails will be automatically rerouted to other members, via automatic DB2 client reroute support.
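The detection-and-reroute flow described above can be sketched as a toy heartbeat monitor. The threshold, host names, and reroute step are simplified illustrations, not DB2 cluster services internals; real detection, fencing, and restart are handled automatically by the product.

```python
# Toy sketch of heartbeat failure detection and client reroute.
# Thresholds and names are hypothetical.

HEARTBEAT_TIMEOUT = 3  # missed beats before a host is declared failed

def detect_failures(missed_beats):
    """Return hosts whose missed-heartbeat count reaches the timeout."""
    return {h for h, n in missed_beats.items() if n >= HEARTBEAT_TIMEOUT}

def reroute(connections, failed, survivors):
    """Move client connections off failed members onto surviving members."""
    survivors = sorted(survivors)
    return {
        client: (survivors[i % len(survivors)] if member in failed else member)
        for i, (client, member) in enumerate(sorted(connections.items()))
    }

missed = {"member0": 0, "member1": 4, "cf-primary": 1}
failed = detect_failures(missed)          # only member1 has timed out
conns = {"app1": "member0", "app2": "member1"}
print(reroute(conns, failed, {"member0"}))
# {'app1': 'member0', 'app2': 'member0'}: app2 is rerouted, app1 stays put
```

While the failed member is restarted, the surviving members continue to process the rerouted work.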

The installation process of the DB2 pureScale Feature uses the integrated IBM General Parallel File System software to create the DB2 cluster file system on the shared disk.

Shared disk storage

The disk storage that you use to set up the instance is shared among all components in the DB2 pureScale environment. The disk storage is used for the following purposes:

Network connectivity

In a DB2 pureScale environment, these types of networks are used: