DB2 Version 10.1 for Linux, UNIX, and Windows

Buffer pools in a DB2 pureScale environment

In a DB2® pureScale® environment, the cluster caching facility provides a common group buffer pool (GBP) that is shared by all members. Each member also manages its own set of local buffer pools (LBPs).

The GBP is a single buffer pool that supports all DB2 page sizes and that all members can use. Members cache pages in their own LBPs and use the GBP to maintain page consistency between members. LBPs of different page sizes (such as 4 KB, 8 KB, 16 KB, or 32 KB) can exist on each member.

The GBP stores two types of information: directory entries and data elements. Directory entries store metadata about buffer pool pages, and data elements store page data. DB2 for Linux, UNIX, and Windows automatically adjusts the ratio between directory entries and data elements. The GBP memory size is defined by the cf_gbp_sz configuration parameter.

DB2 buffer pool service

Because each member has its own LBPs and all members share the GBP, copies of the same page can exist in more than one buffer pool. The DB2 buffer pool service handles global concurrency and coherency control for accessing pages, making changes to a page, and propagating changes to other members. The service also handles the I/O of data in buffer pools, including the writing of pages in the GBP to disk.

GBP-dependency

A buffer pool page that must be accessible to different members in a DB2 pureScale environment is GBP-dependent. For these pages, the GBP coordinates the copies of the page that exist in different buffer pools. A page that only one member has access to is not GBP-dependent and exists only in that member's LBP. In a DB2 pureScale environment, temporary table spaces are not shared between members, so buffer pool pages for temporary table spaces are never GBP-dependent.

P-locks control access to buffer pool pages in a DB2 pureScale environment for updating a page and for reading a particular version of a page. Unlike a logical lock (such as a row lock or a table lock), which is owned by a particular transaction, the P-locks that control access to buffer pool pages are owned by members of the cluster. The following P-locks are used:

- To make changes to a page, an exclusive P-lock is required.
- To read the latest version of a page, a shared P-lock is required.
- To read a consistent, but not necessarily the latest, version of a page, no P-lock is required.

DB2 for Linux, UNIX, and Windows decides internally which type of read is used when accessing a page.

GBP-coherency

When a buffer pool page is GBP-dependent, it might exist on disk, in the GBP, in the LBPs of multiple members, or in a combination of these. The following protocol rules coordinate the coherency of multiple page copies:

- When a GBP-dependent page is changed, the new version of the page is written to the GBP no later than transaction commit time.
- When a new version of a page is written to the GBP, any copies of that page cached in the LBPs of other members are marked invalid (cross-invalidation).
- Before a member uses a page cached in its LBP, the validity of that copy is checked; an invalid copy is refreshed from the GBP or, if the page is no longer in the GBP, from disk.

GBP control

The total amount of memory that is used by the GBP is controlled by the cf_gbp_sz database configuration parameter. The GBP is allocated when the database is first activated on a member, if the GBP does not already exist on the CF. The GBP is deallocated when the CF is stopped, when the database is dropped or consistently shut down across the entire cluster, or during a database restore operation.
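The GBP size is set like any other database configuration parameter. The following is a hedged sketch: the database name MYDB and the size value are illustrative, and cf_gbp_sz is specified in 4 KB pages.

```sql
-- Illustrative only: MYDB and the size value are assumptions.
-- 1048576 pages x 4 KB = 4 GB of GBP memory.
UPDATE DATABASE CONFIGURATION FOR MYDB USING CF_GBP_SZ 1048576;
```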

Castout writes pages from the GBP to disk and is coordinated between members. Castout is similar to page cleaning in LBPs and fulfills two functions:

- It writes changed pages to disk so that the GBP space they occupy can be reclaimed for new pages.
- It limits the amount of changed data that exists only in the GBP, which reduces the time that is needed for group crash recovery.

If necessary, you can control castout behavior with the softmax database configuration parameter. In a DB2 pureScale environment, this parameter determines how many pages must be cast out from the GBP to disk during each work phase.
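As a hedged sketch of tuning this behavior (the database name and the value are illustrative; softmax is expressed as a percentage of the size of one log file):

```sql
-- Illustrative only: MYDB and the value 520 are assumptions.
UPDATE DATABASE CONFIGURATION FOR MYDB USING SOFTMAX 520;
```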

LBP control

Local buffer pool configuration is controlled through DDL statements. The member through which a DDL statement is issued acts as the coordinator: it distributes the statement to the other members in the cluster for local execution and coordinates the overall execution. The execution of an LBP DDL statement in a DB2 pureScale instance differs notably from execution in an instance that is not in a DB2 pureScale environment. In a DB2 pureScale environment, the members (other than the coordinator) on which the LBP is defined do not all have to be available, or have the database activated, for a DDL statement to succeed. Members that are currently unavailable (for example, because of scheduled maintenance) or that do not have the database activated do not process the DDL statement, and the DDL execution continues normally. When the transaction is committed, the coordinator updates the on-disk buffer pool files with the buffer pool changes for all applicable members, including the ones that did not process the statement. Those members apply the committed buffer pool changes the next time they activate the database. However, if any active member fails to run the DDL statement, because of an out-of-memory condition or another error, the statement is rolled back and an error is returned to the user or client application.
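A minimal sketch of such a DDL statement follows; the buffer pool name, the sizes, and the member number are all illustrative, and a MEMBER exception clause is assumed to give one member a different size:

```sql
-- Illustrative only: APP_BP, the sizes, and the member number
-- are assumptions. Creates an 8 KB buffer pool on all members,
-- with a larger allocation on member 2.
CREATE BUFFERPOOL APP_BP
    SIZE 10000
    EXCEPT ON MEMBER 2 SIZE 20000
    PAGESIZE 8K;
```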

Buffer pool monitor elements

You can review a number of monitor elements specific to the GBP and LBPs to monitor the overall performance of the DB2 pureScale Feature. Some monitor elements report memory usage relative to the cf_gbp_sz database configuration parameter. For more information about viewing memory usage levels for the cluster caching facility, see the topic "MON_GET_CF table function - Get cluster caching facility metrics".
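A hedged example of such a query, assuming the MON_GET_CF table function and the GBP size columns it returns in this release:

```sql
-- Illustrative only: passing NULL reports on all cluster caching
-- facilities; the column names are assumptions based on MON_GET_CF.
SELECT ID,
       CURRENT_CF_GBP_SIZE,
       CONFIGURED_CF_GBP_SIZE,
       TARGET_CF_GBP_SIZE
FROM TABLE(MON_GET_CF(CAST(NULL AS SMALLINT))) AS cfinfo;
```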

There are also a number of monitor elements for tracking the number of physical page reads, logical page reads, and invalid pages for the GBP and LBPs. For a list of monitor elements for the DB2 pureScale Feature, see "New and changed monitor elements".
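As a hedged sketch, these elements can be retrieved through the MON_GET_BUFFERPOOL table function; the element names shown are assumptions based on the GBP read and invalid-page monitor elements, and -2 requests data from all members:

```sql
-- Illustrative only: the POOL_DATA_GBP_* element names are
-- assumptions.
SELECT VARCHAR(BP_NAME, 20) AS BP_NAME,
       POOL_DATA_GBP_L_READS,
       POOL_DATA_GBP_P_READS,
       POOL_DATA_GBP_INVALID_PAGES
FROM TABLE(MON_GET_BUFFERPOOL(NULL, -2)) AS bp;
```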