DB2 10.5 for Linux, UNIX, and Windows

Extreme capacity

The IBM® DB2® pureScale® Feature can scale with near-linear efficiency and high predictability. Adding capacity is as simple as adding new members to the instance.
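In DB2 10.5, a member is added to a pureScale instance with the db2iupdt command. A minimal sketch; the host name, cluster interconnect netname, and instance name below are illustrative assumptions, so verify the exact option syntax against the db2iupdt command reference for your release:

```shell
# Add a new member on host member3 to the pureScale instance db2sdin1.
# member3-ib0 is the host's name on the high-speed interconnect.
# All names here are examples; substitute your own hosts and instance.
db2iupdt -add -m member3 -mnet member3-ib0 db2sdin1
```

Once the command completes and the member is started, it joins the instance and begins accepting workload-balanced database requests.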

High scalability

During testing with typical web commerce and OLTP workloads, the DB2 pureScale Feature demonstrated exceptional scaling efficiency across a range of cluster sizes; the maximum supported configuration provides extreme capacity. To scale out, your existing applications do not have to be aware of the topology of your DB2 pureScale environment.1
Figure 1. Scalability of a DB2 pureScale environment
When two more members join a DB2 pureScale data sharing instance, they immediately begin processing incoming database requests. Overall throughput almost doubles as the number of members doubles. For more information about scalability, see the DB2 pureScale Feature road map.

Scalability by design

Why does the DB2 pureScale Feature scale so well? The answer lies in the highly efficient design, which tightly integrates several advanced hardware and software technologies.

For example, the cluster caching facility (CF) handles instance-wide lock management and global caching with great efficiency. Without a dedicated component to handle locking and caching, the database servers in a cluster must communicate with each other to maintain vital locking and data consistency information. Each time a database server is added, the amount of communication "chatter" increases, reducing scale-out efficiency.
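The chatter problem described above can be illustrated with simple link counting: with all-to-all coordination, the number of communication paths grows quadratically with the number of servers, while a hub-and-spoke design through a central coordination point grows only linearly. A minimal sketch of that arithmetic (illustrative only, assuming a single coordination hub for simplicity; it does not model IBM's actual protocol):

```python
def pairwise_links(n: int) -> int:
    # All-to-all: every server must coordinate with every other
    # server, giving n * (n - 1) / 2 communication paths.
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    # Hub-and-spoke: each member talks only to the central
    # coordination point, giving n communication paths.
    return n

for n in (2, 4, 8, 16, 32):
    print(f"{n:>2} servers: all-to-all={pairwise_links(n):>3}, "
          f"hub-and-spoke={hub_links(n):>2}")
```

At 32 servers, all-to-all coordination requires 496 paths versus 32 through a hub, which is why offloading locking and caching to a dedicated component keeps scale-out efficient.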

Even in the maximum supported configuration, your DB2 pureScale environment communicates efficiently. Data pages in the group buffer pool (global cache) are shared between members and the cluster caching facility through Remote Direct Memory Access (RDMA), without requiring any processor time or I/O cycles on members. All operations are performed over the InfiniBand high-speed interconnect and do not require context switching or routing through a slower IP network stack. Round-trip communication times between cluster components are typically measured in the low tens of microseconds. The end result is an instance that is always aware of what data is in flight and where, but without the performance penalty.

1 During testing, database requests were workload balanced across members by the DB2 pureScale Feature, not routed. Update and select operations were randomized to ensure that the location of data on the shared disk storage had no effect on scalability.