Improving concurrency in data sharing environments

You can apply certain recommendations to reduce lock contention and improve concurrency in data sharing environments.

Before you begin

Before applying these particular data-sharing recommendations, ensure that you have applied the basic recommendations for improving concurrency. For information about the basic recommendations, see Improving concurrency.

About this task

Lock requests, along with group buffer pool requests, are the most critical factors for data sharing performance. To reduce lock contention in data sharing environments, you can apply the same approaches that you use to improve concurrency outside of data sharing. Some of those general recommendations are repeated below with additional emphasis, because lock avoidance is even more important in data sharing environments.

Procedure

You can use any of the following approaches to improve concurrency in a data sharing environment:

  • Specify the TRACKMOD NO and MEMBER CLUSTER options when you create table spaces. These options can reduce page P-lock and page latch contention on space map pages during heavy inserts into GBP-dependent table spaces. TRACKMOD NO cannot be used when incremental image copies are used for the table spaces. An example CREATE TABLESPACE statement is shown after this list.
  • When the MEMBER CLUSTER option is used, consider the use of LOCKSIZE ROW for insert-intensive workloads. Row-level locking might provide additional relief in the following forms:
    • Reduced page P-lock and page latch contention on data pages
    • Better space usage
    • Reduced working set of pages in the buffer pool

    However, do not use LOCKSIZE ROW without the MEMBER CLUSTER option for insert-intensive workloads that frequently insert data at the end of the table space. Doing so might result in excessive page P-lock contention on data pages and space map pages, in addition to the extra locking protocol that taking page P-locks requires. A second example statement, which combines MEMBER CLUSTER with LOCKSIZE ROW, is shown after this list.

  • Use LOCKSIZE ANY or PAGE as a design default. Consider LOCKSIZE ROW only when applications encounter significant lock contention, including deadlock and timeout.

    LOCKSIZE ANY is the default for CREATE TABLESPACE. It allows DB2 to choose the lock size, and DB2 usually chooses LOCKSIZE PAGE and LOCKMAX SYSTEM for non-LOB/non-XML table spaces. For LOB table spaces, DB2 chooses LOCKSIZE LOB and LOCKMAX SYSTEM. Similarly, for XML table spaces, DB2 chooses LOCKSIZE XML and LOCKMAX SYSTEM.

    Page-level locking generally results in fewer requests to lock and unlock data for sequential access and manipulation, which translates to reduced CPU cost. Page-level locking is also more likely to place sequentially inserted rows in the same data page. In data sharing environments, row-level locking can incur data page P-locks even when MAXROWS 1 is specified, whereas page-level locking with MAXROWS 1 avoids those data page P-locks. Example statements that illustrate LOCKSIZE ANY and MAXROWS 1 are shown after this list.

    Row-level locking provides better concurrency because the locks are more granular. However, the cost of each lock and unlock request is roughly the same for page-level and row-level locking, and row-level locking typically issues many more of those requests, so it is likely to incur additional CPU cost. Row-level locking might also result in more data page latch contention. Rows that are inserted sequentially by concurrent threads are less likely to be placed in the same data page under row-level locking.

  • Use the RELEASE(DEALLOCATE) bind option for applications that use frequent commit points and repeatedly access the same table spaces. For objects that do not have much concurrent activity within a member, this option avoids the cost of repeatedly releasing and reacquiring the same parent locks. It can also reduce the amount of false contention for transactions that use the thread. Example BIND subcommands for both RELEASE options are shown after this list.
  • Use the RELEASE(COMMIT) bind option for plans or packages that are used less frequently, to avoid excessive increases in EDM pool storage, lock storage, and thread-related storage.
  • Consider using randomized index key columns. In a data sharing environment, you can use randomized index key columns to reduce locking contention at the possible cost of more CPU usage, from increased locking and getpage operations, and more index page read and write I/Os.

    This technique is effective for reducing contention on certain types of equality predicates. For example, if you create an index on a timestamp column that is always filled with the current time, every inserted key is the greatest value in the index and causes contention on the last index leaf page. An index on a column of sequential values, such as invoice numbers, causes similar contention, especially in heavy transaction workload environments. In each case, using the RANDOM index order causes the values to be stored at random places in the index tree, which reduces the chance that consecutive insertions hit the same index leaf page and cause contention. An example CREATE INDEX statement that uses the RANDOM option is shown after this list.

    Although a randomized index can relieve contention problems for sets of similar or sequential values, it does not help with identical values. Identical values encode to the same value and are inserted at the same place in the index tree.
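
The following example statements illustrate the preceding recommendations. They are sketches only: every database, table space, table, index, collection, and package name is a hypothetical placeholder, and you would adapt the options to your own objects, workloads, and DB2 level.

This first sketch creates a table space with MEMBER CLUSTER and TRACKMOD NO to reduce space map page P-lock and page latch contention during heavy inserts, assuming that incremental image copies are not needed and that your DB2 level supports MEMBER CLUSTER for the table space type:

  CREATE TABLESPACE TSHEAVYI IN DBSAMPLE
    MAXPARTITIONS 10      -- hypothetical partition-by-growth sizing
    MEMBER CLUSTER        -- reduce cross-member contention on space map pages
    TRACKMOD NO           -- do not track changed pages; precludes incremental image copies
    LOCKSIZE PAGE;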
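
For an insert-intensive workload, this sketch pairs MEMBER CLUSTER with LOCKSIZE ROW; the names are again placeholders:

  CREATE TABLESPACE TSINSROW IN DBSAMPLE
    MAXPARTITIONS 10
    MEMBER CLUSTER        -- pair with LOCKSIZE ROW; row locks without it can increase page P-lock contention
    LOCKSIZE ROW;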
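
As a design default, you can let DB2 choose the lock size with LOCKSIZE ANY, or you can approximate a row-level effect by combining page-level locking with MAXROWS 1, as in these hypothetical sketches:

  -- LOCKSIZE ANY lets DB2 choose; DB2 usually chooses LOCKSIZE PAGE with LOCKMAX SYSTEM
  CREATE TABLESPACE TSDEFLT IN DBSAMPLE
    LOCKSIZE ANY;

  -- One row per page gives a row-level effect without the data page P-locks of LOCKSIZE ROW
  CREATE TABLESPACE TSONEROW IN DBSAMPLE
    LOCKSIZE PAGE
    MAXROWS 1;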
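
The RELEASE option is specified when you bind the plan or package. In these sketch subcommands, the collection and member names are placeholders:

  BIND PACKAGE(HIVOLCOL) MEMBER(HIVOLPGM) ACTION(REPLACE) RELEASE(DEALLOCATE)
  BIND PACKAGE(LOWUSECO) MEMBER(LOWPGM) ACTION(REPLACE) RELEASE(COMMIT)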
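
This last sketch creates an index in RANDOM key order on a hypothetical timestamp column so that consecutive insertions of ever-increasing values are scattered across the index tree instead of converging on the last leaf page:

  -- RANDOM key order scatters consecutive key values across the index
  CREATE INDEX IXORDRTS
    ON ORDERS (ORDER_TS RANDOM);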