Programming for concurrency

You can design your application programs to protect the integrity of accessed data without preventing other processes from accessing the same data for long periods of time.

About this task

Concurrency is the ability of more than one application process to access the same data at essentially the same time. Concurrency must be controlled to prevent lost updates and other undesirable effects, such as unrepeatable reads and access to uncommitted data.

One basic way that DB2® controls concurrency is by using locks for units of work. When a unit of work completes, all locks that were implicitly acquired by that unit of work are released, which enables a new unit of work to begin. The amount of processing time that is used by a unit of work in your program affects the length of time that DB2 prevents other users from accessing that locked data. When several programs try to use the same data concurrently, each program's unit of work should be as short as possible to minimize the interference between the programs.

Procedure

To design your applications for concurrency:

  • Program applications to access data in the same order. When two applications access the same rows of a table in the same order, one application might need to wait for the other, but they cannot deadlock. Therefore, try to program different applications to access rows in the same order and to access tables in the same order.
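
    For example, two batch programs that update the same table can avoid deadlocks by processing rows in ascending key order. A minimal sketch, assuming a hypothetical ACCOUNTS table with primary key ACCT_ID (table and column names are illustrative, not from this topic):

    ```sql
    -- Both programs walk the keys in the same ascending order, so
    -- neither program can hold a lock "behind" the other.
    DECLARE C1 CURSOR FOR
      SELECT ACCT_ID
        FROM ACCOUNTS
        WHERE BRANCH_ID = :branch
        ORDER BY ACCT_ID;

    -- For each fetched key, update the row by its primary key:
    UPDATE ACCOUNTS
      SET BALANCE = BALANCE + :delta
      WHERE ACCT_ID = :acct_id;
    ```

    A searched update by primary key is used here because a cursor that specifies ORDER BY is read-only and cannot also specify FOR UPDATE OF.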
  • Commit work as soon as doing so is practical, to avoid unnecessary lock contention, even in read-only applications.

    Taking commit points frequently in a long running unit of recovery (UR) has the following benefits at the possible cost of more CPU usage and log write I/Os:

    • Reduces lock contention, especially in a data sharing environment
    • Improves the effectiveness of lock avoidance, especially in a data sharing environment
    • Reduces the elapsed time for DB2 system restart following a system failure
    • Reduces the elapsed time for a unit of recovery to roll back following an application failure or an explicit rollback request by the application
    • Provides more opportunity for utilities, such as online REORG, to break in

    Consider using the URCHKTH subsystem parameter or the URLGWTH subsystem parameter to identify applications that do not commit frequently. URCHKTH identifies when too many checkpoints have occurred without a UR issuing a commit; it is helpful in monitoring overall system activity. URLGWTH detects applications that might write too many log records between commit points, potentially creating a lengthy recovery situation for critical tables.

    Even though an application might conform to the commit frequency standards of the installation under normal operational conditions, variation can occur based on system workload fluctuations. For example, a low-priority application might issue a commit frequently on a system that is lightly loaded. However, under a heavy system load, the application might be preempted and, as a result, might violate the limit set by the URCHKTH subsystem parameter. For this reason, add logic to your application to commit based on time elapsed since the last commit, and not solely based on the amount of SQL processing performed. In addition, take frequent commit points in a long running unit of work that is read-only to reduce lock contention and to provide opportunities for utilities, such as online REORG, to access the data.

    Committing frequently is equally important for objects that are not logged and objects that are logged. Make sure, for example, that you commit work frequently even if the work is done on a table space that is defined with the NOT LOGGED option. Even when a given transaction modifies only tables that reside in not logged table spaces, a unit of recovery is still established before updates are performed. Undo processing will continue to read the log in the backward direction looking for undo log records that must be applied until it detects the beginning of this unit of recovery as recorded on the log. Therefore, such transactions should perform frequent commits to limit the distance undo processing might have to go backward on the log to find the beginning of the unit of recovery.

  • Include logic in your application program to retry after a deadlock or timeout, so that the application can recover from the contention situation without assistance from operations personnel. You can use the following methods to determine whether a timeout or deadlock occurred:
    • The SQLERRD(3) field in the SQLCA
    • A GET DIAGNOSTICS statement
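
    For example, after a failing SQL statement, an application can distinguish deadlocks and timeouts and then retry the unit of work a bounded number of times. A sketch in embedded SQL (the host variable name is hypothetical):

    ```sql
    -- Retrieve the SQLCODE of the failing statement:
    EXEC SQL GET DIAGNOSTICS CONDITION 1
      :hv_sqlcode = DB2_RETURNED_SQLCODE;

    -- SQLCODE -911: deadlock or timeout; DB2 has already rolled back
    --               the unit of work.
    -- SQLCODE -913: deadlock or timeout; no automatic rollback occurred.
    -- The SQLERRD(3) field in the SQLCA identifies the cause:
    --   X'00C90088' = deadlock, X'00C9008E' = timeout.
    -- On -911 or -913, the application can re-execute the unit of work
    -- a limited number of times before reporting the failure.
    ```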
  • Bind most applications with the ISOLATION(CS) and CURRENTDATA(NO) options. These options enable DB2 to release locks early and avoid taking locks in many cases. ISOLATION(CS) typically enables DB2 to release acquired locks as soon as possible. CURRENTDATA(NO) typically enables DB2 to acquire the fewest locks, for better lock avoidance. When you use ISOLATION(CS) and CURRENTDATA(NO), consider setting the SKIPUNCI subsystem parameter to YES so that readers do not wait for the outcome of uncommitted inserts.
  • If you do not use ISOLATION(CS) and CURRENTDATA(NO), use the following bind options, in order of decreasing preference:
    1. ISOLATION(CS) with CURRENTDATA(YES), when data returned to the application must not be changed before your next FETCH operation.
    2. ISOLATION(RS), when data returned to the application must not be changed before your application commits or rolls back. However, you do not care if other application processes insert additional rows.
    3. ISOLATION(RR), when data evaluated as the result of a query must not be changed before your application commits or rolls back. New rows cannot be inserted into the answer set.
  • Use the ISOLATION(UR) option cautiously. UR isolation acquires almost no locks on rows or pages. It is fast and causes little contention, but it reads uncommitted data. Do not use it unless you are sure that your applications and end users can accept the logical inconsistencies that can occur.

    As an alternative, consider using a SKIP LOCKED DATA clause if omitting data is preferable to reading uncommitted data in your application.
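
    Both choices can also be made at the statement level. A sketch, assuming a hypothetical WORK_QUEUE table from which concurrent transactions pick ready tasks:

    ```sql
    -- With SKIP LOCKED DATA, rows that are incompatibly locked by
    -- other transactions are skipped rather than waited for, so only
    -- committed, available rows are returned:
    SELECT TASK_ID, PAYLOAD
      FROM WORK_QUEUE
      WHERE STATUS = 'READY'
      WITH CS
      SKIP LOCKED DATA;

    -- By contrast, a statement-level WITH UR clause reads rows without
    -- waiting, but can return uncommitted data:
    SELECT COUNT(*)
      FROM WORK_QUEUE
      WITH UR;
    ```

    Skipping locked rows fits work-queue patterns, where any available row is acceptable; it is not appropriate when the application must see every qualifying row.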

  • Use sequence objects to generate unique, sequential numbers. Using an identity column is one way to generate unique sequential numbers.

    However, as a column of a table, an identity column is associated with and tied to the table, and a table can have only one identity column. Your applications might need to use one sequence of unique numbers for many tables or several sequences for each table. As a user-defined object, sequences provide a way for applications to have DB2 generate unique numeric key values and to coordinate the keys across multiple rows and tables.

    The use of sequences can avoid the lock contention problems that can result when applications implement their own sequences, such as in a one-row table that contains a sequence number that each transaction must increment. With DB2 sequences, many users can access and increment the sequence concurrently without waiting. DB2 does not wait for a transaction that has incremented a sequence to commit before allowing another transaction to increment the sequence again.
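
    A sketch of this pattern, using hypothetical object names:

    ```sql
    -- Create a shared sequence instead of a one-row "next number" table:
    CREATE SEQUENCE ORDER_SEQ
      AS INTEGER
      START WITH 1
      INCREMENT BY 1
      CACHE 20;        -- preallocating values reduces synchronization cost

    -- Each transaction draws the next value without waiting for other
    -- transactions that have incremented the sequence to commit:
    INSERT INTO ORDERS (ORDER_ID, CUSTOMER_ID)
      VALUES (NEXT VALUE FOR ORDER_SEQ, :cust_id);
    ```

    Note that cached sequence values can be lost at shutdown, so this approach guarantees uniqueness but not gap-free numbering.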

  • Examine multi-row operations such as multi-row inserts, positioned updates, and positioned deletes, which have the potential of expanding the unit of work. This situation can affect the concurrency of other users that access the data. You can minimize contention by adjusting the size of the host-variable-array, committing between insert and update operations, and preventing lock escalation.
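
    For example, a multi-row insert can be processed in bounded batches, with a commit between batches. A sketch in embedded SQL (table, column, and host-variable names are hypothetical):

    ```sql
    -- Insert :num_rows rows from host-variable arrays in one statement.
    -- A smaller :num_rows keeps the unit of work, and the locks it
    -- holds, smaller.
    EXEC SQL
      INSERT INTO ORDERS (ORDER_ID, CUSTOMER_ID)
        VALUES (:hva_id, :hva_cust)
        FOR :num_rows ROWS
        NOT ATOMIC CONTINUE ON SQLEXCEPTION;

    -- Commit between batches to release locks promptly:
    EXEC SQL COMMIT;
    ```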
  • Use global transactions, which enable DB2 and other transaction managers to participate in a single transaction and thereby share the same locks and access the same data. The Resource Recovery Services attachment facility (RRSAF) relies on a z/OS® component called Resource Recovery Services (RRS). RRS provides system-wide services for coordinating two-phase commit operations across z/OS products. For RRSAF applications and IMS™ transactions that run under RRS, you can group together a number of DB2 agents into a single global transaction.

    A global transaction allows multiple DB2 agents to participate in a single global transaction and thus share the same locks and access the same data. When two agents that are in a global transaction access the same DB2 object within a unit of work, those agents do not deadlock or timeout with each other. The following restrictions apply:

    • Parallel Sysplex® is not supported for global transactions.
    • Because the "branches" of a global transaction share locks, uncommitted updates that are issued by one branch of the transaction are visible to other branches of the transaction.
    • Claim/drain processing is not supported across the branches of a global transaction, which means that attempts to issue CREATE, DROP, ALTER, GRANT, or REVOKE might deadlock or timeout if they are requested from different branches of the same global transaction.
    • LOCK TABLE might deadlock or timeout across the branches of a global transaction.
  • Use optimistic concurrency control. Optimistic concurrency control represents a faster, more scalable locking alternative to database locking for concurrent data access. It minimizes the time for which a given resource is unavailable for use by other transactions.

    When an application uses optimistic concurrency control, locks are obtained immediately before a read operation and released immediately. Update locks are obtained immediately before an update operation and held until the end of the transaction. Optimistic concurrency control uses the RID and a row change token to test whether data has been changed by another transaction since the last read operation.

    Because DB2 can determine when a row was changed, you can ensure data integrity while limiting the time that locks are held. With optimistic concurrency control, DB2 releases the row or page locks immediately after a read operation. DB2 also releases the row lock after each FETCH, taking a new lock on a row only for a positioned update or delete to ensure data integrity.

    To implement optimistic concurrency control, you must establish a row change timestamp column with a CREATE TABLE statement or an ALTER TABLE statement. The column must be defined with one of the following null characteristics:

    • NOT NULL GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP
    • NOT NULL GENERATED BY DEFAULT FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP

    After you establish a row change timestamp column, DB2 maintains the contents of this column. When you want to use this change token as a condition when making an update, you can specify an appropriate predicate for this column in a WHERE clause.
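
    A sketch of the complete pattern, using hypothetical object names:

    ```sql
    -- Add a row change timestamp column that DB2 maintains automatically:
    ALTER TABLE ORDERS
      ADD COLUMN UPDATE_TS TIMESTAMP NOT NULL
        GENERATED ALWAYS FOR EACH ROW ON UPDATE AS ROW CHANGE TIMESTAMP;

    -- Read a row and remember its change token:
    SELECT ROW CHANGE TIMESTAMP FOR ORDERS, STATUS
      INTO :old_ts, :status
      FROM ORDERS
      WHERE ORDER_ID = :id;

    -- Later, update only if no other transaction has changed the row:
    UPDATE ORDERS
      SET STATUS = :new_status
      WHERE ORDER_ID = :id
        AND ROW CHANGE TIMESTAMP FOR ORDERS = :old_ts;
    -- If zero rows are updated, another transaction changed the row;
    -- the application re-reads the row and retries.
    ```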