DB2 10.5 for Linux, UNIX, and Windows

softmax - Recovery range and soft checkpoint interval configuration parameter

This parameter determines the frequency of soft checkpoints and the recovery range, both of which influence the crash recovery process.

Important: The softmax database configuration parameter is deprecated in Version 10.5 and might be removed in a future release. For more information, see Some database configuration parameters are deprecated.

The softmax parameter is replaced with the new page_age_trgt_mcr and page_age_trgt_gcr parameters, which are both configured as a number of seconds.

Existing upgraded databases continue to use the softmax parameter. To check whether a database still uses softmax, query the database configuration and examine the value of this parameter. To switch from softmax to the new parameters, set the value of softmax to 0.

New databases are created with the value of softmax set to 0 by default.
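For example, assuming a database named SAMPLE (the database name and the page_age_trgt_mcr value shown here are illustrative, not prescribed), you could check the current setting and switch to the new parameters from the command line:

  # Check whether softmax is still in use (a value of 0 means the
  # new page_age_trgt_* parameters are in effect instead)
  db2 connect to SAMPLE
  db2 "SELECT NAME, VALUE FROM SYSIBMADM.DBCFG WHERE NAME = 'softmax'"

  # Switch to the new parameters by setting softmax to 0;
  # the page_age_trgt_mcr value of 240 seconds is illustrative only
  db2 update db cfg for SAMPLE using SOFTMAX 0
  db2 update db cfg for SAMPLE using PAGE_AGE_TRGT_MCR 240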

Configuration Type
  Database
Parameter Type
  Configurable
Default [range]
  DB2® pureScale® environment
    0 [ 1 - 65 535 ]
  Outside of a DB2 pureScale environment
    100 [ 1 - 100 * logprimary ]
  Note: The default value is subject to change by the DB2 Configuration Advisor after initial database creation.
Unit of Measure
  Percentage of the size of one primary log file
This parameter is used to:
  • Influence the number of log files that need to be recovered following a crash (such as a power failure). For example, if the default value of 100 is used, the database manager tries to keep the number of log files that need to be recovered to 1. If you specify 300 as the value of this parameter, the database manager tries to keep the number of log files that need to be recovered to 3 (see the example after this list).

    To influence the number of log files required for crash recovery, the database manager uses this parameter to trigger the page cleaners to ensure that pages older than the specified recovery window are already written to disk.

  • Determine the frequency of soft checkpoints. A soft checkpoint is the process of writing information to the log control file; this information is used to determine the starting point in the log if a database restart is required.
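For example, to widen the target recovery window to about three log files, as described in the first item in this list, you could issue the following command (the database name SAMPLE is illustrative):

  db2 update db cfg for SAMPLE using SOFTMAX 300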
At the time of a database failure resulting from an event such as a power failure, there might have been changes to the database that:
  • Have not been committed, but updated the data in the buffer pool
  • Have been committed, but have not been written from the buffer pool to the disk
  • Have been committed and written from the buffer pool to the disk
When a database is restarted, the log files are used to perform a crash recovery of the database, which ensures that the database is left in a consistent state (that is, all committed transactions are applied to the database, and all uncommitted transactions are not).
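If the autorestart database configuration parameter is enabled (its default), crash recovery starts automatically on the first connection after the failure; otherwise you can initiate it manually. A minimal sketch, assuming a database named SAMPLE:

  # Manually initiate crash recovery after a failure
  db2 restart database SAMPLE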

To determine which records from the log file need to be applied to the database, the database manager uses information recorded in a log control file. (The database manager actually maintains two copies of the log control file, SQLOGCTL.LFH.1 and SQLOGCTL.LFH.2, so that if one copy is damaged, the database manager can still use the other copy.) These log control files are periodically written to disk, and, depending on the frequency of this event, the database manager might be applying log records of committed transactions whose changes have already been written from the buffer pool to disk. These log records have no impact on the database, but applying them introduces additional processing time into the database restart process.

The log control files are always written to disk when a log file is full, and during soft checkpoints. You can use this configuration parameter to control the frequency of soft checkpoints.

The timing of soft checkpoints is based on the difference between the "current state" and the "recorded state", given as a percentage of the logfilsiz. The "recorded state" is determined by the oldest valid log record indicated in the log control files on disk, while the "current state" is determined by the log control information in memory. (The oldest valid log record is the first log record that the recovery process would read.) The soft checkpoint will be taken if the value calculated by the following formula is greater than or equal to the value of this parameter:
  ( (space between recorded and current states) / logfilsiz ) * 100 
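A worked instance of this calculation, using illustrative numbers: if logfilsiz is 1000 4-KB pages and the in-memory log control information is 1200 pages ahead of the state recorded on disk, then

  ( 1200 / 1000 ) * 100 = 120

With softmax at its default value of 100, the result of 120 is greater than or equal to 100, so a soft checkpoint is taken.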

Recommendation: You might want to increase or reduce the value of this parameter, depending on whether your acceptable recovery window is greater than or less than one log file. Lowering the value of this parameter will cause the database manager both to trigger the page cleaners more often and to take more frequent soft checkpoints. These actions can reduce both the number of log records that need to be processed and the number of redundant log records that are processed during crash recovery.
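For example, to shrink the target recovery window to about half of one log file, you could lower the value to 50 (again, the database name SAMPLE is illustrative):

  db2 update db cfg for SAMPLE using SOFTMAX 50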

Note, however, that more page cleaner triggers and more frequent soft checkpoints increase the processing time associated with database logging, which can affect the performance of the database manager. Also, more frequent soft checkpoints might not reduce the time required to restart a database if you have:
  • Very long transactions with few commit points.
  • A very large buffer pool and the pages containing the committed transactions are not written back to disk very frequently. (Note that the use of asynchronous page cleaners can help avoid this situation.)

In both of these cases, the log control information kept in memory does not change frequently and there is no advantage in writing the log control information to disk, unless it has changed.