This configuration parameter can be set to prevent disk full errors from being generated when the DB2® database manager cannot create a new log file in the active log path. Instead, the DB2 database manager will attempt to create the log file every five minutes until it succeeds. After each attempt, the DB2 database manager will write a message to the administration notification log. The only way to confirm that your application is hanging because of a log disk full condition is to monitor the administration notification log. Until the log file is successfully created, any user application that attempts to update table data will not be able to commit transactions. Read-only queries might not be directly affected; however, if a query needs to access data that is locked by an update request or a data page that is fixed in the buffer pool by the updating application, read-only queries will also be blocked.
Setting blk_log_dsk_ful to YES causes applications to hang when the DB2 database manager encounters a log disk full error. You are then able to resolve the error and the application can continue. A disk full situation can be resolved by moving old log files to another file system, by increasing the size of the file system so that hanging applications can complete, or by investigating and resolving any log archiving failures.
If blk_log_dsk_ful is set to NO, a transaction that receives a log disk full error will fail and be rolled back.
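A minimal sketch of enabling this behavior from the DB2 command line; the database name SAMPLE is a placeholder assumption:

```shell
# Hang applications on a log disk full condition instead of failing their
# transactions, so the condition can be resolved while they wait:
db2 update db cfg for SAMPLE using BLK_LOG_DSK_FUL YES
```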
"-fromnode=nodename"
"-fromowner=ownername"
"-fromnode=nodename -fromowner=ownername"
"-asnodename=proxynode"
where nodename is
the name of the TSM node that originally archived the log files, ownername is
the name of the TSM user that originally archived the log files,
and proxynode is the name of the shared TSM target
proxy node. Each log archive options field corresponds to one of the
log archive methods: logarchopt1 is used with logarchmeth1,
and logarchopt2 is used with logarchmeth2.

Increasing the log buffer size can result in more efficient input/output (I/O) activity associated with logging, because the log records are written to disk less frequently, and more records are written each time. However, recovery can take longer with a larger log buffer size value. You might also be able to use a higher logbufsz setting to reduce the number of reads from the log disk. (To determine whether your system would benefit from this, use the log_reads monitor element to check whether the amount of reading from the log disk is significant.)
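A sketch of how these settings might look from the DB2 command line; the database name SAMPLE and all values are illustrative assumptions, not recommendations:

```shell
# Pair the archive-options field with its archive method (TSM options are illustrative):
db2 update db cfg for SAMPLE using LOGARCHMETH1 TSM LOGARCHOPT1 "-fromnode=nodename -fromowner=ownername"
# Increase the log buffer (specified in 4 KB pages) so more log records are written per I/O:
db2 update db cfg for SAMPLE using LOGBUFSZ 2048
```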
There is a 1024 GB logical limit on the total active log space per log stream that you can configure. This limit is the result of the upper limit for each log file, which is 4 GB, and the maximum combined number of primary and secondary log files, which is 256.
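The arithmetic behind that limit can be checked directly from the two bounds stated above:

```shell
# 4 GB upper limit per log file, at most 256 combined primary and secondary files:
per_file_gb=4
max_files=256
echo "$(( per_file_gb * max_files )) GB"   # prints: 1024 GB
```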
The size of the log file has a direct bearing on performance. There is a performance cost for switching from one log to another, so from a pure performance perspective, the larger the log file size, the better. This parameter also determines the size of the log files that are archived. In that case, a larger log file size is not necessarily better, because a larger log file size can increase the chance of failure or cause a delay in log shipping scenarios. When considering active log space, it might be better to have a larger number of smaller log files. For example, if there are two very large log files and a transaction starts close to the end of one log file, only half of the log space remains available.
Every time a database is deactivated (all connections to the database are terminated), the log file that is currently being written is truncated. So, if a database is frequently being deactivated, it is better not to choose a large log file size because the DB2 database manager will create a large file only to have it truncated. You can use the ACTIVATE DATABASE command to avoid this cost because it prevents automatic database deactivation when the last client disconnects from the database.
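For example, you might keep the database activated between workloads so that the current log file is not repeatedly truncated; SAMPLE is a placeholder database name:

```shell
# Keep the database active even after the last application disconnects:
db2 activate database SAMPLE
# Explicitly deactivate later, when you actually want the database closed:
db2 deactivate database SAMPLE
```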
Assuming that you have an application that keeps the database open to minimize processing time when opening the database, the log file size should be determined by the amount of time it takes to make offline archived log copies.
Minimizing log file loss is also an important consideration when setting the log size. Archiving operates on one entire log file at a time. If you configure larger log files, you increase the time between archiving operations. If the medium containing the log fails, some transaction information will probably be lost. Decreasing the log file size increases the frequency of archiving, but can reduce the amount of information lost in a media failure because, on average, less log data remains unarchived at any given point in time.
If the value is set to 0, there is no limit to the percentage of total primary log space that a transaction can consume. If an application violates the max_log configuration, the application is forced to disconnect from the database and the transaction is rolled back.
You can override this behavior by setting the DB2_FORCE_APP_ON_MAX_LOG registry variable to FALSE. This will cause transactions that violate the max_log configuration to fail. The application can still commit the work completed by previous statements in the unit of work, or it can roll back the completed work to undo the unit of work.
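For example (a sketch; the registry variable name comes from the text above, and registry changes typically take effect only after the instance is restarted):

```shell
# Fail the violating transaction instead of forcing the application off:
db2set DB2_FORCE_APP_ON_MAX_LOG=FALSE
```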
This parameter, along with the num_log_span configuration parameter, can be useful when infinite active log space is enabled. If infinite logging is on (that is, if logsecond is -1) then transactions are not restricted to the upper limit of the number of log files (logprimary + logsecond). When the value of logprimary is reached, the DB2 database manager starts to archive the active logs, rather than failing the transaction. This can cause problems if, for instance, there is a long running transaction that has been left uncommitted (perhaps caused by an application with a logic error). If this occurs, the active log space keeps growing, which might lead to poor crash recovery performance. To prevent this, you can specify values for either one or both of the max_log and num_log_span configuration parameters.
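A minimal sketch of setting both limits together; the database name and the values are illustrative assumptions:

```shell
# Limit any transaction to 20% of primary log space and to spanning 10 log files:
db2 update db cfg for SAMPLE using MAX_LOG 20 NUM_LOG_SPAN 10
```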
The mirrorlogpath parameter also has an effect on log archiving behavior, which you can use to further improve resilience during rollforward recovery: when both mirrorlogpath and logarchmeth2 are set, logarchmeth2 archives log files from the mirror log path instead of archiving additional copies of the log files in the active log path. You can use this log archiving behavior to improve resilience, because a usable, archived log file from the mirror log path might still be available to continue a database recovery operation, even if a primary log file becomes corrupted due to a disk failure before archiving.
Because you can change the log path location, the logs needed for rollforward recovery might exist in different directories. You can change the value of this configuration parameter during a rollforward operation to allow you to access log files from a different mirror log path.
You must keep track of the location of the logs.
Changes are not applied until the database is in a consistent state. The configuration parameter database_consistent returns the status of the database.
To turn this configuration parameter off, set its value to DEFAULT.
Because you can change the log path location, the logs needed for rollforward recovery might exist in different directories or on different devices. You can change the value of this configuration parameter during a rollforward operation to allow you to access logs in multiple locations.
You must keep track of the location of the logs.
Changes are not applied until the database is in a consistent state. The configuration parameter database_consistent returns the status of the database.
The grouping of commits occurs only if the value of this parameter is greater than 1 and multiple applications attempt to commit their transactions at about the same time. When commit grouping is in effect, application commit requests are held until either one second has elapsed, or the number of commit requests equals the value of this parameter.
If an application violates the num_log_span setting, the application will be forced to disconnect from the database.
This parameter, along with the max_log configuration parameter, can be useful when infinite active log space is enabled. If infinite logging is on (that is, if logsecond is -1) then transactions are not restricted to the upper limit of the number of log files (logprimary + logsecond). When the value of logprimary is reached, the DB2 database manager starts to archive the active logs, rather than failing the transaction. This can cause problems if, for instance, there is a long running transaction that has been left uncommitted (perhaps caused by an application with a logic error). If this occurs, the active log space keeps growing, which might lead to poor crash recovery performance. To prevent this, you can specify values for either one or both of the max_log and num_log_span configuration parameters.
If logsecond is set to -1, you can specify a directory for the DB2 database manager to store active log files retrieved from the archive. (Active log files must be retrieved for rollback operations if they are no longer in the active log path).
If overflowlogpath is not specified, the DB2 database manager will retrieve the log files into the active log path. By specifying this parameter you can provide an additional storage resource where the DB2 database manager can place the retrieved log files. The benefit includes spreading the I/O cost to different disks, and allowing more log files to be stored in the active log path.
For example, if you are using the db2ReadLog API for replication, you can use overflowlogpath to specify a location for the DB2 database manager to search for log files that are needed for this API. If the log file is not found (in either the active log path or the overflow log path) and the database is configured for log archiving, the DB2 database manager will retrieve the log file. You can also use this parameter to specify a directory for the DB2 database manager to store the retrieved log files. The benefit comes from reducing the I/O cost on the active log path and allowing more log files to be stored in the active log path.
Setting overflowlogpath is useful when infinite logging is configured (i.e., when logsecond is set to -1). The DB2 database manager can store active log files retrieved from the archive in this path. (With infinite logging, active log files may need to be retrieved from archive, for rollback or crash recovery operations, if they are no longer in the active log path.)
If you have configured a raw device for the active log path, overflowlogpath must be configured if you want to set logsecond to -1, or if you want to use the db2ReadLog API.
To set overflowlogpath, specify a string of up to 242 bytes. The string must point to a path name, and it must be a fully qualified path name, not a relative path name. The path name must be a directory, not a raw device.
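A sketch of setting the parameter; the database name and path are illustrative assumptions:

```shell
# The path must be fully qualified and must be a directory, not a raw device:
db2 update db cfg for SAMPLE using OVERFLOWLOGPATH /db2/overflowlogs
```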
A primary log file, whether empty or full, requires the same amount of disk space. Thus, if you configure more logs than you need, you use disk space unnecessarily. If you configure too few logs, you can encounter a log-full condition. As you select the number of logs to configure, you must consider the size of each log file and whether your application can handle a log-full condition. The total log file size limit on active log space is 1024 GB.
If you are enabling an existing database for rollforward recovery, change the number of primary logs to the sum of the number of primary and secondary logs, plus one.
If the primary log files become full, secondary log files (of size logfilsiz) are allocated, one at a time as needed, up to the maximum number specified by this parameter. If this parameter is set to -1, the database is configured with infinite active log space. There is no limit on the size or number of in-flight transactions running on the database. Infinite active logging is useful in environments that must accommodate large jobs requiring more log space than you would normally allocate to the primary logs.
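As a sketch of how these parameters combine, the total configurable active log space is logfilsiz (specified in 4 KB pages) multiplied by the combined number of primary and secondary log files; the values below are illustrative assumptions, not recommendations:

```shell
# Illustrative settings:
logfilsiz=4096    # 4 KB pages per log file -> 16 MB per file
logprimary=13
logsecond=12      # set to -1 instead for infinite active log space

# Total configurable active log space in MB:
total_mb=$(( logfilsiz * 4 * (logprimary + logsecond) / 1024 ))
echo "${total_mb} MB"   # prints: 400 MB
```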