Q Replication and Event Publishing

Descriptions of asnqapp parameters

These descriptions provide detail on the asnqapp parameters, their defaults, and why you might want to change the default in your environment.

apply_server

For z/OS Default: None

For Linux, UNIX, Windows Default: apply_server=value of DB2DBDFT environment variable, if it is set

The apply_server parameter identifies the database or subsystem where a Q Apply program runs, and where its control tables are stored. The control tables contain information about targets, Q subscriptions, WebSphere® MQ queues, and user preferences. The Q Apply server must be the same database or subsystem that contains the targets.
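For example, on Linux, UNIX, and Windows you can name the Q Apply server explicitly rather than relying on the DB2DBDFT environment variable. In this sketch, TARGETDB is a hypothetical database name:

```shell
# Start Q Apply against an explicitly named target database.
# TARGETDB is a placeholder; substitute your own database name.
asnqapp apply_server=TARGETDB
```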

apply_schema

Default: apply_schema=ASN

The apply_schema parameter lets you distinguish between multiple instances of the Q Apply program on a Q Apply server.

The schema identifies one Q Apply program and its control tables. Two Q Apply programs with the same schema cannot run on a server.

A single Q Apply program can create multiple browser threads. Each browser thread reads messages from a single receive queue. Because of this, you do not need to create multiple instances of the Q Apply program on a server to divide the flow of data that is being applied to targets.

For z/OS On z/OS®, no special characters are allowed in the Q Apply schema except for the underscore (_).
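For example, to run two independent Q Apply instances against the same server, give each a distinct schema. The server and schema names here are placeholders:

```shell
# Each instance gets its own schema and therefore its own set of
# control tables on the same Q Apply server
asnqapp apply_server=TARGETDB apply_schema=ASN1
asnqapp apply_server=TARGETDB apply_schema=ASN2
```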

apply_path

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The apply_path parameter specifies the directory where a Q Apply program stores its work files and log file. By default, the path is the directory where you start the program. You can change this path.

For Windows If you start a Q Apply program as a Windows service, by default the program starts in the \SQLLIB\bin directory.
For z/OS Because the Q Apply program is a POSIX application, the default path depends on how you start the program:
  • If you start a Q Apply program from a USS command line prompt, the path is the directory where you started the program.
  • If you start a Q Apply program using a started task or through JCL, the default path is the home directory in the USS file system of the user ID that is associated with the started task or job.

To change the path, you can specify either a path name or a high-level qualifier (HLQ), such as //QAPP. When you use an HLQ, sequential files are created that conform to the naming conventions for z/OS sequential data sets. The sequential data sets are relative to the user ID that is running the program. Otherwise, these file names are similar to the names that are stored in an explicitly named directory path, with the HLQ concatenated as the first part of the file name, for example sysadm.QAPPV9.filename. Using an HLQ might be convenient if you want the Q Apply log and LOADMSG files to be system-managed (SMS).

If you want the Q Apply started task to write to a .log data set with a user ID other than the ID that is executing the task (for example, TSOUSER), you must specify a single quotation mark (') as an escape character when using the SYSIN format for input parameters to the started task. For example, if you want to use the high-level qualifier JOESMITH, the user ID TSOUSER that runs the Q Apply program must have RACF® authority to write data sets with the high-level qualifier JOESMITH, as in the following example:

//SYSIN    DD  *
 APPLY_PATH=//'JOESMITH
/*     

You can set the apply_path parameter when you start the Q Apply program, or you can change its saved value in the IBMQREP_APPLYPARMS table. You cannot alter this parameter while the Q Apply program is running.
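As a sketch, assuming a server named TARGETDB and the default ASN schema, you can set the path at startup or change its saved value with SQL. The APPLY_PATH column name follows the usual IBMQREP_APPLYPARMS pattern; verify it against your version:

```shell
# Override the work-file and log directory for this run
asnqapp apply_server=TARGETDB apply_schema=ASN apply_path=/home/qrepl/qapply

# Or change the saved value; it takes effect the next time Q Apply
# starts, because this parameter cannot be altered while it is running
db2 "UPDATE ASN.IBMQREP_APPLYPARMS SET APPLY_PATH = '/home/qrepl/qapply'"
```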

applydelay

Default: applydelay=0 seconds

Method of changing: When Q Apply starts

The applydelay parameter controls the amount of time in seconds that the Q Apply program waits before replaying each transaction at the target. The delay is based on the source commit time of the transaction. Q Apply delays applying transactions until the current time reaches or exceeds the source transaction commit time plus the value of applydelay. Changes at the source database are captured and sent to the receive queue, where they wait during the delay period.

This parameter can be used, for example, to maintain multiple copies of a source database at different points in time for failover in case of problems at the source system. For example, if a user accidentally deletes data at the primary system, a copy of the database exists where the data is still available.

The applydelay parameter has no effect on the applyupto or autostop parameters.

Important: If you plan to use the applydelay parameter, ensure that the receive queue has enough space to hold messages that accumulate during the delay period.
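For example, to keep a standby copy one hour behind the source (the server name is a placeholder):

```shell
# Delay each transaction until 3600 seconds past its source commit
# time. Make sure the receive queue can absorb an hour of messages.
asnqapp apply_server=STANDBYDB apply_schema=ASN applydelay=3600
```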

applyupto

Default: None

Method of changing: When Q Apply starts

The applyupto parameter identifies a timestamp that instructs the Q Apply program to stop after processing transactions that were committed at the source on or before one of the following times:
  • A GMT timestamp that you specify
  • The value of CURRENT_TIMESTAMP, if you specify the CURRENT_TIMESTAMP keyword

You can optionally specify the WAIT or NOWAIT keywords to control when Q Apply stops:

WAIT (default)
Q Apply does not stop until it receives and processes all transactions up to the specified GMT timestamp or the value of CURRENT_TIMESTAMP, even if the receive queue becomes empty.
NOWAIT
Q Apply stops after it processes all transactions on the receive queue, even if it has not seen a transaction with a commit timestamp that matches or exceeds the specified GMT timestamp or the value of CURRENT_TIMESTAMP.

The applyupto parameter applies to all browser threads of a Q Apply instance. Each browser thread stops when it reads a message on its receive queue with a source commit timestamp that matches or exceeds the specified time. The Q Apply program stops when all of its browser threads determine that all transactions with a source commit timestamp prior to and including the applyupto timestamp have been applied. All transactions with a source commit time greater than the specified GMT timestamp stay on the receive queue and are processed the next time the Q Apply program runs.

The timestamp must be specified in Greenwich mean time (GMT) in a full or partial timestamp format. The full timestamp uses the following format: YYYY-MM-DD-HH.MM.SS.mmmmmm. For example, 2007-04-10-10.35.30.555555 is the GMT timestamp for April 10th, 2007, 10:35 AM, 30 seconds, and 555555 microseconds.

You can specify the partial timestamp in one of the following formats:

YYYY-MM-DD-HH.MM.SS
For example, 2007-04-10-23.35.30 is the partial GMT timestamp for April 10th, 2007, 11:35 PM, 30 seconds.
YYYY-MM-DD-HH.MM
For example, 2007-04-10-14.30 is the partial GMT timestamp for April 10th, 2007, 1:30 PM.
YYYY-MM-DD-HH
For example, 2007-04-10-01 is the partial GMT timestamp for April 10th, 2007, 1:00 AM.
HH.MM
For example, 14.55 is the partial GMT timestamp for today at 2:55 PM.
HH
For example, 14 is the partial GMT timestamp for today at 2 PM.
The HH.MM format can be helpful if you schedule a task to start the Q Apply program every day at 1 AM Pacific Standard Time (PST) and want to stop the program after it processes the transactions that were committed at the source with a GMT timestamp on or before 4 AM PST. Because PST is GMT-8, 4 AM PST corresponds to 12.00 GMT. For example, run the following task at 1 AM PST and set the applyupto parameter to end the task at 4 AM PST:
asnqapp apply_server=MYTESTSERVER apply_schema=ASN applyupto=12.00

During daylight saving time, the difference between GMT and local time might change depending on your location. For example, the Pacific time zone is GMT-8 hours during fall and winter, but GMT-7 hours during daylight saving time in spring and summer.

Restriction: You cannot specify both the autostop parameter and the applyupto parameter.

You might want to set the heartbeat interval to a value greater than zero so that the Q Apply program can tell if the time value specified in the applyupto parameter has passed.
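A sketch of the NOWAIT variant, using the same placeholder server as the earlier example. The exact placement of the WAIT or NOWAIT keyword should be verified against the asnqapp command syntax for your version:

```shell
# Drain the receive queue and stop, even if no transaction with a
# commit time at or past 12.00 GMT has arrived yet. The value is
# quoted so the shell passes the keyword through with the parameter.
asnqapp apply_server=MYTESTSERVER apply_schema=ASN "applyupto=12.00 NOWAIT"
```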

For z/OS

arm

Default: None

Method of changing: When Q Apply starts

You can use the arm=identifier parameter to specify a unique identifier that the Automatic Restart Manager uses to automatically restart a stopped Q Apply instance. The alphanumeric value that you supply is appended to the ARM element name that Q Apply generates for itself: ASNQAxxxxyyyy (where xxxx is the data-sharing group attach name and yyyy is the DB2® member name). You can supply a string of any length for the arm parameter, but the Q Apply program concatenates only up to three characters to the current name. If necessary, the Q Apply program pads the name with blanks to make a unique 16-byte name.

autostop

Default: autostop=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The autostop parameter lets you direct a Q Apply program to automatically stop when there are no transactions to apply. By default (autostop=n), a Q Apply program keeps running when queues are empty and waits for transactions to arrive.

Typically, the Q Apply program is run as a continuous process whenever the target database is active, so in most cases you would keep the default (autostop=n). Set autostop=y only for scenarios where the Q Apply program is run at set intervals, such as when you synchronize infrequently connected systems, or in test scenarios.

If you set autostop=y, the Q Apply program shuts down after all receive queues are emptied once. When the browser thread for each receive queue detects that the queue has no messages, the thread stops reading from the queue. After all threads stop, the Q Apply program stops. Messages might continue to arrive on queues for which the browser thread has stopped, but the messages will collect until you start the Q Apply program again.

Restriction: You cannot specify both the autostop parameter and the applyupto parameter.
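For example, for an infrequently connected system you might start Q Apply from a scheduler and let it exit on its own. The crontab entry and names here are illustrative:

```shell
# Apply whatever has accumulated, then exit once every receive
# queue has been drained
asnqapp apply_server=TARGETDB apply_schema=ASN autostop=y

# Example crontab entry: run the batch apply every day at 1 AM
# 0 1 * * * asnqapp apply_server=TARGETDB apply_schema=ASN autostop=y
```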
For Linux, UNIX, Windows

buffered_inserts

Default: buffered_inserts=n

Method of changing: When Q Apply starts

The buffered_inserts parameter specifies whether the Q Apply program uses buffered inserts, which can improve performance in some partitioned databases that are dominated by INSERT operations. If you specify buffered_inserts=y, Q Apply internally binds appropriate files with the INSERT BUF option. This bind option enables the coordinator node in a partitioned database to accumulate inserted rows in buffers rather than forwarding them immediately to their destination partitions. When a buffer fills, or when the program encounters another SQL statement (such as an UPDATE, a DELETE, or an INSERT into a different table) or a COMMIT or ROLLBACK, all of the rows in the buffer are sent together to the destination partition.

You might see additional performance gains by combining the use of buffered inserts with the commit_count parameter.

When buffered inserts are enabled, Q Apply does not perform exception handling. Any conflict or error prompts Q Apply to stop reading from the queue. To recover past the point of an exception, you must start message processing on the queue and start Q Apply with buffered_inserts=n.
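A sketch that combines the two parameters; the values are illustrative, not recommendations:

```shell
# Insert-dominated partitioned target: buffer inserts per partition
# and commit every 10 transactions instead of after each one.
# Tune both values for your workload.
asnqapp apply_server=TARGETDB apply_schema=ASN buffered_inserts=y commit_count=10
```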

For z/OS

caf

Default: caf=n

Method of changing: When Q Apply starts

The Q Apply program runs with the default of Recoverable Resource Manager Services (RRS) connect. You can override this default and prompt the Q Apply program to use the Call Attach Facility (CAF) by specifying the caf=y option.

If RRS is not available, you receive a message and the Q Apply program switches to CAF. The message warns that the program could not initialize a connection because RRS is not started and that it will attempt to use CAF instead. The program runs correctly with CAF connect.

classic_load_file_sz

Default: classic_load_file_sz=500000 rows

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

For z/OS Classic replication to z/OS target: Specifies the estimated number of rows in tables or views from a Classic replication data source that are to be loaded into target tables. The Q Apply program uses this estimate to calculate the DASD allocation of the data set that is used as input to the load utility. If you do not specify classic_load_file_sz, the Q Apply program uses 500,000 rows as the estimate. Use this parameter when the default allocation is too small.

This parameter applies only to automatic loads of z/OS target tables from Classic sources.

commit_count

Default: commit_count=1

Method of changing: When Q Apply starts

The commit_count parameter specifies the number of transactions that each Q Apply agent thread applies to the target table within a commit scope. By default, the agent threads commit after each transaction that they apply.

By increasing commit_count and grouping more transactions within the commit scope, you might see improved performance.

Recommendation: Use a higher value for commit_count only with row-level locking. This parameter requires careful tuning when used with a large number of agent threads because it could cause lock escalation resulting in lock timeouts and deadlock retries.

deadlock_retries

Default: deadlock_retries=3

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The deadlock_retries parameter specifies how many times the Q Apply program tries to reapply changes to target tables when it encounters an SQL deadlock or lock timeout. The default is three tries. This parameter also controls the number of times that the Q Apply program tries to insert, update, or delete rows from its control tables after an SQL deadlock.

After the limit is reached, if deadlocks persist the browser thread stops. You might want to set a higher value for deadlock_retries if applications are updating the target database frequently and you are experiencing a high level of contention. Or, if you have a large number of receive queues and corresponding browser threads, a higher value for deadlock_retries might help resolve possible contention in peer-to-peer and other multidirectional replication environments, as well as at control tables such as the IBMQREP_DONEMSG table.

Restriction: You cannot lower the default value for deadlock_retries.
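For example, to raise the saved retry limit for a high-contention target (the ASN schema is assumed):

```shell
# Allow up to 10 retries after a deadlock or lock timeout.
# Recall that the value cannot be set below the default of 3.
db2 "UPDATE ASN.IBMQREP_APPLYPARMS SET DEADLOCK_RETRIES = 10"
```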

dftmodelq

Default: None

Method of changing: When Q Apply starts

By default, the Q Apply program uses IBMQREP.SPILL.MODELQ as the name for the model queue that it uses to create spill queues for the loading process. To specify a different default model queue name, specify the dftmodelq parameter. The following list summarizes the behavior of the parameter:

If you specify dftmodelq when you start Q Apply
For each Q subscription, Q Apply will check to see if you specified a model queue name for the Q subscription by looking at the value of the MODELQ column in the IBMQREP_TARGETS control table:
  • If the value is NULL or IBMQREP.SPILL.MODELQ, then Q Apply will use the value that you specify for the dftmodelq parameter.
  • If the column contains any other non-NULL value, then Q Apply will use the value in the MODELQ column and will ignore the value that you specify for the dftmodelq parameter.
If you do not specify dftmodelq when you start Q Apply
Q Apply will use the value of the MODELQ column in the IBMQREP_TARGETS table. If the value is NULL, Q Apply will default to IBMQREP.SPILL.MODELQ.
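For example, to substitute a site-specific model queue (the queue name is a placeholder):

```shell
# Q subscriptions whose MODELQ value is NULL or IBMQREP.SPILL.MODELQ
# will use this model queue when Q Apply creates spill queues
asnqapp apply_server=TARGETDB apply_schema=ASN dftmodelq=MYSITE.SPILL.MODELQ
```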
For z/OS

eif_conn1 (z/OS)

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The eif_conn1 parameter specifies connection information for the primary event server for the Event Interface Facility (EIF) in Tivoli® NetView® Monitoring for GDPS®. Use this parameter in conjunction with enabling Q Apply event notification (event_gen=y).

You specify the connection information in the format address(port) where address is the host name or IPv4 address of the event server and (port) is the port that the Event Receiver monitors for incoming events.
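A sketch that enables event generation and names a primary event server; the host name and port are placeholders:

```shell
# The address(port) value is quoted so that the shell does not
# interpret the parentheses
asnqapp apply_server=TARGETDB apply_schema=ASN event_gen=y "eif_conn1=eifhost.example.com(5529)"
```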

For z/OS

eif_conn2 (z/OS)

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The eif_conn2 parameter specifies connection information for the backup event server for EIF. Use this parameter in conjunction with enabling Q Apply event notification (event_gen=y).

You specify the connection information in the format address(port) where address is the host name or IPv4 address of the event server and (port) is the port that the Event Receiver monitors for incoming events.

For z/OS

eif_hbint (z/OS)

Default: eif_hbint=10000 milliseconds (10 seconds)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The eif_hbint parameter determines how often a Q Apply program sends "heartbeat" messages to the EIF server to indicate that it is running and monitoring event conditions. The default and minimum values are 10000 milliseconds (10 seconds).

Heartbeat messages are for EIF only and do not go to the console or IBMQREP_APPEVENTS table. These messages are unconditionally generated, even if other events are also generated within the heartbeat interval. The only exception is if Q Apply stops processing messages on the receive queue for the specified replication queue map.

event_gen

Default: event_gen=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The event_gen parameter specifies whether the Q Apply program creates a separate thread to check in-memory monitoring data and issue events when certain conditions occur or thresholds are exceeded. By default, Q Apply does not generate events, but you can invoke this function by specifying event_gen=y.

You can use events to speed the response to certain conditions, for example the latency of replicated transactions exceeding a desired level or a problem that forces Q Apply to stop reading from a receive queue. You define the events for which you want to receive notification by inserting rows in the IBMQREP_APPEVTDEFS control table. You can specify whether the events are sent to the console, the IBMQREP_APPEVENTS control table, or sent to the Event Interface Facility (EIF) over a TCP/IP IPv4 socket for use by Tivoli NetView Monitoring for GDPS as part of the GDPS Active-Active solution.

event_interval

Default: event_interval=3000 milliseconds (3 seconds)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The event_interval parameter determines how often a Q Apply program collects end-to-end replication latency values for generating events. The default is every 3000 milliseconds (3 seconds) and the minimum value is 1000 milliseconds (1 second). A longer interval might provide more data on which to base responses to events and reduces the data collection overhead. A shorter interval allows faster responses. You should determine a value for this parameter based on the type of events that are defined for your environment.

event_limit

Default: event_limit=10080 minutes (7 days)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The event_limit parameter specifies how old the rows must be in the IBMQREP_APPEVENTS table before the rows are eligible for pruning. By default, rows that are older than 10080 minutes (7 days) are pruned. At most five rows are inserted at each event interval. Adjust the event limit based on your needs.

gdps_total_num_cg_override

Default: None

Method of changing: When Q Apply starts

The gdps_total_num_cg_override parameter enables you to override the field consistency_group_total in EIF messages. The value for this parameter corresponds to the number of consistency groups (replication queue maps) that are defined in the workload for GDPS active-active continuous availability. You should specify the gdps_total_num_cg_override parameter if the GDPS workload spans more than one multiple consistency group. Otherwise, the number of consistency groups that are reported to GDPS in EIF messages for a workload is the number of consistency groups in the multiple consistency group.

ignbaddata

Default: None

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Note: This parameter applies only if the Q Apply program uses International Components for Unicode (ICU) for code page conversion (if the code page of the source database and the code page that Q Apply uses are different).

The ignbaddata parameter specifies whether the Q Apply program checks for illegal characters in data from the source and continues processing even if it finds illegal characters.

If you specify ignbaddata=y, Q Apply checks for illegal characters and, if it finds any, reports an exception and continues processing.

A value of n prompts Q Apply to neither check for illegal characters nor report exceptions for them. With this option, the row might be applied to the target table if DB2 does not reject the data. If the row is applied, Q Apply continues processing with the next row. If the bad data prompts an SQL error, Q Apply follows the error action that is specified for the Q subscription and reports an exception.

insert_bidi_signal

Default: insert_bidi_signal=y

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The insert_bidi_signal parameter specifies whether the Q Capture and Q Apply programs use signal inserts to prevent recapture of transactions in bidirectional replication.

By default, the Q Apply program inserts P2PNORECAPTURE signals into the IBMQREP_SIGNAL table to instruct the Q Capture program at its same server not to recapture applied transactions at this server.

When there are many bidirectional Q subscriptions, the number of signal inserts can affect replication performance. By specifying insert_bidi_signal=n, the Q Apply program does not insert P2PNORECAPTURE signals. Instead, you insert Q Apply's AUTHTKN information into the IBMQREP_IGNTRAN table, which instructs the Q Capture program at the same server to not capture any transactions that originated from the Q Apply program, except for inserts into the IBMQREP_SIGNAL table.

For improved performance when you use insert_bidi_signal=n, update the IBMQREP_IGNTRAN table to change the value of the IGNTRANTRC column to N (no tracing). This change prevents the Q Capture program from inserting a row into the IBMQREP_IGNTRANTRC table for each transaction that it does not recapture.
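A sketch of the two steps, assuming the ASN schema. The IBMQREP_IGNTRAN column names (AUTHTKN, IGNTRANTRC) and the token value are taken from the text and are placeholders; verify them against the control table layout for your version:

```shell
# 1. Tell Q Capture at this server to ignore Q Apply's transactions
#    and not to trace each ignored transaction
db2 "INSERT INTO ASN.IBMQREP_IGNTRAN (AUTHTKN, IGNTRANTRC) VALUES ('QAPPLYTKN', 'N')"

# 2. Start Q Apply without P2PNORECAPTURE signal inserts
asnqapp apply_server=TARGETDB apply_schema=ASN insert_bidi_signal=n
```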

loadcopy_path

Default: loadcopy_path=Value of apply_path parameter

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Use with the DB2 High Availability Disaster Recovery (HADR) feature: You can use the loadcopy_path parameter instead of the DB2_LOAD_COPY_NO_OVERRIDE registry variable when the Q Apply server is the primary server in an HADR configuration and tables on the primary server are loaded by the Q Apply program calling the DB2 LOAD utility. HADR sends log files to the standby site, but when a table on the primary server is loaded by the DB2 LOAD utility, the inserts are not logged. Setting loadcopy_path to an NFS directory that is accessible from both the primary and standby servers prompts Q Apply to start the LOAD utility with the option to create a copy of the loaded data in the specified path. The standby server in the HADR configuration then looks for the copied data in this path.

load_data_buff_sz

Default: load_data_buff_sz=8

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Use with multidimensional clustering (MDC) tables: Specifies the number of 4KB pages for the DB2 LOAD utility to use as buffered space for transferring data within the utility during the initial loading of the target table. This parameter applies only to automatic loads using the DB2 LOAD utility.

By default, the Q Apply program starts the utility with the option to use a buffer of 8 pages, which is also the minimum value for this parameter. Load performance for MDC targets can be significantly improved by specifying a much higher number of pages.

logmarkertz

Default: logmarkertz=gmt

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The logmarkertz parameter controls the time zone that the Q Apply program uses when it inserts source commit data into the IBMSNAP_LOGMARKER column of consistent-change data (CCD) tables or point-in-time (PIT) tables. By default (logmarkertz=gmt), Q Apply inserts a timestamp in Greenwich mean time (GMT) to record when the data was committed at the source. You can specify logmarkertz=local and Q Apply inserts a timestamp in the local time of the Q Capture server.

Existing rows in CCD or PIT targets that were generated before the use of logmarkertz=local are not converted by Q Apply and remain in GMT unless you manually convert them.

The logmarkertz parameter does not affect stored procedure targets. The src_commit_timestamp IN parameter for stored procedure targets always uses GMT-based timestamps.

logreuse

Default: logreuse=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

Each Q Apply program keeps a log file that tracks its work history, such as when it starts and stops reading from queues, changes parameter values, prunes control tables, or encounters errors.

By default, the Q Apply program adds to the existing log file when the program restarts. This default lets you keep a history of the program's actions. If you don't want this history or want to save space, set logreuse=y. The Q Apply program clears the log file when it starts, then writes to the blank file.

The log is stored by default in the directory where the Q Apply program is started, or in a different location that you set using the apply_path parameter.

For z/OS The log file name is apply_server.apply_schema.QAPP.log, for example SAMPLE.ASN.QAPP.log. Also, if apply_path is specified with slashes (//) to use a high-level qualifier (HLQ), the file naming conventions for z/OS sequential data sets apply, and apply_schema is truncated to eight characters.

For Linux, UNIX, Windows The log file name is db2instance.apply_server.apply_schema.QAPP.log. For example, DB2.SAMPLE.ASN.QAPP.log.

logstdout

Default: logstdout=n

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

By default, a Q Apply program writes its work history only to the log. You can change the logstdout parameter if you want to see program history on the standard output (stdout) in addition to the log.

Error messages and some log messages (initialization, stop, subscription activation, and subscription deactivation) go to both the standard output and the log file regardless of the setting for this parameter.

You can specify the logstdout parameter when you start a Q Apply program with the asnqapp command. If you use the Replication Center to start a Q Apply program, this parameter is not applicable.

max_parallel_loads

Default: max_parallel_loads=1 (z/OS); 15 (Linux, UNIX, Windows)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The max_parallel_loads parameter specifies the maximum number of automatic load operations of target tables that Q Apply can start at the same time for a given receive queue. The default for max_parallel_loads differs depending on the platform of the target server:

z/OS
On z/OS the default is one load at a time because of potential issues with the DSNUTILS stored procedure that Q Apply uses to call the DB2 LOAD utility. Depending on your environment you can experiment with values higher than max_parallel_loads=1. If errors occur, reset the value to 1.
Linux, UNIX, Windows
On Linux, UNIX, and Windows the default is 15 parallel loads.

monitor_interval

Default: monitor_interval=60000 milliseconds (1 minute) on z/OS; 30000 milliseconds (30 seconds) on Linux, UNIX, and Windows

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The monitor_interval parameter tells a Q Apply program how often to insert performance statistics into the IBMQREP_APPLYMON and IBMQREP_MCGMON tables. You can view these statistics by using the Q Apply Throughput and Latency windows.

You can adjust the monitor_interval based on your needs:

If you want to monitor a Q Apply program's activity at a more granular level, shorten the monitor interval
For example, you might want to see the statistics for the number of messages on queues broken down by each 10 seconds rather than one-minute intervals.
Lengthen the monitor interval to view Q Apply performance statistics over longer periods
For example, if you view latency statistics for a large number of one-minute periods, you might want to average the results to get a broader view of performance. Seeing the results averaged for each half hour or hour might be more useful in your replication environment.
Important for Q Replication Dashboard users: When possible, you should synchronize the Q Apply monitor_interval parameter with the dashboard refresh interval (how often the dashboard retrieves performance information from the Q Capture and Q Apply monitor tables). The default refresh interval for the dashboard is 10 seconds (10000 milliseconds). If the value of monitor_interval is higher than the dashboard refresh interval, the dashboard refreshes when no new monitor data is available.
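For example, to match the dashboard's default 10-second refresh (the server and schema names are placeholders):

```shell
# Collect monitor statistics every 10000 ms so that each dashboard
# refresh finds new rows in IBMQREP_APPLYMON
asnqapp apply_server=TARGETDB apply_schema=ASN monitor_interval=10000
```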

monitor_limit

Default: monitor_limit=10080 minutes (7 days)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The monitor_limit parameter specifies how old the rows must be in the IBMQREP_APPLYMON and IBMQREP_MCGMON tables before the rows are eligible for pruning.

By default, rows that are older than 10080 minutes (7 days) are pruned. The IBMQREP_APPLYMON table provides statistics about a Q Apply program's activity. A row is inserted at each monitor interval. You can adjust the monitor limit based on your needs:

Increase the monitor limit to keep statistics
If you want to keep records of the Q Apply program's activity beyond one week, set a higher monitor limit.
Lower the monitor limit if you look at statistics frequently
If you monitor the Q Apply program's activity on a regular basis, you probably do not need to keep one week of statistics and can set a lower monitor limit.

You can set the monitor_limit parameter when you start the Q Apply program or while the program is running. You can also change its saved value in the IBMQREP_APPLYPARMS table.

For z/OS

multi_row_insert (z/OS)

Default: multi_row_insert=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The multi_row_insert parameter controls whether the Q Apply program uses multi-row insert SQL statements. This parameter is supported if the Q Apply program is at Version 10.2.1 or later and the target database is DB2 for z/OS Version 8 or later.

Inserting replicated rows in batches of 100 or fewer can reduce CPU consumption at the target server and increase throughput. Performance improvements might be greater for tables with fewer columns, smaller row sizes, and fewer indexes than for tables with more columns, larger row sizes, and more indexes. All of the rows that are part of a multi-row insert must be contiguous insert statements against the same table and in the same transaction.

If an insert fails for any row in the rowset, DB2 rolls back all of the changes in the rowset. Q Apply then switches to single-row insert mode to process all of the rows in the rowset that caused an error. The error is retried and handled with the error action and conflict action that are specified for the Q subscription.

You can allocate additional memory for Q Apply to use in performing multi-row inserts by increasing the value of the MRI_MEMORY_LIMIT column in the IBMQREP_RECVQUEUES table. The default value is 1024 KB per agent thread. A larger value for this parameter can enable the Q Apply program to group more rows into each multi-row insert operation. The allocation for MRI_MEMORY_LIMIT is separate from the overall memory that Q Apply uses, which is set in the MEMORY_LIMIT column in IBMQREP_RECVQUEUES.

The following restrictions apply:

Note: Unsubscribed TIMESTAMP columns with a default value of CURRENT TIMESTAMP are likely to be given identical values in multi-row insert mode. To use the multi-row-insert option, you should subscribe to the TIMESTAMP columns with default values.

nickname_commit_ct

Default: nickname_commit_ct=10

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

Federated targets: The nickname_commit_ct parameter specifies the number of rows after which the DB2 IMPORT utility commits changes to nicknames that reference a federated target table during the loading process. This parameter applies only to automatic loads for federated targets that use the IMPORT utility.

By default, Q Apply specifies that the IMPORT utility commits changes every 10 rows during the federated loading process. You might see improved load performance by raising the value of nickname_commit_ct. For example, a setting of nickname_commit_ct=100 lowers CPU overhead by reducing interim commits. However, more frequent commits protect against problems during the load because the IMPORT utility needs to roll back fewer rows if an error occurs.

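For example, an automatic federated load could be tuned at startup like this (the server and schema names are placeholders):

```sh
# Start Q Apply with a larger commit count for federated loads,
# so the IMPORT utility commits every 100 rows instead of every 10.
# FEDDB and ASN are example values.
asnqapp apply_server=FEDDB apply_schema=ASN nickname_commit_ct=100
```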

nmi_enable (z/OS)

Default: nmi_enable=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The nmi_enable parameter specifies whether the Q Apply program is enabled to provide a Network Management Interface (NMI) for monitoring Q Replication statistics from IBM® Tivoli NetView Monitoring for GDPS. The NMI client application must be on the same z/OS system as the Q Apply program. By default (nmi_enable=n), the interface is not enabled.

When you specify nmi_enable=y, the Q Apply program acts as an NMI server and listens on the socket that is specified by the nmi_socket_name parameter for client connection requests and data requests. Q Apply can support multiple client connections and has a dedicated thread to interact with NMI clients. The thread responds to requests in the order in which they arrive.

nmi_socket_name (z/OS)

Default: nmi_socket_name=/var/sock/group-attach-name_apply-schema_asnqapp

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The nmi_socket_name parameter specifies the name of the AF_UNIX socket where the Q Apply program listens for requests for statistical information from NMI client applications. You can specify this parameter to change the socket name that the program automatically generates. The socket file is generated in the directory /var/sock. The socket name is constructed by combining the file path, group attach name, Q Apply schema name, and the program name (asnqapp). An example socket name is /var/sock/V91A_ASN_asnqapp.

To use this parameter you must set nmi_enable=y.

After a Q Apply program is started with either a default or a user-defined NMI socket name, the name cannot be changed dynamically. To list the name of the current NMI file socket and all clients that are connected, you can use the status show details parameter of the MODIFY command with the Q Apply job name.
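On z/OS, the check described above might look like the following operator command, where the Q Apply job name QAPP1 is an assumed example:

```sh
# z/OS console (MODIFY) command: show Q Apply status details,
# including the current NMI socket name and connected clients.
F QAPP1,status show details
```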

oracle_empty_str

Default: oracle_empty_str=n

Method of changing: When Q Apply starts

The oracle_empty_str parameter specifies whether the Q Apply program replaces an empty string in VARCHAR columns with a space. By default, Q Apply leaves the empty string as is.

DB2 allows empty strings in VARCHAR columns. When a source DB2 VARCHAR column is mapped to an Oracle target, or to a DB2 server that is running with Oracle compatibility mode, the empty string is converted to a NULL value. The operation fails when the target column is defined with NOT NULL semantics.

With oracle_empty_str=y, Q Apply replaces the NULL value with a space just before applying the data to the target and after any codepage conversion. If you are using SQL expressions in any Q subscriptions, take the following considerations into account:

p2p_2nodes

Default: p2p_2nodes=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The p2p_2nodes parameter allows the Q Apply program to optimize performance in a peer-to-peer configuration with only two active servers by not logging conflicting deletes in the IBMQREP_DELTOMB table. Use the setting p2p_2nodes=y only for peer-to-peer replication between two active servers.

By default, the Q Apply program records conflicting DELETE operations in the IBMQREP_DELTOMB table. With p2p_2nodes=y the Q Apply program does not use the IBMQREP_DELTOMB table. This avoids any unnecessary contention on the table or slowing of Q Apply without affecting the program's ability to correctly detect conflicts and ensure data convergence.

Important: The Q Apply program does not automatically detect whether a peer-to-peer configuration has only two active servers. Ensure that the option p2p_2nodes=y is used only for a two-server peer-to-peer configuration. Using the option for configurations with more than two active servers might result in incorrect conflict detection and data divergence.
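A start command for the two-server case might look like this sketch (the server and schema names are placeholders):

```sh
# Enable the two-server peer-to-peer optimization at startup;
# PEERDB and ASN are example values.
asnqapp apply_server=PEERDB apply_schema=ASN p2p_2nodes=y
```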

prune_batch_size

Default: prune_batch_size=1000

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The prune_batch_size parameter specifies the number of rows that are deleted from the IBMQREP_DONEMSG table in one commit scope. The default is 1000 rows. The minimum value is 2.

The IBMQREP_DONEMSG table is an internal table used by the Q Apply program to record all transaction or administrative messages that are received. The records in this table help ensure that messages are not processed more than once (for example in the case of a system failure) before they are deleted. During regular execution, Q Apply follows the value of prune_batch_size when it deletes rows from the table.

Q Apply follows the value set for this parameter regardless of the setting for the prune_method parameter.
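Assuming the column in the IBMQREP_APPLYPARMS table matches the parameter name, the saved value could be changed with a SQL update such as the following sketch (the database and schema names are placeholders):

```sh
# Hedged sketch: persist a larger pruning batch size for the
# IBMQREP_DONEMSG table. TARGETDB and ASN are example values.
db2 connect to TARGETDB
db2 "UPDATE ASN.IBMQREP_APPLYPARMS SET PRUNE_BATCH_SIZE = 5000"
db2 connect reset
```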

prune_interval

Default: prune_interval=300 seconds (5 minutes)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The prune_interval parameter determines how often a Q Apply program looks for old rows to delete from the IBMQREP_APPLYMON, IBMQREP_APPLYTRACE, IBMQREP_MCGMON, and IBMQREP_APPEVENTS tables. By default, a Q Apply program looks for rows to prune every 300 seconds (5 minutes).

Your pruning frequency depends on how quickly these control tables grow, and what you intend to use them for:

Shorten the prune interval to manage monitor tables
A shorter prune interval might be necessary if the IBMQREP_APPLYMON table is growing too quickly because of a shortened monitor interval. If this table is not pruned often enough, it can exceed its table space limit, which forces a Q Apply program to stop. However, if the table is pruned too often or during peak times, pruning can interfere with application programs that run on the same system.
Lengthen the prune interval for record keeping
You might want to keep a longer history of a Q Apply program's performance by pruning the IBMQREP_APPLYTRACE and IBMQREP_APPLYMON tables less frequently.

The prune interval works in conjunction with the trace_limit and monitor_limit parameters, which determine when data is old enough to prune. For example, if the prune_interval is 300 seconds and the trace_limit is 10080 minutes, a Q Apply program tries to prune every 300 seconds. If the Q Apply program finds any rows in the IBMQREP_APPLYTRACE table that are older than 10080 minutes (7 days), it prunes them.

The prune_interval parameter does not affect pruning of the IBMQREP_DONEMSG table. Pruning of this table is controlled by the prune_method and prune_batch_size parameters.

prune_method

Default: prune_method=2

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The prune_method parameter specifies the method that the Q Apply program uses to delete unneeded rows from the IBMQREP_DONEMSG table. By default (prune_method=2), Q Apply prunes groups of rows based on the prune_batch_size value. A separate prune thread records which messages were applied and then issues a single range-based DELETE.

When you specify prune_method=1, Q Apply prunes rows from the IBMQREP_DONEMSG table one at a time. First Q Apply queries the table to see whether data from a message was applied, then it deletes the message from the receive queue, and then it prunes the corresponding row from the IBMQREP_DONEMSG table by issuing an individual SQL statement.

pwdfile

Default: pwdfile=apply_path/asnpwd.aut

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The pwdfile parameter specifies the name of the encrypted password file that the Q Apply program uses to connect to the Q Capture server. This connection is required only when a Q subscription specifies an automatic load that uses the EXPORT utility. When you use the asnpwd command to create the password file, the default file name is asnpwd.aut. If you create the password file with a different name or change the name, you must change the pwdfile parameter to match. The Q Apply program looks for the password file in the directory specified by the apply_path parameter.

For z/OS No password file is required.

You can set the pwdfile parameter when you start the Q Apply program, and you can change its saved value in the IBMQREP_APPLYPARMS table. You cannot change the value while the Q Apply program is running.
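A minimal sketch of creating the file with the asnpwd command, using example database, user, and password values:

```sh
# Create the default encrypted password file (asnpwd.aut) in the
# current directory and add credentials for connecting to the
# Q Capture server. SRCDB, repluser, and the password are
# placeholders, not values from this document.
asnpwd init
asnpwd add alias SRCDB id repluser password "s3cret"
```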

report_exception

Default: report_exception=y

Method of changing: When Q Apply starts

The report_exception parameter controls whether the Q Apply program inserts data into the IBMQREP_EXCEPTIONS table when a conflict or SQL error occurs at the target table but the row is applied anyway because the conflict action that was specified for the Q subscription is F (force). By default (report_exception=y), Q Apply inserts details into the IBMQREP_EXCEPTIONS table for each row that causes a conflict or SQL error at the target, regardless of whether the row was applied. If you specify report_exception=n, Q Apply does not insert data into the IBMQREP_EXCEPTIONS table when a row causes a conflict but is applied; it continues to insert data about rows that were not applied.

When report_exception=n, the Q Apply program also tolerates codepage conversion errors when writing SQL text into the IBMQREP_EXCEPTIONS table and continues normal processing.

richklvl

Default: richklvl=2

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The richklvl parameter specifies the level of referential integrity checking. By default (richklvl=2), the Q Apply program checks for RI-based dependencies between transactions to ensure that dependent rows are applied in the correct order.

If you specify richklvl=5, Q Apply checks for RI-based dependencies when a key value is updated in the parent table, a row is updated in the parent table, or a row is deleted from the parent table.

With richklvl=0, Q Apply does not check for RI-based dependencies.

When a transaction cannot be applied because of a referential integrity violation, the Q Apply program automatically retries the transaction until it is applied in the same order that it was committed at the source table.

spill_commit_count

Default: spill_commit_count=10

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The spill_commit_count parameter specifies how many rows are grouped together in a commit scope by the Q Apply spill agents that apply data that was replicated during a load operation. Increasing the number of rows that are applied before a COMMIT is issued can improve performance by reducing the I/O resources that are associated with frequent commits. Balance the potential for improvement with the possibility that fewer commits might cause lock contention at the target table and the IBMQREP_SPILLEDROW control table.

skiptrans

Default: None

Method of changing: When Q Apply starts

The skiptrans parameter specifies that the Q Apply program should not apply one or more transactions from one or more receive queues based on their transaction ID.

Stopping the program from applying transactions is useful in unplanned situations, for example:

You can also prompt the Q Capture program to ignore transactions. This action would be more typical when you can plan which transactions do not need to be replicated.

Note: Ignoring a transaction that was committed at the source server typically causes divergence between tables at the source and target. You might need to use the asntdiff and asntrep utilities to synchronize the tables.

startallq

Default: startallq=n (z/OS); y (Linux, UNIX, Windows)

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The startallq parameter specifies how Q Apply processes receive queues when it starts. With startallq=y, Q Apply puts all receive queues in active state and begins reading from them when it starts. When you specify startallq=n, Q Apply processes only the active receive queues when it starts.

You can use startallq=y to avoid having to issue the startq command for inactive receive queues after the Q Apply program starts. You can use startallq=n to keep disabled queues inactive when you start Q Apply.
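For example (the server, schema, and queue names are placeholders; the startq command is issued here through the asnqacmd operator interface):

```sh
# Start Q Apply without activating inactive receive queues,
# then activate a single queue by name when ready.
# TGTDB, ASN, and the queue name are example values.
asnqapp apply_server=TGTDB apply_schema=ASN startallq=n
asnqacmd apply_server=TGTDB apply_schema=ASN startq="QAPP.RECVQ1"
```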

term

Default: term=y

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The term parameter controls whether a Q Apply program keeps running when DB2 or the queue manager is unavailable.

By default (term=y), the Q Apply program terminates when DB2 or the queue manager is unavailable. You can change the default (term=n) if you want a Q Apply program to keep running while DB2 or the queue manager is unavailable. When DB2 or the queue manager becomes available again, Q Apply begins applying transactions where it left off without requiring you to restart the program.

Restriction: The setting term=n is not supported for federated targets.
Note: Regardless of the setting for term, if the WebSphere MQ sender or receiver channels stop, the Q Apply program keeps running because it cannot detect channel status. This situation causes replication to stop because the two queue managers cannot communicate. If you find that replication has stopped without any messages from the Q replication programs, check for WebSphere MQ errors. For example, check the channel status by using the WebSphere MQ DISPLAY CHSTATUS command.
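Because term can be changed while Q Apply is running, a dynamic change might look like this sketch, which uses the asnqacmd command (the server and schema names are placeholders):

```sh
# Keep Q Apply running across a planned DB2 or queue manager
# outage; TGTDB and ASN are example values.
asnqacmd apply_server=TGTDB apply_schema=ASN chgparms term=n
```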

trace_ddl

Default: trace_ddl=n

Methods of changing: When Q Apply starts; IBMQREP_APPLYPARMS table

The trace_ddl parameter specifies whether the Q Apply program logs the SQL text of the DDL operations that it performs at the target database when DDL operations at the source database are replicated. By default (trace_ddl=n), Q Apply does not log the SQL text. If you specify trace_ddl=y, Q Apply issues an ASN message with the text of the SQL statement to the Q Apply log file, standard output, and the IBMQREP_APPLYTRACE table. The SQL text is truncated at 1024 characters.

trace_limit

Default: trace_limit=10080 minutes (7 days)

Methods of changing: When Q Apply starts; while Q Apply is running; IBMQREP_APPLYPARMS table

The trace_limit parameter specifies how long rows remain in the IBMQREP_APPLYTRACE table before the rows can be pruned.

The Q Apply program inserts all informational, warning, and error messages into the IBMQREP_APPLYTRACE table. By default, rows that are older than 10080 minutes (7 days) are pruned at each pruning interval. Modify the trace limit depending on your need for audit information.



Last updated: 2013-10-25