Q Replication and Event Publishing

Peer-to-peer replication

In peer-to-peer replication (also known as multimaster replication), updates on any one server are replicated to all other associated servers.

Peer-to-peer replication has the following characteristics:

In peer-to-peer replication, you replicate copies of tables between multiple servers. The collection of all copies of a single table is called a logical table. Each server has a copy of the table. The different copies of the table can have different names and schemas, but the columns in each copy that are included in Q subscriptions must have identical names and compatible data types.

In peer-to-peer replication, data convergence is assured between all copies of the logical table, meaning that each copy of the table eventually achieves the same state as the other copies and has the most recent committed values. Because peer-to-peer replication is asynchronous, the copies of the tables might not converge until your applications stop making changes to all tables, all changes are replicated, and all messages are processed.

In this type of replication, you cannot manipulate the data by having the Q Apply program pass the data to a stored procedure. There is at least one Q Capture program and one Q Apply program running on each server that is part of a peer-to-peer configuration.

The control tables for the Q Capture and Q Apply programs that are on each individual server must have the same schema name. For example, if you have a server named SERVER_RED and a server named SERVER_GREEN, then the Q Capture and Q Apply programs that are on SERVER_RED must both have the same schema, and the Q Capture and Q Apply programs that are on SERVER_GREEN must both have the same schema.

In a peer-to-peer configuration, conflict detection and resolution are managed automatically by the Q Apply program in a way that assures data convergence; you do not need to configure anything for conflict detection and resolution. Q Replication maintains additional information to track the version of each data change, and the Q Replication system uses this additional version information to detect and resolve conflicts.

All tables that are replicated in peer-to-peer replication are altered to include two columns that are used only by Q Replication: a timestamp column and a small integer column. Both columns are maintained by triggers. These extra replication columns and triggers are created when you create the Q subscriptions for peer-to-peer replication. The version columns reflect which version of the row is most current. By examining the values of the version columns, the replication programs determine when the row was last updated and by which server.

Important: Make sure that the version columns are not part of a primary key, unique constraint, or unique index to avoid having them selected by the replication administration tools as keys for replication.

Conflict detection and resolution are based on the contents of these version columns. If a conflict is detected, the most recent version of the row is kept, which is the one that contains the most recent timestamp value (after the times are corrected for time zones).
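As an illustration only (not the Q Apply program's actual implementation), the version-based resolution rule can be sketched in Python. The field names and the tie-breaking rule for identical timestamps are assumptions made for this sketch:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical stand-in for the two version columns that Q Replication
# adds to each peer-to-peer table: a timestamp and a small integer
# that identifies the server where the change originated.
@dataclass
class RowVersion:
    updated_at: datetime  # version timestamp (timezone-aware)
    server_id: int        # small integer identifying the originating server

def resolve_conflict(local: RowVersion, incoming: RowVersion) -> RowVersion:
    """Keep the most recent version of the row, comparing timestamps
    after normalizing both to UTC (the time-zone correction)."""
    local_ts = local.updated_at.astimezone(timezone.utc)
    incoming_ts = incoming.updated_at.astimezone(timezone.utc)
    if incoming_ts != local_ts:
        return incoming if incoming_ts > local_ts else local
    # Tie-breaker for equal timestamps: an assumption for this sketch --
    # pick the higher server id so the outcome is deterministic everywhere.
    return incoming if incoming.server_id > local.server_id else local
```

Because every server applies the same deterministic rule, all copies settle on the same winning row, which is what makes convergence possible without manual conflict configuration.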

When you create a Q subscription for peer-to-peer replication with the ASNCLP command-line program or Replication Center, the following conflict handling options are set automatically in the IBMQREP_TARGETS table:

Conflict rule
V (check version): The Q Apply program checks the version column before applying a row.
Conflict action
F (force): The Q Apply program tries to force the change. This requires that the Q Capture program send all columns, so the CHANGED_COLS_ONLY value must be set to N (no) in the IBMQREP_SUBS table.

The V conflict rule and F conflict action are required for peer-to-peer replication. Do not change these settings in the control tables.

The following topics describe the number of replication queue maps and Q subscriptions that are needed for peer-to-peer replication and how peer-to-peer replication handles referential integrity:

Replication objects for peer-to-peer replication with two servers

In a peer-to-peer configuration with two servers, you must have the appropriate number of replication queue maps and Q subscriptions:
Number of replication queue maps
Between each pair of servers that participate in peer-to-peer replication, you need two replication queue maps. For example, if you have two servers named SERVER_RED and SERVER_GREEN, you need two replication queue maps:
  • One to identify the WebSphere® MQ queues that transport data from SERVER_RED to SERVER_GREEN
  • One to identify the WebSphere MQ queues that transport data from SERVER_GREEN to SERVER_RED
Number of Q subscriptions
For every logical table that is replicated in peer-to-peer replication, you need a pair of Q subscriptions between the two servers. For example, if you have two servers named SERVER_RED and SERVER_GREEN, then two Q subscriptions are built for you:
  • One from the source table on SERVER_RED to the target table on SERVER_GREEN
  • One from the source table on SERVER_GREEN to the target table on SERVER_RED

If you have two logical tables, you need four Q subscriptions; for three logical tables, you need six Q subscriptions, and so on.

Figure 1 shows peer-to-peer replication of one logical table between two servers. For one logical table replicated between two servers, you need two Q subscriptions: one to replicate data from peer table A to peer table B, and one to replicate data from peer table B to peer table A. You also need at least two replication queue maps.
Figure 1. Q subscriptions in peer-to-peer replication with two servers. Changes are replicated from each copy of the table to the other copy of that table over WebSphere MQ queues.

Replication objects for peer-to-peer replication with three or more servers

In a peer-to-peer configuration with three or more servers, you must have the appropriate number of replication queue maps and Q subscriptions:
Number of replication queue maps
Between each pair of servers that participate in peer-to-peer replication, you need two replication queue maps. You can calculate the number of replication queue maps that you need by using the equation n*(n-1), where n is the number of servers. For example, if you have three servers named SERVER_RED, SERVER_BLUE, and SERVER_GREEN, you need six replication queue maps:
  • One to identify the WebSphere MQ queues that transport data from SERVER_RED to SERVER_GREEN
  • One to identify the WebSphere MQ queues that transport data from SERVER_GREEN to SERVER_RED
  • One to identify the WebSphere MQ queues that transport data from SERVER_RED to SERVER_BLUE
  • One to identify the WebSphere MQ queues that transport data from SERVER_BLUE to SERVER_RED
  • One to identify the WebSphere MQ queues that transport data from SERVER_BLUE to SERVER_GREEN
  • One to identify the WebSphere MQ queues that transport data from SERVER_GREEN to SERVER_BLUE
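Because one replication queue map is needed for each ordered (source, target) pair of servers, the list above is simply the permutations of the server names taken two at a time. A quick illustrative sketch:

```python
from itertools import permutations

servers = ["SERVER_RED", "SERVER_BLUE", "SERVER_GREEN"]

# One replication queue map per ordered (source, target) pair:
# n*(n-1) queue maps in total for n servers.
queue_maps = [f"{src} -> {dst}" for src, dst in permutations(servers, 2)]

print(len(queue_maps))  # 6 for three servers
```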
Number of Q subscriptions
For every logical table that is replicated in peer-to-peer replication, you need a pair of Q subscriptions between each pair of servers. You can calculate the number of Q subscriptions that are built for you by using the equation n*(n-1), where n is the number of servers. For example, if you have three servers named SERVER_RED, SERVER_GREEN, and SERVER_BLUE, then six Q subscriptions are built for you:
  • One from the source table on SERVER_RED to the target table on SERVER_GREEN
  • One from the source table on SERVER_GREEN to the target table on SERVER_RED
  • One from the source table on SERVER_RED to the target table on SERVER_BLUE
  • One from the source table on SERVER_BLUE to the target table on SERVER_RED
  • One from the source table on SERVER_BLUE to the target table on SERVER_GREEN
  • One from the source table on SERVER_GREEN to the target table on SERVER_BLUE

If you have two logical tables, you need 12 Q subscriptions; for three logical tables, you need 18 Q subscriptions, and so on.
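The counting rules above can be summarized in a small helper. This is a sketch of the arithmetic only, not part of any replication tool:

```python
def replication_object_counts(num_servers: int, num_logical_tables: int):
    """Return (queue_maps, q_subscriptions) for a peer-to-peer
    configuration, using the n*(n-1) rule described above."""
    # Two queue maps between each pair of servers, i.e. one per
    # ordered pair, independent of the number of tables.
    queue_maps = num_servers * (num_servers - 1)
    # One Q subscription per ordered server pair, per logical table.
    q_subscriptions = queue_maps * num_logical_tables
    return queue_maps, q_subscriptions

# Three servers and two logical tables: 6 queue maps, 12 Q subscriptions.
print(replication_object_counts(3, 2))  # (6, 12)
```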

Figure 2 shows peer-to-peer replication of one logical table between three servers. In this case, you need six Q subscriptions: two going between each pair of servers. You also need at least six replication queue maps.
Figure 2. Q subscriptions in peer-to-peer replication with three servers. Changes are replicated from each copy of the table to all other copies of that table over WebSphere MQ queues.

Conflict resolution and referential integrity

In almost all cases, peer-to-peer replication assures that all copies of a replicated table converge to the same state, even when conflicting changes occur at different copies. However, unresolvable conflicts can occur when a conflict stems from duplicate values in unique constraints that are defined on columns other than key columns or from referential constraint violations. When an unresolvable conflict occurs, the conflicting row is recorded in the IBMQREP_EXCEPTIONS table, and the Q Apply program performs the error action that you specified for the Q subscription.

If you want specific, unresolvable conflicts to be tolerated and the Q Apply program not to perform the error action that you specified for the Q subscription, then you can specify acceptable SQLSTATE values by setting the OKSQLSTATES for the Q subscription. Note, however, that even if you specify specific SQL states in the OKSQLSTATES, peer-to-peer replication still does not ensure convergence of all copies of the table for conflicts that result from referential constraint violations or from duplicate values in unique constraints that are defined on non-key columns. You can use the table differencing utility and the table repair utility to find and repair differences that are caused by any unresolvable conflicts that you allow.

Conflicts cannot be resolved when changes occur at different copies of the replicated table that introduce the same value for a unique constraint on columns other than the key columns in the Q subscription. If you specify that SQLSTATE 23505 is allowed by adding the value to the OKSQLSTATES for the Q subscription, then any unresolvable unique key conflicts do not cause the Q Apply program to perform the error action that you specified for the Q subscription.

Conflicts cannot be resolved when changes occur in rows in different copies of tables on which referential constraints are defined. These conflicts might be caused either by delays in the propagation of messages that involve the rows or by true conflicts. An example of a true conflict is when a parent row is deleted in one copy and concurrently a child row is inserted in another copy. When the Q Apply program tries to insert the child row at the copy where the parent row was concurrently deleted, an unresolvable conflict occurs, and the Q Apply program records the child row in the IBMQREP_EXCEPTIONS table with SQLSTATE 23503. When the Q Apply program attempts to delete the parent row at the copy where the child row was concurrently inserted, the delete fails if the referential constraint's delete rule is to restrict deletes (DELETE RESTRICT). The Q Apply program records the parent row in the IBMQREP_EXCEPTIONS table with SQLSTATE 23504 or SQLSTATE 23001.

Another example of a true conflict is when a child row is concurrently inserted and removed by the referential constraint's cascading delete rule (DELETE CASCADE) when a delete of the parent row is applied. In this case, when the cascade delete of the child row is replicated to the other copies of the table, the other copies might not find that child row, and SQLSTATE 02000 is reported. The same SQLSTATE 02000 might also be caused by delays in the propagation of messages that involve the rows. For example, the insert of a child row at Copy 2 might arrive at Copy 3 before the insert of the parent row at Copy 1 arrives at Copy 3.

Referential integrity for partitioned databases

In a multiple partitioned database environment with tables that have referential integrity relationships, ensure that both the parent and child rows are on the same partition. If the parent and child rows are in a referential integrity relationship and are not on the same partition, the target might have referential integrity problems that result in SQLSTATE 23504, 23001, or 23503 (which correspond to SQLCODE 530 and 532).
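To picture the colocation requirement, assume a simplified hash-distribution model (an illustration only, not DB2's actual partitioning function). If both tables are distributed on the column that carries the referential relationship, matching parent and child rows always map to the same partition; the table layouts and partition count here are hypothetical:

```python
NUM_PARTITIONS = 4  # assumed partition count for this sketch

def partition_for(distribution_key: int) -> int:
    # Simplified stand-in for the database's hash-partitioning function.
    return hash(distribution_key) % NUM_PARTITIONS

# Hypothetical parent/child rows in a referential integrity relationship.
parent = {"dept_id": 42, "name": "Sales"}   # parent key is dept_id
child = {"emp_id": 7, "dept_id": 42}        # foreign key references parent

# Distributing both tables on dept_id keeps related rows together,
# so a replicated delete/insert pair never spans partitions.
assert partition_for(parent["dept_id"]) == partition_for(child["dept_id"])
```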

Avoiding deadlocks in the IBMQREP_DELTOMB table

The IBMQREP_DELTOMB table is used by the Q Apply program to record conflicting deletes in peer-to-peer replication. If you experience deadlocks in this control table on any of the servers, increase the value of the deadlock_retries parameter for the Q Apply program. Also, try to reduce delete conflicts in your workload if possible.




Last updated: 2013-10-25