Adding a queue to act as a backup

Follow these instructions to provide a backup in Chicago for the inventory system that now runs in New York. The Chicago system is only used when there is a problem with the New York system.

Before you begin

Note: For changes to a cluster to be propagated throughout the cluster, at least one full repository must always be available. Ensure that your repositories are available before starting this task.
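
Before you start, you can confirm that the full repositories are reachable. As an illustrative check (issued from runmqsc on an existing member of the cluster, NEWYORK in this example), display the cluster queue managers; both LONDON and NEWYORK should be reported with QMTYPE(REPOS), and STATUS shows the current state of the channel to each of them:

    DISPLAY CLUSQMGR(*) QMTYPE STATUS
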
Scenario:
  • The INVENTORY cluster has been set up as described in Adding a queue manager to a cluster. It contains three queue managers; LONDON and NEWYORK both hold full repositories, PARIS holds a partial repository. The inventory application runs on the system in New York, connected to the NEWYORK queue manager. The application is driven by the arrival of messages on the INVENTQ queue.
  • A new store is being set up in Chicago to provide a backup for the inventory system that now runs in New York. The Chicago system is used only when there is a problem with the New York system.

About this task

Follow these steps to add a queue to act as a backup.

Procedure

  1. Decide which full repository CHICAGO refers to first.

    Every queue manager in a cluster must refer to one or other of the full repositories to gather information about the cluster, and from that information it builds up its own partial repository. It is of no particular significance which repository you choose for any particular queue manager; in this example, NEWYORK is chosen. Once the new queue manager has joined the cluster, it communicates with both of the full repositories.
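
    After CHICAGO has joined the cluster, you can confirm which queue managers it knows as full repositories. This check is illustrative, issued from runmqsc on CHICAGO; the full repositories are the entries reported with QMTYPE(REPOS):

    DISPLAY CLUSQMGR(*) QMTYPE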

  2. Define the CLUSRCVR channel.

    Every queue manager in a cluster needs to define a cluster-receiver channel on which it can receive messages. On CHICAGO, define:

    DEFINE CHANNEL(INVENTORY.CHICAGO) CHLTYPE(CLUSRCVR) TRPTYPE(TCP) CONNAME(CHICAGO.CHSTORE.COM) CLUSTER(INVENTORY) DESCR('Cluster-receiver channel for CHICAGO')

  3. Define a CLUSSDR channel on queue manager CHICAGO.

    Every queue manager in a cluster needs to define one cluster-sender channel on which it can send messages to its first full repository. In this case we have chosen NEWYORK, so CHICAGO needs the following definition:

    DEFINE CHANNEL(INVENTORY.NEWYORK) CHLTYPE(CLUSSDR) TRPTYPE(TCP) CONNAME(NEWYORK.CHSTORE.COM) CLUSTER(INVENTORY) DESCR('Cluster-sender channel from CHICAGO to repository at NEWYORK')

  4. Alter the existing cluster queue INVENTQ.

    The INVENTQ queue, which is already hosted by the NEWYORK queue manager, is the main instance of the queue. On the NEWYORK queue manager, give it a higher cluster workload priority than the backup instance will have:

    ALTER QLOCAL(INVENTQ) CLWLPRTY(2)

  5. Review the inventory application for message affinities.

    Before proceeding, ensure that the inventory application does not have any dependencies on the sequence of processing of messages.

  6. Install the inventory application on the system in CHICAGO.
  7. Define the backup cluster queue INVENTQ.

    The INVENTQ queue, which is already hosted by the NEWYORK queue manager, is also to be hosted as a backup by CHICAGO. Define it on the CHICAGO queue manager as follows:

    DEFINE QLOCAL(INVENTQ) CLUSTER(INVENTORY) CLWLPRTY(1)

    Now that you have completed all the definitions, if you have not already done so, start the channel initiator on WebSphere® MQ for z/OS®. On all platforms, start a listener program on queue manager CHICAGO. The listener program listens for incoming network requests and starts the cluster-receiver channel when it is needed.
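
    For example, on the distributed platforms you might start a listener for CHICAGO with the runmqlsr control command. The port shown is an assumption; use the port on which the INVENTORY.CHICAGO cluster-receiver channel is expected to accept connections (1414 is the default when CONNAME specifies no port):

    runmqlsr -t tcp -p 1414 -m CHICAGO

    On WebSphere MQ for z/OS, start the channel initiator and a listener with the START CHINIT and START LSTR commands instead.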

Results

Figure 1 shows the cluster set up by this task.

Figure 1. The INVENTORY cluster, with four queue managers
The diagram shows the INVENTORY cluster, with the queue managers NEWYORK, LONDON, CHICAGO, and PARIS connected inside the cluster. NEWYORK and CHICAGO are hosting the inventory application and the INVENTQ queue.

The INVENTQ queue and the inventory application are now hosted on two queue managers in the cluster. The CHICAGO queue manager is a backup. Messages put to INVENTQ are routed to NEWYORK; only if NEWYORK is unavailable are they sent instead to CHICAGO.
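
To verify the configuration, you can display the cluster instances of the queue. This check is illustrative, issued from runmqsc on one of the full repositories (NEWYORK, for example); it should report two instances of INVENTQ, one with CLUSQMGR(NEWYORK) and CLWLPRTY(2) and one with CLUSQMGR(CHICAGO) and CLWLPRTY(1):

    DISPLAY QCLUSTER(INVENTQ) CLUSQMGR CLWLPRTY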

Note:

The availability of a remote queue manager is based on the status of the channel to that queue manager. When a channel starts, its state changes several times, and some of those states are given a lower preference by the cluster workload management algorithm. In practice this means that lower-priority (backup) destinations can be chosen while the channels to higher-priority (primary) destinations are starting.

If you need to ensure that no messages go to a backup destination, do not use CLWLPRTY. Consider using separate queues, or CLWLRANK with a manual switchover from the primary to the backup.
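
As a minimal sketch of the CLWLRANK alternative (used instead of the CLWLPRTY settings in this task; the rank values shown are illustrative), you could give the primary instance of the queue on NEWYORK a higher rank:

    ALTER QLOCAL(INVENTQ) CLWLRANK(5)

and define the backup instance on CHICAGO with a lower rank:

    DEFINE QLOCAL(INVENTQ) CLUSTER(INVENTORY) CLWLRANK(1)

Because rank is considered before channel status, messages continue to be routed to the higher-ranked instance even while the channel to it is unavailable, rather than failing over automatically. To switch over to the backup, raise the rank of the CHICAGO instance:

    ALTER QLOCAL(INVENTQ) CLWLRANK(9)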