AMQSCLM: Design and Planning for using the sample

Information about how the cluster queue monitoring sample program works, points to consider when setting up a system for the sample program to run on, and modifications that can be made to the sample source code.

Design

The cluster queue monitoring sample program monitors local clustered queues that have consuming applications attached. The program monitors queues specified by the user. The name of the queue might be specific, for example APP.TEST01, or generic. Generic names must be in a format that conforms to PCF (Programmable Command Format). Examples of generic names are APP.TEST* or APP*.
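PCF-style generic matching can be sketched in a few lines of Python. This is an illustrative model only (the function name is hypothetical); it assumes the common PCF convention of a single trailing asterisk matching zero or more characters, with all other names matched literally:

```python
def matches_generic(queue_name, pattern):
    # PCF generic names use a single trailing asterisk that matches
    # zero or more characters; any other name is matched literally.
    if pattern.endswith("*"):
        return queue_name.startswith(pattern[:-1])
    return queue_name == pattern

# APP.TEST* matches APP.TEST01 but not APP.PROD01;
# APP* matches both.
```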

Each queue manager in a cluster that owns an instance of a local queue to be monitored requires an instance of the cluster queue monitoring sample program to be connected to it.

Dynamic message routing

The cluster queue monitoring sample program uses the IPPROCS (open for input process count) value of a queue to determine whether that queue has any consumers. A value greater than 0 indicates that the queue has at least one consuming application attached. Such queues are active. A value of 0 indicates that the queue has no attached consuming programs. Such queues are inactive.
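The active/inactive decision reduces to a single threshold test on IPPROCS. The following sketch models that test; the function name is hypothetical and the real sample inquires the attribute through the command server:

```python
def queue_state(ipprocs):
    # IPPROCS > 0 means at least one consuming application has the
    # queue open for input; the queue is then considered active.
    return "active" if ipprocs > 0 else "inactive"
```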

For a clustered queue with multiple instances in a cluster, WebSphere® MQ uses the cluster workload priority property CLWLPRTY of each queue instance to determine which instances to send messages to. WebSphere MQ sends messages to the available instances of a queue with the highest CLWLPRTY value.

The cluster queue monitoring sample program activates a cluster queue by setting the local CLWLPRTY value to 1. The program deactivates a cluster queue by setting its CLWLPRTY value to 0.

WebSphere MQ clustering technology propagates the updated CLWLPRTY property of a clustered queue to all relevant queue managers in the cluster. For example:
  • A queue manager with a connected application that puts messages to the queue.
  • A queue manager that owns a local queue of the same name in the same cluster.
The propagation is done using the full repository queue managers of the cluster. New messages for the cluster queue are directed to the instances with the highest CLWLPRTY value within the cluster.
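The resulting target selection can be illustrated with a small Python sketch. The function name and tuple layout are hypothetical, and this deliberately ignores the other attributes the real workload management algorithm weighs (rank, channel state, and so on); it shows only the CLWLPRTY rule described above:

```python
def eligible_instances(instances):
    # instances: list of (queue_manager, clwlprty, available) tuples.
    # Messages are workload balanced only across the available
    # instances that share the highest CLWLPRTY value in the cluster.
    available = [(qm, prty) for qm, prty, ok in instances if ok]
    if not available:
        return []
    top = max(prty for _, prty in available)
    return [qm for qm, prty in available if prty == top]

# QM2 has been deactivated (CLWLPRTY 0), so new messages are
# balanced across QM1 and QM3 only.
cluster = [("QM1", 1, True), ("QM2", 0, True), ("QM3", 1, True)]
```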

Queued message transfer

The dynamic modification of the value of CLWLPRTY influences the routing of new messages. This dynamic modification does not affect messages already queued on a queue instance with no attached consumers, or messages that have been through the workload balancing mechanism before a modified CLWLPRTY value was propagated across the cluster. As a result, messages can remain on an inactive queue without being processed by a consuming application. To solve this, the cluster queue monitoring sample program is able to get messages from a local queue with no consumers, and send these messages to remote instances of the same queue where consumers are attached.

The cluster queue monitoring sample program transfers messages from an inactive local queue to one or more active remote queues by getting messages (using MQGET) and putting messages (using MQPUT) to the same clustered queue. This transfer causes WebSphere MQ cluster workload management to select a different target instance, based on a higher CLWLPRTY value than that of the local queue instance. Message persistence and context are preserved during the message transfer. Message order and any binding options are not preserved.
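The transfer loop can be modelled as draining the inactive instance and re-putting each message to the same clustered queue name. This sketch is a simulation only (the function name is hypothetical, and a simple round-robin stands in for the workload management routing decision); it also illustrates why original queue order is not preserved across instances:

```python
def transfer_queued_messages(inactive_queue, active_instances):
    # Drain the inactive local instance (modelling MQGET) and re-put
    # each message to the same clustered queue name (modelling MQPUT).
    # Workload management then routes each message to an instance with
    # a higher CLWLPRTY; round-robin stands in for that choice here.
    routed = {qm: [] for qm in active_instances}
    i = 0
    while inactive_queue:
        msg = inactive_queue.pop(0)
        target = active_instances[i % len(active_instances)]
        routed[target].append(msg)
        i += 1
    return routed
```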

Planning

The cluster queue monitoring sample program modifies the cluster configuration when there is a change in the connectivity of consuming applications. Modifications are transmitted from the queue managers where the cluster queue monitoring sample program is monitoring queues, to the full repository queue managers in the cluster. The full repository queue managers process the configuration updates and resend them to all relevant queue managers in the cluster. Relevant queue managers include those queue managers that own clustered queues of the same name (where an instance of the cluster queue monitoring sample program is running), and any queue manager where an application opened the cluster queue to put messages to it in the last 30 days.

Changes are asynchronously processed across the cluster. Therefore, after each change, different queue managers in the cluster might have different views of the configuration for a period of time.

The cluster queue monitoring sample program is only suitable for systems where consuming applications infrequently attach or detach; for example, long-running consuming applications. When used to monitor systems where consuming applications are only attached for short periods, the latency incurred when distributing the configuration updates might result in queue managers in the cluster having an incorrect view of the queues where consumers are attached. This latency might result in incorrectly routed messages.

When monitoring many queues, even a relatively low rate of change in attached consumers on each queue can, across all the queues, generate significant cluster configuration traffic. Increased cluster configuration traffic can result in excessive load on one or more of the following queue managers:
  • The queue managers where the cluster queue monitoring sample program is running
  • The full repository queue managers
  • A queue manager with a connected application that puts messages to the queue
  • A queue manager that owns a local queue of the same name in the same cluster

Processor usage on the full repository queue managers must be assessed. Additional processor usage is visible as message traffic on the full repository queue SYSTEM.CLUSTER.COMMAND.QUEUE. If messages build up on that queue, it indicates that the full repository queue managers are unable to keep up with the rate of cluster configuration change in the system.

When many queues are being monitored by the cluster queue monitoring sample program, a certain amount of work is performed by the sample program and the queue manager on every monitoring cycle, even when there are no changes to the attached consumers. The -i argument can be increased to reduce the processor usage of the sample program on the local system, by decreasing the frequency of the monitoring cycle.

To help detect excessive activity, the cluster queue monitoring sample program reports average processing time per polling interval, elapsed processing time, and number of configuration changes. The reports are delivered in an information message, CLM0045I, every 30 minutes, or every 600 poll intervals, whichever is sooner.
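The reporting cadence is simply the earlier of the two limits. This sketch (the function name is hypothetical) computes the effective interval between CLM0045I messages for a given poll interval:

```python
def seconds_to_next_report(poll_interval_seconds):
    # CLM0045I is issued every 30 minutes or every 600 poll
    # intervals, whichever comes first.
    return min(30 * 60, 600 * poll_interval_seconds)

# With a 1-second poll interval, 600 polls (600 s) elapse first;
# with a 5-second poll interval, the 30-minute limit wins.
```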

Cluster queue monitoring usage requirements

The cluster queue monitoring sample program has requirements and restrictions. You can modify the sample source code provided to change some of these restrictions in how it can be used. Examples listed in this section detail modifications that can be made.

  • The cluster queue monitoring sample program is designed to be used to monitor queues where consuming applications are either attached, or not attached. If the system has consuming applications that are frequently attaching and detaching, the sample program might generate excessive cluster configuration activity across the entire cluster. This might have an impact on the performance of the queue managers in the cluster.
  • The cluster queue monitoring sample program depends upon the underlying WebSphere MQ system and cluster technology. The number of queues being monitored, the frequency of monitoring and the frequency of the change of the state of each queue affects the load on the overall system. These factors must be considered when selecting the queues to be monitored and the poll interval of the monitoring.
  • An instance of the cluster queue monitoring sample program must be connected to every queue manager in the cluster that owns an instance of a queue to be monitored. It is not necessary to connect the sample program to queue managers in the cluster that do not own the queues.
  • The cluster queue monitoring sample program must be run with suitable authorization to access all of the WebSphere MQ resources required. For example,
    • The queue manager to be connected to
    • The SYSTEM.ADMIN.COMMAND.QUEUE
    • All the queues to be monitored when message transfer is performed
  • The command server must be running for each queue manager with the cluster queue monitoring sample program connected.
  • Each instance of the cluster queue monitoring sample program requires exclusive use of a local (non-clustered) queue on the queue manager that it is connected to. This local queue is used to control the sample program, and to receive reply messages from inquiries made to the command server of the queue manager.
  • All queues to be monitored by a single instance of the cluster queue monitoring sample program must be in the same cluster. If a queue manager has queues in multiple clusters that require monitoring, multiple instances of the sample program are required. Each instance needs a local queue for control and reply messages.
  • All queues to be monitored must be in a single cluster. Queues configured to use a cluster namelist are not monitored.
  • Enabling the transfer of messages from inactive queues is optional. It applies to all queues being monitored by the instance of the cluster queue monitoring sample program. If only a subset of the queues being monitored require message transfer enabled, two instances of the cluster queue monitoring sample program are needed. One sample program has message transfer enabled, and the other has message transfer disabled. Each instance of the sample program needs a local queue for control and reply messages.
  • WebSphere MQ cluster workload balancing will, by default, send messages to instances of clustered queues that reside on the same queue manager that a putting application is connected to. This behavior must be disabled while the local queue is inactive in the following circumstances:
    • Putting applications connect to queue managers that own instances of an inactive queue that are being monitored
    • Queued messages are being transferred from inactive queues to active queues.
    The local workload balancing preference on the queue can be disabled statically, by setting the CLWLUSEQ value to ANY. In this configuration, messages put on local queues are distributed to local and remote queue instances to balance the workload, even when there are local consuming applications. Alternatively, the cluster queue monitoring sample program can be configured to set the CLWLUSEQ value to ANY temporarily, while the queue has no attached consumers. With this configuration, local messages go only to local instances of a queue while that queue is active.
  • The WebSphere MQ system and applications must not use CLWLPRTY for the queues to be monitored, or channels being used. Otherwise, the actions of the cluster queue monitoring sample program on CLWLPRTY queue attributes might have undesired effects.
  • The cluster queue monitoring sample program logs runtime information to a set of report files. A directory to store these reports is required, and the cluster queue monitoring sample program must have authorization to write to it.
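The CLWLUSEQ requirement above can be illustrated with a simplified model of where locally put messages are eligible to go. This is a sketch under stated assumptions (the function name is hypothetical, and the real workload management algorithm also considers priority, rank, and channel availability); it shows only why the default LOCAL setting pins messages to an inactive local instance:

```python
def put_candidates(clwluseq, local_instance_exists, remote_instances):
    # With the default CLWLUSEQ(LOCAL), a message put through a queue
    # manager that owns a local instance always lands on that local
    # instance, even if it has no consumers. With CLWLUSEQ(ANY),
    # local and remote instances are all candidates for balancing.
    if clwluseq == "LOCAL" and local_instance_exists:
        return ["LOCAL"]
    candidates = ["LOCAL"] if local_instance_exists else []
    return candidates + list(remote_instances)
```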