Create a multi-instance queue manager on Linux

This example shows how to set up a multi-instance queue manager on Linux®. The setup is deliberately small, to illustrate the concepts involved. The example is based on Red Hat Enterprise Linux 5; the steps differ on other UNIX platforms.

The example is set up on a 2 GHz notebook computer with 3 GB RAM running Windows XP Service Pack 2. Two VMware virtual machines, Server1 and Server2, run Red Hat Enterprise Linux 5 in 640 MB images. Server1 hosts the network file system (NFS), the queue manager logs and data, and one instance of the highly available queue manager. It is not usual practice for the NFS server also to host one of the queue manager instances; this is done here to simplify the example. Server2 mounts Server1's queue manager logs and data and runs a standby instance. A WebSphere MQ MQI client is installed on an additional 400 MB VMware image that runs Windows XP Service Pack 2 and hosts the sample high availability applications. All the virtual machines are configured as part of a VMware host-only network for security reasons.

Note: You should put only queue manager data on an NFS server. When you mount the NFS file system, use the following three options with the mount command to make the system secure:
noexec
By using this option, you stop binary files from being run on the NFS-mounted file system, which prevents a remote user from running unwanted code on the system.
nosuid
By using this option, you prevent the use of the set-user-identifier and set-group-identifier bits, which prevents a remote user from gaining higher privileges.
nodev
By using this option, you stop character and block special devices from being used or defined, which prevents a remote user from getting out of a chroot jail.
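As an illustration, a mount command and an equivalent /etc/fstab entry that apply these options might look like the following sketch; the host name Server1 and the mount point /MQHA are taken from the example that follows, so adjust them to your own system:

mount -t nfs4 -o hard,intr,noexec,nosuid,nodev Server1:/ /MQHA
Server1:/  /MQHA  nfs4  hard,intr,noexec,nosuid,nodev  0 0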

Example

Table 1. Illustrative multi-instance queue manager configuration on Linux
Server1 and Server2:
Log in as root.
Follow the instructions in Installing IBM® WebSphere® MQ to install WebSphere MQ, create the mqm user and group if they do not exist, and define /var/mqm.

Server1:
Check the uid and gid that /etc/passwd displays for the mqm user; for example,

mqm:x:501:100:MQ User:/var/mqm:/bin/bash

Server2:
Ensure that the uid and gid for mqm in /etc/passwd match the values on Server1. Reboot the server if you have to change the values.
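A quick way to compare and, if necessary, align the values; the uid 501 is the example value shown above, so substitute the actual values from Server1, and use groupmod -g in the same way if the gid of the mqm group differs:
  1. id mqm
  2. usermod -u 501 mqm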

Server1 and Server2:
Complete the task Verifying shared file system behavior to check that the file system supports multi-instance queue managers.

Server1:
Create log and data directories in a common folder, /MQHA, that is to be shared; for example:
  1. mkdir /MQHA
  2. mkdir /MQHA/logs
  3. mkdir /MQHA/qmgrs

Server2:
Create the folder, /MQHA, to mount the shared file system. Keep the path the same as on Server1; for example:
  1. mkdir /MQHA

Server1 and Server2:
Ensure that the MQHA directories are owned by user and group mqm, and that the access permissions are set to rwx for user and group; for example, ls -al displays:
drwxrwxr-x mqm mqm 4096 Nov 27 14:38 MQHA
  1. chown -R mqm:mqm /MQHA
  2. chmod -R ug+rwx /MQHA

Server1:
Create the queue manager:
crtmqm -ld /MQHA/logs -md /MQHA/qmgrs QM1

Server1:
Add the following line to /etc/exports (see note 1):
/MQHA *(rw,sync,no_wdelay,fsid=0)
Start the NFS daemon:
/etc/init.d/nfs start

Server2:
Mount the exported file system /MQHA:
mount -t nfs4 -o hard,intr Server1:/ /MQHA
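With the export mounted, you can recheck the shared file system from both servers. The Verifying shared file system behavior task is based on the amqmfsck command; a minimal sketch, assuming the /MQHA/qmgrs directory from this example:
  1. amqmfsck /MQHA/qmgrs
  2. amqmfsck -c /MQHA/qmgrs
  3. amqmfsck -w /MQHA/qmgrs
The first command checks basic file locking behavior; run the second and third concurrently on Server1 and Server2 to check concurrent writes and the waiting for and releasing of locks.
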
Server1:
Display the queue manager configuration details:
dspmqinf -o command QM1
and copy the result to the clipboard; for example:
addmqinf -s QueueManager
 -v Name=QM1
 -v Directory=QM1
 -v Prefix=/var/mqm
 -v DataPath=/MQHA/qmgrs/QM1

Server2:
Paste and run the queue manager configuration command copied from Server1:
addmqinf -s QueueManager
 -v Name=QM1
 -v Directory=QM1
 -v Prefix=/var/mqm
 -v DataPath=/MQHA/qmgrs/QM1
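To confirm that addmqinf registered the queue manager on Server2, list the queue managers that the installation knows about; QM1 should now appear in the output:
dspmq
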
Server1 and Server2:
Start the queue manager instances, in either order, with the -x parameter:
strmqm -x QM1

The command used to start the queue manager instances must be issued from the same IBM WebSphere MQ installation as the addmqinf command. To start and stop the queue manager from a different installation, you must first set the installation associated with the queue manager by using the setmqm command. For more information, see setmqm.
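After both instances are started, you can check which instance is active and which is standby; a sketch using the dspmq command (the output details vary by release):
dspmq -x -m QM1
Typically one instance reports MODE(Active) and the other MODE(Standby). To switch over manually, end the active instance with endmqm -s QM1 and the standby instance takes over.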

1 The '*' allows any machine that can reach this server to mount /MQHA for read/write. Restrict access on a production machine.
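On a production system you might, for example, export the directory only to the standby server rather than to '*'; a sketch assuming Server2 resolves to the second server:
/MQHA Server2(rw,sync,no_wdelay,fsid=0)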