To migrate a single instance queue manager to
a multi-instance queue manager, you must move the queue manager data
to a shared directory, and reconfigure the queue manager on two other
servers.
Before you begin
You must check the prerequisites for running a multi-instance
queue manager as part of this task. Some environments have been tested
with multi-instance queue managers, and are known to work. They are AIX®, Red Hat Linux®, SUSE Linux Enterprise Server, HP-UX with the file system on Red Hat Linux, IBM® i, and Windows Server. See Testing and support statement for WebSphere® MQ multi-instance queue managers for
the latest list of tested environments. The support statement has
detailed version and prerequisite information for each environment
it lists. Other environments might work; a test tool is provided with WebSphere MQ to assist you
in qualifying other environments.
You must have three servers
to run a multi-instance queue manager. One server has a shared file
system to store the queue manager data and logs. The other servers
run the active and standby instances of the queue manager.
About this task
You have a single-instance queue manager that you want
to convert to a multi-instance queue manager. The queue manager conversion
itself is straightforward, but you must do other tasks to create a
fully automated production environment.
You must check the prerequisites
for a multi-instance queue manager, set up the environment and check
it. You must set up a monitoring and management system to detect if
the multi-instance queue manager has failed and been automatically
restarted. You can then find out what caused the restart, remedy it,
and restart the standby. You must also modify applications, or the
way applications are connected to the queue manager, so that they
can resume processing after a queue manager restart.
Procedure
- Check the operating system that you are going to run the
queue manager on, and the file system on which the queue manager data
and logs are stored. Check that they can run a multi-instance queue
manager.
- Consult Testing and support statement for WebSphere MQ multi-instance queue managers.
See whether the combination of operating system and file system is
tested and capable of running a multi-instance queue manager.
- A shared file system must provide lease-based locking to be adequate
to run multi-instance queue managers. Lease-based locking is a recent
feature of some shared file systems, and in some cases fixes are required.
The support statement provides you with the essential information.
- Run amqmfsck to verify that the file
system is configured correctly.
- File systems are sometimes configured with performance taking precedence
over data integrity, so it is important to check the file system configuration.
A negative report from the amqmfsck tool tells
you the settings are not adequate. A positive result is a good indication
that the file system is adequate, but it is not a definitive
statement that the file system is adequate.
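- For example, assuming the shared file system is mounted at /MQHA on both servers (the path is illustrative), you might run the checks as follows. Run the -c and -w checks on both servers at the same time, because they test concurrent writing and lock handover between machines:
amqmfsck /MQHA/testdir
amqmfsck -c /MQHA/testdir
amqmfsck -w /MQHA/testdir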
- Run the integrity checking application provided in the
technote, Testing a shared file system for compatibility with WebSphere MQ Multi-instance
Queue Managers.
- The checking application tests that the queue manager restarts
correctly.
- Configure a user and group to be able to access a share
on the networked file system from each server that is running a queue
manager instance.
- Set up a directory for the share on the networked
file system with the correct access permissions.
- For example, create a root directory on the share called MQHA that has subdirectories data and logs. Each queue manager creates its own data and log directories under data and logs. Create MQHA with the following properties:
- On Windows, create drive\MQHA on the shared drive. The owner is a member of mqm. mqm must have full-control authority. Create a share for drive\MQHA.
- On UNIX, create /MQHA on the shared drive. /MQHA is owned by the user and group mqm and has the access permissions rwx. A sketch of the commands follows this list.
- If you are using an NFS v4 file server, add the line
/MQHA *(rw,sync,no_wdelay,fsid=0)
to /etc/exports,
and then start the NFS daemon: /etc/init.d/nfs start.
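- For example, on a UNIX file server you might create the directories and set ownership as follows; the paths and server name are illustrative, and the exact NFS commands vary by platform:
mkdir -p /MQHA/data /MQHA/logs
chown -R mqm:mqm /MQHA
chmod -R ug+rwx /MQHA
On each server that runs a queue manager instance, mount the export, for example:
mount -t nfs4 -o hard,intr fileserver:/ /MQHA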
- Copy the queue manager data and the logs to the share.
- Choose one of these methods: copy the files manually, by following
the procedure to back up the queue manager, or, on Windows,
run the hamvmqm command to move the queue
manager data to the share. The hamvmqm command
works for queue managers created before version 7.0.1 and not reconfigured
with a data path, or for queue managers that do not have a DataPath configuration
attribute.
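- For example, on Windows, an illustrative hamvmqm invocation, assuming a queue manager QM1 and a file server share \\fileserver\MQHA, is:
hamvmqm /m QM1 /dd "\\fileserver\MQHA\data" /ld "\\fileserver\MQHA\logs"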
- Update the queue manager configuration information stored
on the current queue manager server.
- If you moved the queue manager data and logs by running the hamvmqm command,
the command has already modified the configuration information correctly
for you.
- If you moved the queue manager data and logs manually, you must
complete the following steps.
- On Windows:
- Modify the Log registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\QMgrName\Log
"LogPath"="share\\logs\\QMgrName\\"
- Modify the Prefix registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\QMgrName
"Prefix"="share\\data"
- On UNIX and Linux:
- Modify the Log: stanza in the queue manager qm.ini file, which is on the share:
LogPath=share/logs/QMgrName
- Modify the QueueManager: stanza in the WebSphere MQ mqs.ini file, which is typically in the /var/mqm directory on UNIX and Linux:
DataPath=share/data/QMgrName
- Where QMgrName is the representation of the queue manager name in the existing registry key on Windows, and the Directory name in the QueueManager: stanza in the mqs.ini file on UNIX and Linux; share is the share where the data and logs are moved to.
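- For example, for a queue manager named QM1 with the share mounted at /MQHA (both names illustrative), the modified stanzas on UNIX and Linux might look like the following. In qm.ini on the share:
Log:
   LogPath=/MQHA/logs/QM1
In mqs.ini on each server:
QueueManager:
   Name=QM1
   Prefix=/var/mqm
   Directory=QM1
   DataPath=/MQHA/data/QM1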
- Add the queue manager configuration information to the
new queue manager server.
- Run the dspmqinf command to display
the queue manager information.
- Run the command on the server that ran the queue manager in version
6.0.
dspmqinf -o command QMgrName
The command output is formatted
ready to create a queue manager configuration.
addmqinf -s QueueManager -v Name=QMgrName -v Directory=QMgrName -v
Prefix=d:\var\mqm -v DataPath=\share\data\QMgrName
- Create a queue manager configuration on the other server.
- Run the addmqinf command copied from the previous
output.
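- As a quick check, you can run the dspmq command with the -x option on the new server to confirm that the queue manager and its instances are now known there:
dspmq -x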
- Add the network address of the new server to the connection
name in client and channel definitions.
- Find all the client, sender, and requester TCP/IP settings
that refer to the server.
- Client settings might be in Client Channel Definition Tables (CCDT),
in environment variables, in Java properties
files, or in client code.
- Cluster channels automatically discover the connection name of
a queue manager from its cluster-receiver channel. As long as the
cluster-receiver channel connection name is blank or omitted, TCP/IP discovers
the IP address of the server hosting the queue manager.
- Modify the connection name for each of these connections
to include the TCP/IP addresses of both servers that are hosting the
multi-instance queue manager.
echo DISPLAY CHANNEL(ENGLAND) CONNAME | runmqsc QM1
5724-H72 (C) Copyright IBM Corp. 1994, 2024. ALL RIGHTS RESERVED.
Starting MQSC for queue manager QM1.
1: DISPLAY CHANNEL(ENGLAND) CONNAME
AMQ8414: Display Channel details.
CHANNEL(ENGLAND) CHLTYPE(SDR)
CONNAME(LONDON)
echo ALTER CHANNEL(ENGLAND) CHLTYPE(SDR) CONNAME('LONDON, BRISTOL') | runmqsc QM1
- Update your monitoring and management procedures to detect
the queue manager restarting.
- Update client applications to be automatically reconnectable,
if appropriate.
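- For example, one way to make client connections reconnectable by default is to set the DefRecon attribute in the CHANNELS stanza of the client configuration file, mqclient.ini:
CHANNELS:
   DefRecon=YES
Applications can instead request reconnection programmatically when they connect; which option fits depends on your application environment.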
- Update the start procedure for your WebSphere MQ applications to be started
as queue manager services.
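- For example, a server application can be defined as a queue manager service in MQSC, so that it starts and stops with whichever instance is active. The service name and start command shown here are illustrative:
DEFINE SERVICE(MYAPP) +
       CONTROL(QMGR) +
       SERVTYPE(SERVER) +
       STARTCMD('/opt/myapp/start.sh') +
       STARTARG('+QMNAME+')
CONTROL(QMGR) starts and stops the service with the queue manager, and the +QMNAME+ insert passes the queue manager name to the start command.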
- Start each instance of the queue manager, permitting it
to be highly available.
- The first instance of the queue manager that is started becomes
the active instance.
- Issue the command twice, once on each server.
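- For example, assuming the queue manager is named QM1 (the name is illustrative), run on each server:
strmqm -x QM1
The -x option permits the instance to run as a standby if another instance is already active.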
What to do next
To get the highest availability out of multi-instance
queue managers, you must design client applications to be reconnectable
and server applications to be restartable; see Application
recovery.