Configuring the global cache for multi-instance integration nodes

You can configure the global cache to withstand software or hardware failures so that it remains available for as long as possible. Configure a multi-instance integration node to host container servers by using an XML cache policy file.

Before you begin

For more information about the default global cache topology, see Data caching overview.

About this task

You can configure the global cache so that a multi-instance integration node hosts up to 4 container servers. If the active integration node instance fails, the standby integration node instance starts, and the container servers start up successfully within that integration node. As long as there is an active catalog server running elsewhere, the container servers rejoin the global cache. This mechanism does not allow a single-integration node cache to retain cached data on failover to a standby integration node instance.

Consider the following example. Your global cache consists of two integration nodes that host catalog servers and container servers, and a multi-instance integration node. The active instance of the multi-instance integration node hosts up to 4 container servers. If the active instance of the multi-instance integration node fails, the cache remains operational if at least one of the catalog servers is still available. Data is temporarily rebalanced across the remaining container servers in the integration nodes that host the catalog servers. When the standby instance of the multi-instance integration node starts, the container servers rejoin the global cache, and cached data is rebalanced automatically.

A multi-instance integration node cannot host a catalog server. Therefore, you cannot configure an integration server to host a catalog server if that integration server is defined with multiple listener hosts.

A sample XML cache policy file is provided as a starting point for your configuration. The policy_multi_instance.xml file configures three integration nodes in a high availability scenario. Two integration nodes each host a catalog server, and a multi-instance integration node hosts two container servers.

To support the configuration of multi-instance integration nodes, a listenerHost element is provided as an alternative to the listenerHost attribute of the integration node element. You can use the listenerHost element to specify a list of listener hosts. Alternatively, you can set the listenerHost property on the integration server to a comma-separated list of listener hosts.
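The following sketch shows the general shape of such a cache policy for the example topology. The element names, node names, host names, and port numbers here are illustrative assumptions only; the exact names and namespace are defined by the policy.xsd schema and the shipped sample, so use policy_multi_instance.xml as your authoritative template.

<cachePolicy xmlns="...">
  <!-- Optional domain name; if present, it must precede the integration node entries -->
  <domainName>MY_CACHE_DOMAIN</domainName>
  <!-- Two integration nodes, each hosting one catalog server -->
  <integrationNode name="NODE1" listenerHost="host1.example.com">
    <catalogs>1</catalogs>
    <portRange>
      <startPort>2800</startPort>
      <endPort>2819</endPort>
    </portRange>
  </integrationNode>
  <integrationNode name="NODE2" listenerHost="host2.example.com">
    <catalogs>1</catalogs>
    <portRange>
      <startPort>2820</startPort>
      <endPort>2839</endPort>
    </portRange>
  </integrationNode>
  <!-- Multi-instance integration node: one listenerHost element per
       possible host, and no catalog servers -->
  <integrationNode name="MINODE">
    <listenerHost>hostA.example.com</listenerHost>
    <listenerHost>hostB.example.com</listenerHost>
    <catalogs>0</catalogs>
    <portRange>
      <startPort>2840</startPort>
      <endPort>2859</endPort>
    </portRange>
  </integrationNode>
</cachePolicy>

Note that each port range spans 20 ports, and each integration node uses a distinct range, which satisfies the criteria listed in the procedure.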

The following steps describe how to configure the global cache for a multi-instance integration node.

Procedure

  1. Copy the sample cache policy file, policy_multi_instance.xml, from install_dir/server/sample/globalcache to another location on your file system.

    Do not edit the sample cache policy file in its original location; the original file might be replaced when you apply maintenance to IBM Integration Bus. You can place a copy of the same cache policy file on each computer where an integration node is running, or you can provide a single copy of the cache policy file in a shared file system for all integration nodes to access. Regardless of how the file is shared, the cache policy file must be placed at the same file path on each computer or shared system.

  2. Modify the cache policy file for your system, specifying the appropriate integration node names and listener hosts, the port range that the integration node is to use, and how many catalog servers the integration node hosts.
    Optionally, you can also specify a domain name for all catalog servers in the embedded cache. If you do not set a domain name, the integration node creates one.
    Ensure that the cache policy meets the following criteria:
    • You can define 0, 1, or 2 catalog servers for an individual integration node, but at least one catalog server must be defined in the cache policy.
    • A multi-instance integration node cannot host a catalog server.
    • You cannot specify the listenerHost element more than once for an integration node that is configured to host one or more catalog servers.
    • If two integration nodes share a host name, you must set a distinct port range for each integration node.
    • Ensure that the port range for each integration node includes at least 20 ports.
    • The integration node names and listener hosts that are specified in the cache policy must match the values that are defined for the integration node.
    • You must specify either the listenerHost element or the listenerHost attribute for each integration node.
    • You can define only one domain name in the cache policy file.
    • If specified, the domain name must precede the integration node elements in the cache policy file.
    • The cache policy file must be encoded in UTF-8.
    • The cache policy file must contain valid XML. The cache policy file is validated against an XML schema when you set the policy property on the integration node. You can also validate the cache policy file against the copy of the schema (policy.xsd) that is provided in install_dir/server/cachesupport/schema.
    When you use an XML cache policy file, the integration node level portRange property is ignored. The port range that is specified in the XML file overrides the property that is specified for the integration node.
  3. Save the cache policy file.
  4. Set the cache policy to the fully qualified path of the cache policy file.

    The path that you specify must be absolute, not relative. If you use a shared drive on Windows, you must use the \\hostname\directory path syntax to the shared drive, instead of a mapped drive letter. The IBM Integration Bus user ID that is used to access the \\hostname\directory path must have read access to the file system, and the same user ID and password must be valid on each computer.

    You can set the cache policy property by using the mqsichangeproperties command.
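    For example, if the cache policy file is stored at /shared/config/policy_multi_instance.xml and the integration node is named MINODE (both hypothetical values), the command takes the following general form. The cachemanager component and CacheManager object names shown here are assumptions based on the mqsichangeproperties documentation; verify them against the command reference for your product version:

    mqsichangeproperties MINODE -b cachemanager -o CacheManager -n policy -v /shared/config/policy_multi_instance.xml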

  5. Restart each integration node.

Results

When each integration node restarts, it uses the values in the cache policy file to determine its cache properties. If multiple listener hosts are specified, the global cache tries to bind to each one in turn until it finds one that is available on the system. If the global cache does not find a listener host that is available on the system, it uses the first listener host in the list.
Each integration node contains up to 4 container servers. To find out where container servers are placed, use the mqsicacheadmin command to run the showPlacement command, as shown in the following example:
mqsicacheadmin integrationNodeName -c showPlacement
You can also use the mqsicacheadmin command to show cache components in a multi-integration node cache. For example, the listHosts command shows the host names, number of hosts, and number of catalogs in the cache:
mqsicacheadmin integrationNodeName -c listHosts