IBM Integration Bus, Version 9.0.0.8 Operating Systems: AIX, HP-Itanium, Linux, Solaris, Windows, z/OS

Data caching terminology

The global cache is embedded in the broker. You can also connect to an external WebSphere® eXtreme Scale grid.

The embedded cache has a default single-broker topology and can be used, without any configuration, by message flows running in any integration server on the broker. However, you can switch off the default topology by setting the broker cache policy to none and then setting properties explicitly for each integration server.

The following diagram shows the embedded global cache in a broker that contains six integration servers. Four integration servers host components for the global cache, but message flows in all six integration servers can use the cache.

[Diagram: Integration server 1 hosts a catalog server and a container server; integration servers 2, 3, and 4 host container servers only. Integration servers 5 and 6 do not host any cache components, but their message flows can still communicate with the cache.]

The following diagram shows a multi-broker embedded cache, which is configured by the broker policy file policy_two_brokers_ha.xml.

[Diagram: Two brokers participate in an embedded cache. Integration server 1 of broker 1 hosts a catalog server and a container server, as does integration server 1 of broker 2. Integration servers 2, 3, and 4 of broker 1, and integration server 2 of broker 2, each host a container server.]

The following diagram shows how IBM® Integration Bus can connect to both an embedded cache and an external WebSphere eXtreme Scale grid. A configurable service is used to connect to the external grid.

[Diagram: Message flows in all integration servers connect both to the embedded cache and, through a configurable service, to a remote WebSphere eXtreme Scale grid. Integration server 1 hosts a catalog server and a container server; integration servers 2, 3, and 4 each host a container server.]
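For illustration, the connection details of an external grid can be held in a WXSServer configurable service, created with the mqsicreateconfigurableservice command. In this sketch the service name (myExternalGrid), host, and grid name are illustrative, and the property names are assumptions to verify against the WXSServer configurable service documentation:

    # Sketch: define a connection to a remote WebSphere eXtreme Scale
    # grid. "myExternalGrid" is an illustrative service name, and the
    # catalogServiceEndPoints and gridName properties are assumed here;
    # confirm them in the product documentation before use.
    mqsicreateconfigurableservice IB9NODE -c WXSServer -o myExternalGrid -n catalogServiceEndPoints,gridName -v remotehost.example.com:2800,MyGrid

A message flow then refers to the configurable service by name when it gets a map, as shown in the JavaCompute sketch in the Maps section.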

The following components and concepts are involved in the global cache.
Broker-level properties
By default, the global cache is turned off, and the cache policy is set to disabled. To use the global cache, select a broker-level cache policy by using the cachePolicy parameter.

IBM Integration Bus has a default cache policy that creates a default topology of cache components in a single broker. The default topology places catalog servers and container servers in integration servers dynamically so that the cache is available to all integration servers in the broker. Broker-level properties are available to specify a range of ports and a listener host for the default topology. The broker sets a range of ports to use, but you can specify a particular range by using the cachePortRange parameter. You can use the listenerHost parameter to specify the listener host that is used by the cache components. If your computer has more than one host name, setting the listener host ensures that the cache components use the correct one.

If you set the cache policy to none, you must set the integration server properties explicitly. The properties that were set most recently by the broker-level policy are used as a starting point. Therefore, if you set the cache policy to default first, then switch to none, the default topology properties are retained.

You can configure the global cache to span multiple brokers by setting the cache policy to the fully qualified name of an XML policy file. This policy file lists the brokers that share the cache, and for each broker specifies the listener host, port range, and the number of catalog servers hosted. You can use the policy file to set up a single broker that hosts two catalog servers. If one catalog server is stopped, the broker switches to the other catalog server, ensuring that no cache data is lost. You can also use the policy file to configure a multi-instance broker to host more than one container server. If the active instance of the multi-instance broker fails, the global cache switches to the container server in the standby instance.
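The following is a hypothetical sketch of the shape such a policy file might take; the element names are illustrative only, so base a real file on the sample policy files (such as policy_two_brokers_ha.xml) that are supplied with the product:

    <!-- Hypothetical sketch of a two-broker cache policy. The element
         names are illustrative; copy a supplied sample file for the
         real schema. -->
    <cachePolicy>
      <broker name="BROKER1" listenerHost="host1.example.com">
        <catalogs>1</catalogs>
        <portRange><startPort>2800</startPort><endPort>2819</endPort></portRange>
      </broker>
      <broker name="BROKER2" listenerHost="host2.example.com">
        <catalogs>1</catalogs>
        <portRange><startPort>2820</startPort><endPort>2839</endPort></portRange>
      </broker>
    </cachePolicy>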

If you set the cache policy to disabled, all cache components in the broker are disabled. The disabled policy is the default setting.

For more information, see Configuring the embedded global cache and Parameter values for the cachemanager component.

Cache manager
The cache manager is the integration server resource that manages the cache components that are embedded in that integration server.

In the default topology, one integration server in the broker hosts a catalog server, and up to three other integration servers host container servers. All integration servers can communicate with the global cache, regardless of whether they host catalog servers, container servers, or neither. When you switch off the default topology, you configure each integration server by setting parameter values for its cachemanager component.
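For example, you might inspect and then change the cachemanager settings of one integration server by using the mqsireportproperties and mqsichangeproperties commands. This sketch assumes a broker named IB9NODE and an integration server named default; the object name ComIbmCacheManager and the enableCatalogService property should be verified against Parameter values for the cachemanager component:

    # Sketch: report the cachemanager settings of one integration
    # server, then enable a catalog server in it. Verify the object
    # and property names in the product documentation.
    mqsireportproperties IB9NODE -e default -o ComIbmCacheManager -r
    mqsichangeproperties IB9NODE -e default -o ComIbmCacheManager -n enableCatalogService -v true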

For more information, see Configuring the embedded global cache and Parameter values for the cachemanager component.

Container servers
A container server is a component, embedded in an integration server, that holds a subset of the cache data. Between them, all container servers in the global cache host all of the cache data at least once. If more than one container server exists, the default cache policy ensures that all data is replicated at least once. In this way, the global cache can cope with the loss of container servers without losing data.

You can host more than one container server in a multi-instance broker. If the active instance of the multi-instance broker fails, the global cache switches to the container server in the standby instance.

Catalog servers
The catalog server controls the placement of data and monitors the health of container servers. Your global cache must contain at least one catalog server.

To avoid losing cache data when a catalog server is lost, use a policy file to specify more than one catalog server. For example, if you specify two catalog servers for a single broker and one of them fails, the broker switches to the other. Similarly, if the cache is shared by two brokers that each host a catalog server and one catalog server fails, both brokers switch to the remaining one. However, multiple catalog servers can increase the time before the cache is available at startup: if you have more than one catalog server, at least two of them must be started before the cache becomes available. Therefore, when you configure a cache across multiple brokers with multiple catalog servers and one broker must start before the others, configure that broker to host two catalog servers. You cannot host catalog servers in a multi-instance broker.

When you are using multiple catalog servers, you can improve performance by taking the following steps:
  • Provide some integration servers that host only container servers, rather than having every integration server host both a catalog server and a container server.
  • Start and stop integration servers in sequence, rather than using the mqsistart or mqsistop commands to start or stop all integration servers at once. For example, start the integration servers that host catalog servers before you start the integration servers that host only container servers.

Domain name
When you are using a global cache that spans multiple brokers, ensure that all WebSphere eXtreme Scale servers that are clustered in one embedded grid use the same domain name. Only servers with the same domain name can participate in the same grid, and WebSphere eXtreme Scale clients use the domain name to identify and distinguish between embedded grids. If you do not specify a domain name in the integration server properties or in the broker-level policy file, the broker creates a name that is based on the server names of the catalog servers.

By default, each server starts with a domain name that is derived by the broker. In previous versions of IBM Integration Bus, the domain name for all WebSphere eXtreme Scale servers in all embedded caches was an empty string. Servers in different domains cannot collaborate in the same grid. Therefore, for a cache that spans more than one broker, migrate those brokers at the same time.

Grids
WebSphere eXtreme Scale provides a scalable, in-memory data grid. The data grid dynamically caches, partitions, replicates, and manages data across multiple servers. The catalog servers and container servers for the IBM Integration Bus global cache collaborate to act as a WebSphere eXtreme Scale grid. For more information about grids, see WebSphere eXtreme Scale product documentation.

Maps
Data is stored in maps. A map is a data structure that maps keys to values. The global cache has one default map and can contain several other maps.

The cache uses WebSphere eXtreme Scale dynamic maps. Any map name is allowed, except names that begin with SYSTEM.BROKER, which are reserved for use by the broker. The default map is named SYSTEM.BROKER.DEFAULTMAP; you can use or clear this map.
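For illustration, the following sketch shows how a JavaCompute node might read and write map entries by using the MbGlobalMap API. The map name (myCustomerMap) and key are illustrative, and the commented two-argument getGlobalMap call assumes a WXSServer configurable service like the myExternalGrid sketch earlier:

    import com.ibm.broker.javacompute.MbJavaComputeNode;
    import com.ibm.broker.plugin.*;

    public class CacheLookup extends MbJavaComputeNode {
        public void evaluate(MbMessageAssembly assembly) throws MbException {
            // Access the default map (SYSTEM.BROKER.DEFAULTMAP)...
            MbGlobalMap defaultMap = MbGlobalMap.getGlobalMap();

            // ...or a named map in the embedded cache. The map name is
            // illustrative; any name outside SYSTEM.BROKER* is allowed.
            MbGlobalMap customers = MbGlobalMap.getGlobalMap("myCustomerMap");

            // To use an external grid instead, name the configurable
            // service that holds the connection details (an assumption
            // based on the earlier myExternalGrid sketch):
            // MbGlobalMap remote =
            //     MbGlobalMap.getGlobalMap("myCustomerMap", "myExternalGrid");

            // Store a value if it is not already present, then read it
            // back; entries are shared across all integration servers.
            if (!customers.containsKey("customer001")) {
                customers.put("customer001", "Alice");
            }
            String name = (String) customers.get("customer001");

            // Propagate the input message; a real flow would use 'name'
            // to enrich the output message.
            getOutputTerminal("out").propagate(assembly);
        }
    }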

ObjectGrid® file
An ObjectGrid XML file is used to configure the WebSphere eXtreme Scale client. You can use this file to override WebSphere eXtreme Scale properties. For more information about configuring clients, see WebSphere eXtreme Scale product documentation.

You can configure WebSphere eXtreme Scale options by using several tools, including the ObjectGrid file.

You can use resource statistics and activity trace to monitor the status of the global cache and external grid, and to diagnose problems. You can also administer the embedded global cache by using the mqsicacheadmin command.
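For example, assuming a broker named IB9NODE, the following sketch reports the size of each map in the embedded cache and then clears an illustrative map; verify the clearGrid option and -m flag against the mqsicacheadmin documentation:

    # List each map in the grid and the number of entries it contains.
    mqsicacheadmin IB9NODE -c showMapSizes

    # Remove all entries from one map; the map name is illustrative.
    mqsicacheadmin IB9NODE -c clearGrid -m myCustomerMap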

