Peer-replicated local cache

If multiple processes run independent cache instances, you must keep those instances synchronized. To do so, enable a peer-replicated cache with Java™ Message Service (JMS).

WebSphere® eXtreme Scale includes two plug-ins that automatically propagate transaction changes between peer ObjectGrid instances. The JMSObjectGridEventListener plug-in propagates eXtreme Scale changes by using JMS.
Figure 1. Peer-replicated cache with changes that are propagated with JMS
JMS propagates changes between two ObjectGrid instances that are running in different Java virtual machines. Each ObjectGrid instance is associated with an application.
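For example, a peer-replicated local cache can be set up programmatically by creating a local ObjectGrid instance and registering the JMSObjectGridEventListener plug-in on it. The following Java sketch assumes that the plug-in class resides in the com.ibm.websphere.objectgrid.plugins package and omits the JMS settings (topic and topic connection factory), which are typically supplied as plug-in properties in the ObjectGrid descriptor XML file; the grid and map names are illustrative, so verify the class and method names against the product API documentation.

```java
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridException;
import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
import com.ibm.websphere.objectgrid.plugins.JMSObjectGridEventListener;

public class PeerReplicatedGridFactory {

    // Minimal sketch: build a local ObjectGrid and register the JMS-based
    // peer replication listener. The JMS topic and topic connection factory
    // settings are assumed to be supplied elsewhere, for example as plug-in
    // properties in the ObjectGrid descriptor XML file.
    public static ObjectGrid createPeerReplicatedGrid() throws ObjectGridException {
        ObjectGrid grid = ObjectGridManagerFactory.getObjectGridManager()
                .createObjectGrid("peerReplicatedGrid");   // grid name is illustrative
        grid.defineMap("customerCache");                   // map name is illustrative

        // Committed changes on this grid are published to, and received from,
        // peer ObjectGrid instances through JMS.
        grid.addEventListener(new JMSObjectGridEventListener());

        grid.initialize();
        return grid;
    }
}
```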
If you are running a WebSphere Application Server environment, the TranPropListener plug-in is also available. The TranPropListener plug-in uses the high availability (HA) manager to propagate the changes to each peer cache instance.
Figure 2. Peer-replicated cache with changes that are propagated with the high availability manager
The HA manager propagates changes between two ObjectGrid instances that are running in different Java virtual machines. Each ObjectGrid instance is associated with an application.

Advantages

  • The cached data is more valid because updates that are made by any peer are propagated to every cache instance.
  • With the TranPropListener plug-in, the eXtreme Scale instance can be created, as in the local environment, either programmatically or declaratively with the eXtreme Scale deployment descriptor XML file or with other frameworks such as Spring. Integration with the high availability manager is done automatically.
  • Each BackingMap can be independently tuned for optimal memory utilization and concurrency.
  • BackingMap updates can be grouped into a single unit of work and can be integrated as a last participant in 2-phase transactions such as Java Transaction Architecture (JTA) transactions, as shown in the sketch after this list.
  • Ideal for few-JVM topologies with a reasonably small dataset or for caching frequently accessed data.
  • Changes to the eXtreme Scale are replicated to all peer eXtreme Scale instances. The changes are consistent as long as a durable subscription is used.
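
To illustrate the unit-of-work point above, the following sketch uses the standard Session and ObjectMap APIs to group two map operations into one transaction; with a peer-replicated cache, the committed change set is what the configured plug-in propagates to the other JVMs. The grid and map names are illustrative.

```java
import com.ibm.websphere.objectgrid.ObjectGrid;
import com.ibm.websphere.objectgrid.ObjectGridException;
import com.ibm.websphere.objectgrid.ObjectMap;
import com.ibm.websphere.objectgrid.Session;

public class CustomerCacheUpdater {

    // Group several cache operations into a single unit of work. On commit,
    // the whole change set is what the configured plug-in propagates to the
    // peer ObjectGrid instances. The map name "customerCache" is illustrative.
    public static void putCustomer(ObjectGrid grid, String id, String name)
            throws ObjectGridException {
        Session session = grid.getSession();
        ObjectMap customers = session.getMap("customerCache");

        session.begin();
        if (customers.get(id) == null) {
            customers.insert(id, name);
        } else {
            customers.update(id, name);
        }
        session.commit();
    }
}
```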

Disadvantages

  • Configuration and maintenance of the JMSObjectGridEventListener plug-in can be complex, even though the eXtreme Scale instance itself can be created programmatically or declaratively with the eXtreme Scale deployment descriptor XML file or with other frameworks such as Spring.
  • Not scalable: the amount of memory that is required to cache the database data can overwhelm the JVM.
  • Functions improperly when Java virtual machines are added:
    • Data cannot easily be partitioned.
    • Invalidation is expensive.
    • Each cache must be warmed up independently.

When to use

Use this deployment topology only when the amount of data to be cached is small, can fit into a single JVM, and is relatively stable.