Configuring distributed deployments

Use the deployment policy descriptor XML file and the ObjectGrid descriptor XML file to manage your topology.

The deployment policy is encoded as an XML file that is provided to the eXtreme Scale container server. The XML file specifies the following information:
  • The maps that belong to each map set
  • The number of partitions
  • The number of synchronous and asynchronous replicas
The deployment policy also controls the following placement behaviors:
  • The minimum number of active container servers before placement occurs
  • Automatic replacement of lost shards
  • Placement of each shard from a single partition onto a different machine

Endpoint information is not pre-configured in the dynamic environment. The deployment policy contains no server names or physical topology information. All shards in a data grid are automatically placed into container servers by the catalog service. The catalog service uses the constraints that are defined by the deployment policy to automatically manage shard placement. This automatic shard placement simplifies the configuration of large data grids. You can also add servers to your environment as needed.

Restriction: In a WebSphere® Application Server environment, a core group size of more than 50 members is not supported.

A deployment policy XML file is passed to a container server during startup. A deployment policy is used together with an ObjectGrid XML file. Although a deployment policy is not required to start a container server, it is recommended. The deployment policy must be compatible with the ObjectGrid XML file that it is paired with. For each objectgridDeployment element in the deployment policy, you must include a corresponding objectGrid element in your ObjectGrid XML file. The maps in the objectgridDeployment element must be consistent with the backingMap elements in the ObjectGrid XML file. Each backingMap must be referenced within only one mapSet element.

In the following example, the companyGridDpReplication.xml file is intended to be paired with the corresponding companyGrid.xml file.
companyGridDpReplication.xml
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
	xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

	<objectgridDeployment objectgridName="CompanyGrid">
		<mapSet name="mapSet1" numberOfPartitions="11"
			minSyncReplicas="1" maxSyncReplicas="1"
			maxAsyncReplicas="0" numInitialContainers="4">
			<map ref="Customer" />
			<map ref="Item" />
			<map ref="OrderLine" />
			<map ref="Order" />
		</mapSet>
	</objectgridDeployment>

</deploymentPolicy>
companyGrid.xml
<?xml version="1.0" encoding="UTF-8"?>
<objectGridConfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://ibm.com/ws/objectgrid/config ../objectGrid.xsd"
	xmlns="http://ibm.com/ws/objectgrid/config">

	<objectGrids>
		<objectGrid name="CompanyGrid">
			<backingMap name="Customer" />
			<backingMap name="Item" />
			<backingMap name="OrderLine" />
			<backingMap name="Order" />
		</objectGrid>
	</objectGrids>

</objectGridConfig>

The companyGridDpReplication.xml file has one mapSet element that is divided into 11 partitions. Each partition must have exactly one synchronous replica. The number of synchronous replicas is specified by the minSyncReplicas and maxSyncReplicas attributes. Because the minSyncReplicas attribute is set to 1, each partition in the mapSet element must have at least one synchronous replica available to process write transactions. Because the maxSyncReplicas attribute is also set to 1, no partition can have more than one synchronous replica. The partitions in this mapSet element have no asynchronous replicas.

The numInitialContainers attribute instructs the catalog service to defer placement until four container servers are available to support this ObjectGrid instance. The numInitialContainers attribute is ignored after the specified number of container servers has been reached.

You can also use the placementDeferralInterval server property and the xscmd -c suspendBalancing command to delay the placement of shards on the container servers.
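For example, the following settings delay placement. The one-minute deferral value and the reuse of the CompanyGrid and mapSet1 names from the sample descriptors are illustrative; the property is set in the catalog server properties file, and balancing stays suspended until you resume it:

	placementDeferralInterval=60000

	xscmd -c suspendBalancing -g CompanyGrid -ms mapSet1
	xscmd -c resumeBalancing -g CompanyGrid -ms mapSet1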

Although the companyGridDpReplication.xml file is a basic example, a deployment policy can offer you full control over your environment.

Distributed topology

Distributed coherent caches offer increased performance, availability, and scalability, all of which you can configure.

WebSphere eXtreme Scale automatically balances servers. You can add servers without restarting eXtreme Scale, which makes both simple deployments and large, terabyte-sized deployments with thousands of servers practical.

This deployment topology is flexible. Using the catalog service, you can add and remove servers to better use resources without removing the entire cache. You can use the startOgServer and stopOgServer commands, or, in Version 8.6 and later, the startXsServer and stopXsServer commands, to start and stop container servers. These commands require you to specify the -catalogServiceEndPoints option. All distributed topology clients communicate with the catalog service through the Internet Inter-ORB Protocol (IIOP). All clients use the ObjectGrid interface to communicate with servers.
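For example, assuming a catalog service that listens on CatalogHost:2809 and the descriptor files from the earlier example (the host and server names here are illustrative), you might start and stop a container server as follows:

	startOgServer.sh c0 -catalogServiceEndPoints CatalogHost:2809 -objectgridFile companyGrid.xml -deploymentPolicyFile companyGridDpReplication.xml
	stopOgServer.sh c0 -catalogServiceEndPoints CatalogHost:2809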

The dynamic configuration capability of WebSphere eXtreme Scale makes it easy to add resources to the system. Container servers host the data, and the catalog service allows clients to communicate with the grid of container servers. The catalog service forwards requests, allocates space in host container servers, and manages the health and availability of the overall system. Clients connect to a catalog service, retrieve a description of the container server topology, and then communicate directly with each server as needed. When the server topology changes because new servers are added or existing servers fail, the catalog service automatically reroutes client requests to the appropriate server that hosts the data.
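The following Java sketch shows this flow, assuming the CompanyGrid sample from this topic and a catalog service that listens on CatalogHost:2809 (the host name, key, and value are illustrative):

	import com.ibm.websphere.objectgrid.ClientClusterContext;
	import com.ibm.websphere.objectgrid.ObjectGrid;
	import com.ibm.websphere.objectgrid.ObjectGridManager;
	import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
	import com.ibm.websphere.objectgrid.ObjectMap;
	import com.ibm.websphere.objectgrid.Session;

	public class CompanyGridClient {
		public static void main(String[] args) throws Exception {
			// Connect to the catalog service; the client retrieves the
			// container server topology from it and routes subsequent
			// requests directly to the container servers.
			ObjectGridManager manager = ObjectGridManagerFactory.getObjectGridManager();
			ClientClusterContext ccc = manager.connect("CatalogHost:2809", null, null);

			// Obtain a client-side reference to the CompanyGrid data grid.
			ObjectGrid grid = manager.getObjectGrid(ccc, "CompanyGrid");

			// Read and write map data in a transaction; the catalog service
			// is not on the data path once routing information is cached.
			Session session = grid.getSession();
			ObjectMap customers = session.getMap("Customer");
			session.begin();
			customers.upsert("C001", "Alice Example"); // illustrative key and value
			session.commit();
		}
	}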

A catalog service typically exists in its own grid of Java™ virtual machines. A single catalog service can manage multiple container servers. You can start a container server in a JVM by itself, or load the container server into an arbitrary JVM with other container servers for different data grids. A client can exist in any JVM and communicate with one or more data grids. A client can also exist in the same JVM as a container server.

You can also create a deployment policy programmatically when you are embedding a container server in an existing Java process or application. For more information, see the DeploymentPolicy API documentation.
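A minimal sketch of that approach follows, assuming the descriptor files from the earlier example are available on the local file system (the file paths and the teardown point are illustrative):

	import java.io.File;
	import com.ibm.websphere.objectgrid.deployment.DeploymentPolicy;
	import com.ibm.websphere.objectgrid.deployment.DeploymentPolicyFactory;
	import com.ibm.websphere.objectgrid.server.Container;
	import com.ibm.websphere.objectgrid.server.Server;
	import com.ibm.websphere.objectgrid.server.ServerFactory;

	public class EmbeddedContainer {
		public static void main(String[] args) throws Exception {
			// Obtain the eXtreme Scale server instance for this JVM.
			Server server = ServerFactory.getInstance();

			// Build a DeploymentPolicy from the deployment policy descriptor
			// and its matching ObjectGrid descriptor.
			DeploymentPolicy policy = DeploymentPolicyFactory.createDeploymentPolicy(
					new File("companyGridDpReplication.xml").toURI().toURL(),
					new File("companyGrid.xml").toURI().toURL());

			// Host CompanyGrid shards in this process.
			Container container = server.createContainer(policy);

			// ... run the application; tear down the container at shutdown.
			container.teardown();
		}
	}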