Deployment policy descriptor XML file

To configure a deployment policy, use a deployment policy descriptor XML file.

In the following sections, the elements and attributes of the deployment policy descriptor XML file are defined. See the deploymentPolicy.xsd file for the corresponding deployment policy XML schema.

Figure 1. Elements in the deploymentPolicy.xml file
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
	xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">     
	<objectgridDeployment objectgridName="myGrid">
		<mapSet
				name="mapSetName"
				numberOfPartitions="numberOfPartitions"
				minSyncReplicas="minimumNumber"
				maxSyncReplicas="maximumNumber"
				maxAsyncReplicas="maximumNumber"
				replicaReadEnabled="true|false"
				numInitialContainers="numberOfInitialContainersBeforePlacement"
				autoReplaceLostShards="true|false"
				developmentMode="true|false"
				placementStrategy="FIXED_PARTITIONS|PER_CONTAINER">
				<map ref="backingMapReference" />

					<zoneMetadata>
						<shardMapping
							shard="shardType"
							zoneRuleRef="zoneRuleRefName" />
						<zoneRule 
								name="zoneRuleName"
								exclusivePlacement="true|false" >
								<zone name="ALPHA" />
								<zone name="BETA" />
								<zone name="GAMMA" />
						</zoneRule>
					</zoneMetadata>
	 		</mapSet>
		</objectgridDeployment>
	</deploymentPolicy>

deploymentPolicy element

The deploymentPolicy element is the top-level element of the deployment policy XML file. This element sets up the namespace of the file and the schema location. The schema is defined in the deploymentPolicy.xsd file.
  • Number of occurrences: One
  • Child element: objectgridDeployment

objectgridDeployment element

The objectgridDeployment element is used to reference an ObjectGrid instance from the ObjectGrid XML file. Within the objectgridDeployment element, you can divide your maps into map sets.
  • Number of occurrences: One or more
  • Child element: mapSet
Attributes
objectgridName
Specifies the name of the ObjectGrid instance to deploy. This attribute references an objectGrid element that is defined in the ObjectGrid XML file. (Required)
For example, the objectgridName attribute is set to CompanyGrid in the companyGridDpReplication.xml file and references the CompanyGrid instance that is defined in the companyGrid.xml file. Each deployment policy file must be coupled with the ObjectGrid descriptor XML file for the ObjectGrid instance that it deploys; for more information, read about the ObjectGrid descriptor XML file.
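The following fragment is a minimal sketch of how the two names line up. The file contents are illustrative; only the requirement that the objectgridName value matches the objectGrid name is taken from this reference.
<!-- companyGrid.xml (ObjectGrid descriptor XML file): defines the data grid and its maps -->
<objectGrid name="CompanyGrid">
	<backingMap name="Customer" />
</objectGrid>

<!-- companyGridDpReplication.xml (deployment policy descriptor XML file): deploys that data grid -->
<objectgridDeployment objectgridName="CompanyGrid">
	<mapSet name="mapSet1" numberOfPartitions="10">
		<map ref="Customer" />
	</mapSet>
</objectgridDeployment>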

mapSet element

The mapSet element is used to group maps together. The maps within a mapSet element are partitioned and replicated similarly. Each map must belong to only one mapSet element.
  • Number of occurrences: One or more
  • Child elements:
    • map
    • zoneMetadata
Attributes
name
Specifies the name of the mapSet element. This name must be unique within the objectgridDeployment element. (Required)
numberOfPartitions
Specifies the number of partitions for the mapSet element. The default value is 1. The number must be appropriate for the number of container servers that host the partitions. (Optional)
minSyncReplicas
Specifies the minimum number of synchronous replicas for each partition in the mapSet. The default value is 0. Shards are not placed until the domain can support the minimum number of synchronous replicas. To support the minSyncReplicas value, you need one more container server than the minSyncReplicas value. If the number of synchronous replicas falls below the minSyncReplicas value, write transactions are no longer allowed for that partition. (Optional)
In the following configurations, when the minSyncReplicas value is set to a value greater than 0, transactions are rejected from the data grid because a replica is expected:
  • Only one zone is available in a multiple-zone configuration.
  • Only one host is available and the developmentMode attribute is set to false.
  • If the allowableShardOverrage property is configured, transactions for a particular partition are rejected until the second zone has a number of container servers over the configured percentage.
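For example, the following minimal sketch (the map set and map names are illustrative) sets the minSyncReplicas value to 1. Shards are not placed, and write transactions are not accepted, until at least two container servers are available: one for the primary shard and one for the synchronous replica.
<mapSet name="ordersSet" numberOfPartitions="5"
	minSyncReplicas="1" maxSyncReplicas="1">
	<map ref="Order" />
</mapSet>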
maxSyncReplicas
Specifies the maximum number of synchronous replicas for each partition in the mapSet. The default value is 0. No other synchronous replicas are placed for a partition after a domain reaches this number of synchronous replicas for that specific partition. Adding container servers that can support this ObjectGrid can result in an increased number of synchronous replicas if your maxSyncReplicas value is not already met. (Optional)
maxAsyncReplicas
Specifies the maximum number of asynchronous replicas for each partition in the mapSet. The default value is 0. After the primary and all synchronous replicas are placed for a partition, asynchronous replicas are placed until the maxAsyncReplicas value is met. (Optional)
replicaReadEnabled
If this attribute is set to true, read requests are distributed between a partition primary and its replicas. If the replicaReadEnabled attribute is false, read requests are routed to the primary only. The default value is false. (Optional)
numInitialContainers
Specifies the number of container servers that are required before initial placement occurs for the shards in this mapSet element. The default value is 1. This attribute can help save process and network bandwidth when you are bringing a data grid online from a cold start. (Optional)
You can also use the placementDeferralInterval property and the xscmd -c suspendBalancing command to delay the initial placement of shards on the container servers.
Starting a container server sends an event to the catalog service. The first time that the number of active container servers is equal to the numInitialContainers value for a mapSet element, the catalog service places the shards from the mapSet, if the minSyncReplicas value can also be satisfied. After the numInitialContainers value is met, each container server-started event can trigger a rebalancing of unplaced and previously placed shards. If you know approximately how many container servers you are going to start for this mapSet element, you can set the numInitialContainers value close to that number to avoid the rebalancing after every container server start. Placement occurs only when you reach the numInitialContainers value that is specified in the mapSet element.

To override the numInitialContainers value, for example, when you are performing maintenance on your servers and want shard placement to continue running, you can use the xscmd -c triggerPlacement command. This override is temporary and applies only to the placement run that the command triggers; subsequent placement runs use the numInitialContainers value again.
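The following command lines are a sketch of how these commands might be run. The data grid and map set names are illustrative, the -g and -ms option names are assumptions based on the usual xscmd option style, and the resumeBalancing command is the assumed counterpart to suspendBalancing.
xscmd -c suspendBalancing -g myGrid -ms mapSetName
xscmd -c resumeBalancing -g myGrid -ms mapSetName
xscmd -c triggerPlacement -g myGrid -ms mapSetName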

autoReplaceLostShards
Specifies if lost shards are placed on other container servers. The default value is true. When a container server is stopped or fails, the shards that are running on the container server are lost. A lost primary shard causes one of its replica shards to be promoted to the primary shard for the corresponding partition. Because of this promotion, one of the replicas is lost. If you want lost shards to remain unplaced, set the autoReplaceLostShards attribute to false. This setting does not affect the promotion chain, but only the replacement of the last shard in the chain. (Optional)
developmentMode
With this attribute, you can influence where a shard is placed in relation to its peer shards. The default value is true. When the developmentMode attribute is set to false, no two shards from the same partition are placed on the same computer. When the developmentMode attribute is set to true, shards from the same partition can be placed on the same computer. In either case, no two shards from the same partition are ever placed in the same container server. (Optional)
placementStrategy
Specifies one of two placement strategies. The default is the fixed partition strategy. When you set the placementStrategy attribute to FIXED_PARTITIONS, the number of primary shards that are placed across the available container servers equals the number of partitions that are defined, and each configured replica adds that many more shards. When you set the placementStrategy attribute to PER_CONTAINER, the number of primary shards that are placed on each container server equals the number of partitions that are defined, with an equal number of replicas placed on other container servers. (Optional)
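The following fragment is a sketch of the per-container strategy; the map set and map names are illustrative. With this configuration, each eligible container server hosts five primary shards, and five corresponding asynchronous replicas are placed on other container servers.
<mapSet name="sessionMapSet" numberOfPartitions="5"
	maxAsyncReplicas="1" placementStrategy="PER_CONTAINER">
	<map ref="Session" />
</mapSet>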

map element

Each map in a mapSet element references one of the backingMap elements that is defined in the corresponding ObjectGrid XML file. Every map in a distributed eXtreme Scale environment can belong to only one mapSet element.
  • Number of occurrences: One or more
  • Child element: None
Attributes
ref
Provides a reference to a backingMap element in the ObjectGrid XML file. Each map in a mapSet element must reference a backingMap element from the ObjectGrid XML file. The value that is assigned to the ref attribute must match the name attribute of one of the backingMap elements in the ObjectGrid XML file. (Required)

zoneMetadata element

You can place shards into zones. With zones, you can control how eXtreme Scale places shards on a data grid. Java virtual machines that host an eXtreme Scale server can be tagged with a zone identifier. The deployment file can include one or more zone rules, and each zone rule is associated with one or more shard types. The zoneMetadata element is a container for the zone configuration elements. Within the zoneMetadata element, you define zone rules and associate them with shard types to influence where shards are placed.

For more information, see Zones for replica placement.

  • Number of occurrences: Zero or one
  • Child elements:
    • shardMapping
    • zoneRule

Attributes: None

shardMapping element

The shardMapping element is used to associate a shard type with a zone rule. Placement of the shard is influenced by the mapping to the zone rule.
  • Number of occurrences: Zero or one
  • Child elements: None
Attributes
shard
Specifies the shard type to associate with the zone rule. (Required)
zoneRuleRef
Specifies the name of the zoneRule element to associate with the shard type. (Optional)

zoneRule element

A zone rule specifies the set of zones in which a shard can be placed. Use the zoneRule element to define that set of zones for one or more shard types. With the exclusivePlacement attribute, the zone rule also determines how shards are grouped across the zones.
  • Number of occurrences: One or more
  • Child elements: zone
Attributes
name
Specifies the name of the zone rule. This name must match the zoneRuleRef value that is specified in a shardMapping element. (Required)
exclusivePlacement
Specifies whether placement across the zones in the list is exclusive (true) or inclusive (false). An exclusive setting indicates that each shard type that is mapped to this zone rule is placed in a different zone from the zone list. An inclusive setting indicates that after one shard is placed in a zone from the list, the other shard types that are mapped to this zone rule are also placed in that zone. When you use an exclusive setting with three shard types that are mapped to the same zone rule, for example a primary and two synchronous replicas, at least three zones are required. (Optional)

zone element

The zone element is used to name a zone within a zone rule. Each zone that is named must correspond to a zone name that is used to start servers.
  • Number of occurrences: One or more
  • Child elements: None
Attributes
name
Specifies the name of the zone. (Required)
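The following fragment is a sketch that ties these elements together. The zone rule name is illustrative, and it assumes that the shard attribute accepts the shard type identifiers P (primary) and S (synchronous replica). Because exclusivePlacement is set to true, the primary and its synchronous replica for each partition are placed in different zones from the list.
<zoneMetadata>
	<!-- shard type values P and S are assumed identifiers for primary and synchronous replica -->
	<shardMapping shard="P" zoneRuleRef="stripeZone" />
	<shardMapping shard="S" zoneRuleRef="stripeZone" />
	<zoneRule name="stripeZone" exclusivePlacement="true">
		<zone name="ALPHA" />
		<zone name="BETA" />
		<zone name="GAMMA" />
	</zoneRule>
</zoneMetadata>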

Example

In the following example, the mapSet element is used to configure a deployment policy. The name attribute is set to mapSet1, and the map set is divided into 10 partitions. Each of these partitions must have at least one synchronous replica available and no more than two synchronous replicas. Each partition also has an asynchronous replica if the environment can support it. All synchronous replicas are placed before any asynchronous replicas are placed. Additionally, the catalog service does not attempt to place the shards for the mapSet1 element until the domain can support the minSyncReplicas value. Supporting the minSyncReplicas value requires at least two container servers: one for the primary and one for the synchronous replica.
<?xml version="1.0" encoding="UTF-8"?>
<deploymentPolicy xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://ibm.com/ws/objectgrid/deploymentPolicy ../deploymentPolicy.xsd"
	xmlns="http://ibm.com/ws/objectgrid/deploymentPolicy">

	<objectgridDeployment objectgridName="CompanyGrid">
		<mapSet name="mapSet1" numberOfPartitions="10"
			minSyncReplicas="1" maxSyncReplicas="2"	maxAsyncReplicas="1"
		 	numInitialContainers="10" autoReplaceLostShards="true"
			developmentMode="false" replicaReadEnabled="true">
			<map ref="Customer"/>
			<map ref="Item"/>
			<map ref="OrderLine"/>
			<map ref="Order"/>
		</mapSet>
	</objectgridDeployment>

</deploymentPolicy>
Only 2 container servers are required to satisfy the replication settings. However, the numInitialContainers attribute requires 10 available container servers before the catalog service attempts to place any of the shards in this mapSet element. After the domain has 10 container servers that are able to support the CompanyGrid ObjectGrid, all shards in the mapSet1 element are placed.

When the autoReplaceLostShards attribute is set to true, any shard in this mapSet element that is lost as the result of container server failure is automatically replaced on another container server. This replacement occurs only if a container server is available to host the lost shard. Shards from the same partition cannot be placed on the same server for the mapSet1 element because the developmentMode attribute is set to false. Read-only requests are distributed across the primary shard and its replicas for each partition because the replicaReadEnabled value is true.

The companyGridDpMapSetAttr.xml file uses the ref attribute on the map to reference each of the backingMap elements from the companyGrid.xml file.
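As a sketch of that relationship, the corresponding fragment of the companyGrid.xml file defines one backingMap element for each map reference in the deployment policy; the surrounding elements are assumed to follow the ObjectGrid descriptor XML schema.
<objectGrids>
	<objectGrid name="CompanyGrid">
		<!-- each backingMap name matches a map ref in companyGridDpMapSetAttr.xml -->
		<backingMap name="Customer" />
		<backingMap name="Item" />
		<backingMap name="OrderLine" />
		<backingMap name="Order" />
	</objectGrid>
</objectGrids>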

For more examples, see Zone-preferred routing.