Installing in high availability environments

To set up InfoSphere® MDM to operate in a high availability environment, configure multiple instances on multiple host servers.

About this task

To install InfoSphere MDM in a high availability environment, configure your installation to work on a WebSphere® Application Server cluster.

If you want to place your InfoSphere MDM servers in different geographic locations, they must be installed either in different active clusters (multiple clusters) or in different active WebSphere Application Servers in different cells (multiple instances). In the case of multiple instances, a load balancer might be required to balance requests and handle failover.

Important: Even in high availability configurations, there is a risk that the failure of a cluster or server causes any transactions that are being processed at the time of the failure to fail. If you use WebSphere default messaging, any messages that are in process at that time can be lost.

The remainder of this topic describes how to set up the following high availability scenarios:

  • InfoSphere MDM is installed on multiple clusters that share a single MDM schema. In this scenario, there are four servers set up in the following configuration:
    • Cluster 1 - Node 1 - Server 1
    • Cluster 1 - Node 2 - Server 2
    • Cluster 2 - Node 3 - Server 3
    • Cluster 2 - Node 4 - Server 4
  • InfoSphere MDM is installed on multiple instances that share a single MDM schema. In this scenario, there are two servers using different cells.

Use the following steps to set up the scenarios.

Procedure

  • Scenario one: Install on multiple clusters.
    1. Install InfoSphere MDM on cluster one. Verify that the physical MDM and virtual MDM test cases are successful in the IVT folder.
    2. Shut down cluster one.
    3. Perform a full database backup.
    4. Drop all of the tables from the database schema.
    5. Install InfoSphere MDM on cluster two, pointing the installation to the same clean schema as the cluster one installation. Again, verify that the physical MDM and virtual MDM test cases are successful in the IVT folder.
      Note: The installation application creates two records in the schema.APPINSTANCE table, one for each member of cluster two:
      values(1,1001,'Server3',current_timestamp,'installer')
      values(2,1001,'Server4',current_timestamp,'installer')
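      You can confirm these records with a query similar to the following example. It uses the APPINSTANCE column names that appear elsewhere in this topic (instance_id, instance_name, last_update_dt); verify the names against your schema before running it:
      select instance_id, instance_name, last_update_dt from schema.APPINSTANCE;
      The query returns one row for each member of cluster two ('Server3' and 'Server4').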
    6. Shut down cluster two.
    7. Manually configure the MDM configuration repository to support both clusters:
      • Use the server name as the instance name (such as Server3 and Server4).
      • Update the instance ID for cluster two:
        update schema.APPINSTANCE set instance_id=201, last_update_dt=current_timestamp where instance_name='Server3';
        update schema.APPINSTANCE set instance_id=202, last_update_dt=current_timestamp where instance_name='Server4';
        update schema.CONFIGELEMENT set instance_id=201, value='3', last_update_dt=current_timestamp where instance_id=1 and name='/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier';
        update schema.CONFIGELEMENT set instance_id=202, value='4', last_update_dt=current_timestamp where instance_id=2 and name='/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier';
      • Create instances for each member in cluster one:
        insert into schema.APPINSTANCE values(101,1001,'Server1',current_timestamp,'installer');
        insert into schema.APPINSTANCE values(102,1001,'Server2',current_timestamp,'installer');
        insert into schema.CONFIGELEMENT values(2001,1001,'/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier','1',NULL,101,CURRENT_TIMESTAMP,'INSTALLATION','0',NULL);
        insert into schema.CONFIGELEMENT values(2002,1001,'/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier','2',NULL,102,CURRENT_TIMESTAMP,'INSTALLATION','0',NULL);
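      • Optionally, verify the repository changes before restarting the clusters. Queries similar to the following examples, which use the same column names as the statements above, list the configured instances and their key generation identifiers:
        select instance_id, instance_name from schema.APPINSTANCE;
        select instance_id, value from schema.CONFIGELEMENT where name='/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier';
        The first query should return instance IDs 101, 102, 201, and 202; the second should return the matching instancePKIdentifier values 1, 2, 3, and 4.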
      • Ensure that the instance name for each member is configured in the WebSphere Application Server variables at the server level:
        MDM_INSTANCE_NAME=Server1 for server1 in Cluster1.
        MDM_INSTANCE_NAME=Server2 for server2 in Cluster1.
        MDM_INSTANCE_NAME=Server3 for server3 in Cluster2.
        MDM_INSTANCE_NAME=Server4 for server4 in Cluster2.
    8. If your implementation uses WebSphere default messaging, ensure that each cluster's SIB data store uses a different schema:
      1. Open the WebSphere Application Server administration console.
      2. Navigate to Buses > <bus-name> > Messaging engines > <engine-name> > Data store > Schema name.
      3. Modify the schema name for cluster one to a different name, such as schema1.
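      Depending on your database and permissions, the messaging engine might create its tables in the new schema automatically when the cluster starts. If the schema must exist beforehand, a database administrator can create it manually; for example, on DB2 a statement like the following creates the schema1 schema used in this example:
      create schema schema1;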
    9. Restart cluster one.
    10. Rerun the installation verification script (verify.sh) to verify that both physical MDM and virtual MDM test cases are successful in the IVT folder.
    11. Restart cluster two.
    12. Rerun the installation verification script (verify.sh) to verify that both physical MDM and virtual MDM test cases are successful in the IVT folder.
  • Scenario two: Install on multiple instances.
    1. Install InfoSphere MDM on server one in node one and cell one. Verify that the physical MDM and virtual MDM test cases are successful in the IVT folder.
    2. Shut down server one.
    3. Perform a full database backup.
    4. Drop all of the tables from the database schema.
    5. Install InfoSphere MDM on server two in node two and cell two, pointing the installation to the same clean schema as the server one installation. Again, verify that the physical MDM and virtual MDM test cases are successful in the IVT folder.
      Note: In this scenario, the installation application does not create any records in the schema.APPINSTANCE table.
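      Because no records are created automatically, you can confirm that the table is empty before adding the instances manually, for example:
      select count(*) from schema.APPINSTANCE;
      The query should return 0 at this point.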
    6. Shut down server two.
    7. Manually configure the MDM configuration repository to support both instances:
      • Create an application instance for each server:
        insert into schema.APPINSTANCE values(101,1001,'Server1',current_timestamp,'installer');
        insert into schema.APPINSTANCE values(102,1001,'Server2',current_timestamp,'installer');
        insert into schema.CONFIGELEMENT values(1001,1001,'/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier','1',NULL,101,CURRENT_TIMESTAMP,'INSTALLATION','0',NULL);
        insert into schema.CONFIGELEMENT values(1002,1001,'/IBM/DWLCommonServices/KeyGeneration/instancePKIdentifier','2',NULL,102,CURRENT_TIMESTAMP,'INSTALLATION','0',NULL);
        Note: When inserting information into the APPINSTANCE and CONFIGELEMENT tables, ensure that the DEPLOYMENT_ID value (1001) matches the corresponding primary key in the APPDEPLOYMENT table.
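        To find the correct deployment key, you can list the rows of the deployment table before running the inserts. This topic does not list the APPDEPLOYMENT columns, so a query that returns all columns is a safe way to inspect the table:
        select * from schema.APPDEPLOYMENT;
        Use the primary key value from the result (1001 in this example) as the second value in the APPINSTANCE and CONFIGELEMENT insert statements.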
      • Add an instance name for each server in the WebSphere Application Server variables, at the server level:
        MDM_INSTANCE_NAME=Server1 for server one in node one and cell one.
        MDM_INSTANCE_NAME=Server2 for server two in node two and cell two.
    8. Set a value for the MDM_INSTANCE_NAME variable for each application server where InfoSphere MDM is installed.
      1. In the WebSphere Application Server Integrated Solutions Console (admin console), go to Environment > WebSphere Variables.
      2. Set the value of the MDM_INSTANCE_NAME variable to be the same as the server name.
    9. If your implementation uses WebSphere default messaging, ensure that each instance's SIB data store uses a different schema:
      1. Open the WebSphere Application Server administration console.
      2. Navigate to Buses > <bus-name> > Messaging engines > <engine-name> > Data store > Schema name.
      3. Modify the schema name for server one to a different name, such as server1.
    10. Restart server one.
    11. Rerun the installation verification script (verify.sh) to verify that both physical MDM and virtual MDM test cases are successful in the IVT folder.
    12. Restart server two.
    13. Rerun the installation verification script (verify.sh) to verify that both physical MDM and virtual MDM test cases are successful in the IVT folder.