Overview of migration, coexistence, and interoperability

Migrating to a new version of WebSphere® Application Server requires careful consideration of factors such as your product edition, profile types, server configuration, and application deployment. This overview introduces the concepts, terminology, tools, and strategies to help you successfully migrate the product.

Common migration terminology

The following terms are frequently used to discuss migration:
  • Version or release: An update to the product that includes significant new function.
  • Edition: Within a version, product packaging that includes certain sets of features. For example, Network Deployment.
  • Profile: A set of files that defines the runtime environment for an application server process, such as a deployment manager or an application server. Profiles contain the configuration for how the application server behaves and where applications are deployed.
  • Source: The origin of the data and objects for the migration, such as source profile or source machine.
  • Target: The destination of the data and objects for the migration, such as target profile or target machine.
  • Node: A grouping of managed or unmanaged servers or server clusters. Each node that is managed by a cell can have a unique configuration.
  • Cell: A group that contains a deployment manager that manages one or more nodes or configurations. Nodes in the cell are federated to the deployment manager. The cell-level configuration is common across all nodes.
  • Mixed-cell environment: An environment in which the release of at least one federated node is older than the release of the deployment manager that manages the cell. Nodes cannot be more than three releases older than the deployment manager.

Basic migration concepts

Migration is the process of moving the configuration from an older release to a new release, such that the new configuration behaves as closely as possible to the old configuration. The main unit of migration is the profile, which is migrated in three basic steps (a command-level sketch follows the list):
  1. Take a snapshot of the source profile from the old installation.
  2. Create a compatible target profile in the new installation.
  3. Merge the data from the snapshot into the target profile.
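
For reference, these same three steps map onto the command-line migration tools that ship with the product on distributed platforms; on z/OS, the generated migration jobs drive the equivalent processing. The following Python sketch is illustrative only, and the installation paths, profile names, and template name are hypothetical placeholders for your own environment.

    # Illustrative sketch only: snapshot the source profile, create a target profile, merge.
    # All paths and names below are hypothetical placeholders.
    import subprocess

    OLD_WAS_HOME = "/opt/WebSphere/AppServer_v7"   # hypothetical source installation
    NEW_WAS_HOME = "/opt/WebSphere/AppServer_v9"   # hypothetical target installation
    SNAPSHOT_DIR = "/tmp/was_migration_snapshot"   # hypothetical snapshot location

    # Step 1: take a snapshot of the source profile from the old installation.
    subprocess.check_call([NEW_WAS_HOME + "/bin/WASPreUpgrade.sh",
                           SNAPSHOT_DIR, OLD_WAS_HOME])

    # Step 2: create a compatible target profile in the new installation.
    subprocess.check_call([NEW_WAS_HOME + "/bin/manageprofiles.sh", "-create",
                           "-profileName", "AppSrv01_v9",
                           "-templatePath", NEW_WAS_HOME + "/profileTemplates/default"])

    # Step 3: merge the data from the snapshot into the target profile.
    subprocess.check_call([NEW_WAS_HOME + "/bin/WASPostUpgrade.sh", SNAPSHOT_DIR,
                           "-profileName", "AppSrv01_v9",
                           "-oldProfile", "AppSrv01"])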

Migrating a cell, which contains the deployment manager and federated nodes, requires special attention. Because the deployment manager controls the configuration in the cell, each node must be synchronized with the new deployment manager as it is migrated.

Migration strategies

When you plan your migration, consider the following possible migration strategies:
Standard vs. clone migration
  • Standard: The source configuration is disabled after it is migrated to the target configuration.
  • Clone: The source configuration remains functional after it is migrated to the target configuration.

Mixed-cell environments

Note: Only standard migrations support mixed-cell environments. Clone migrations do not support mixed-cell environments.

A cell can contain nodes from different WebSphere Application Server versions. A WebSphere Application Server Version 9.0 mixed cell can contain nodes that support WebSphere Application Server Version 9.0 and Version 7.0 or later. In a mixed-cell environment, if a member of a cell is older than Version 7.0, the tools cannot migrate the deployment manager. The administrator must either migrate the nodes to at least Version 7.0 or remove them from the cell.
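
Before you migrate the deployment manager, it can help to confirm which versions are present in the cell. The following wsadmin Jython sketch, assuming the ManagedObjectMetadata command getNodeBaseProductVersion is available in your release, prints the base product version of each node; the script file name is hypothetical.

    # Run with: wsadmin.sh -lang jython -f checkNodeVersions.py (hypothetical file name)
    # AdminConfig and AdminTask are supplied by the wsadmin environment.
    nodes = AdminConfig.list('Node').splitlines()
    for node in nodes:
        nodeName = AdminConfig.showAttribute(node, 'name')
        version = AdminTask.getNodeBaseProductVersion('[-nodeName %s]' % nodeName)
        print 'Node %s is at version %s' % (nodeName, version)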

A mixed-cell environment can arise in two ways:
  1. You perform incremental node migration of your existing system.
    1. You migrate the deployment manager to Version 9.0. The deployment manager must be at the level of the highest node version. If you have nodes at the previous version, migrating the deployment manager creates a mixed cell at the highest version of WebSphere Application Server.
    2. You then migrate the nodes, one at a time, to this new version; when all of them are migrated, the cell is entirely at the highest version of WebSphere Application Server.
      Note: This cell cannot be at a higher version than the deployment manager.
  2. You migrate the deployment manager to Version 9.0 and then federate older-version nodes to the new deployment manager. This form of migration is supported only for Version 7.0 or later nodes (see the sketch after this list).
    1. First, you migrate the deployment manager to Version 9.0. The deployment manager must be at the level of the highest node version.
    2. You can then federate Version 7.0 or later nodes to the new deployment manager.
    Avoid trouble: This method of incremental migration leaves your system in a mixed-cell environment with nodes administered by a Version 9.0 deployment manager. Your migration plan should eventually include migrating all nodes to the Version 9.0 level to ensure consistent administration of the nodes.
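
As an illustration of the second approach, an existing Version 7.0 or later node can be federated into the migrated Version 9.0 deployment manager with the addNode command. This Python sketch simply drives that command; the host name, SOAP port, credentials, and profile path are hypothetical placeholders.

    # Illustrative sketch only: federate an existing node into the new deployment manager.
    import subprocess

    NODE_PROFILE_BIN = "/opt/WebSphere/AppServer_v7/profiles/AppSrv01/bin"  # hypothetical
    subprocess.check_call([NODE_PROFILE_BIN + "/addNode.sh",
                           "dmgr.example.com", "8879",   # deployment manager host and SOAP port
                           "-username", "wasadmin",
                           "-password", "secret",
                           "-includeapps"])              # also carry over the node's applications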

Existing functions continue to work in a mixed-cell environment. You can perform normal operations, such as running existing applications, performing management operations (for example, addNode and creating mixed clusters), configuring the system, calling MBeans, and deploying applications. Support for new functions in a mixed-cell environment is decided case by case, based on the function, its priority, and the available resources.

Avoid trouble: When running in a mixed-cell environment, clients might encounter a situation in which the port information about the members of the target cluster has become stale. This situation most commonly occurs when all of the cluster members have dynamic ports and are restarted during a period when no requests are being sent. A client process in this state eventually attempts to route to the node agent to receive the new port data for the cluster members, and then uses that new port data to route back to the members of the cluster.

If any issues occur that prevent the client from communicating with the node agent, or that prevent the new port data from being propagated between the cluster members and the node agent, request failures might occur on the client. In some cases, these failures are temporary. In other cases, you need to restart one or more processes to resolve a failure.

To circumvent the client routing problems that might arise in these cases, you can configure static ports on the cluster members. With static ports, the port data does not change as a client process gets information about the cluster members. Even if the cluster members are restarted, or there are communication or data propagation issues between processes, the port data the client holds is still valid. This circumvention does not necessarily solve the underlying communication or data propagation issues, but removes the symptoms of unexpected or uneven client routing decisions.
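
As one way to apply this circumvention, the wsadmin modifyServerPort command can pin a cluster member's client-facing endpoints to fixed values. The following Jython sketch is an example only; the server name, node name, endpoint choices, and port numbers are placeholders for your own topology.

    # Run with: wsadmin.sh -lang jython -f staticPorts.py (hypothetical file name)
    serverName = 'ClusterMember1'   # hypothetical cluster member
    nodeName   = 'node01'           # hypothetical node

    # Assign fixed ports to the endpoints that clients use for routing.
    AdminTask.modifyServerPort(serverName,
        '[-nodeName %s -endPointName BOOTSTRAP_ADDRESS -port 2810 -modifyShared true]' % nodeName)
    AdminTask.modifyServerPort(serverName,
        '[-nodeName %s -endPointName ORB_LISTENER_ADDRESS -port 2811 -modifyShared true]' % nodeName)

    AdminConfig.save()   # save, then synchronize the nodes for the change to take effect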

If you neither migrate from nor coexist with an earlier version of WebSphere Application Server, you are choosing to ignore the previous installation, and you can run only one version at a time because of conflicting default port assignments. Both versions can run at the same time without conflict if you use non-default ports in one of them.
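
When you plan for coexistence, it helps to review the port assignments of each server so that you can choose non-conflicting values. The following wsadmin Jython sketch, assuming the PortManagement command listServerPorts is available in your release, prints the ports of one server; the server and node names are placeholders.

    # Run with: wsadmin.sh -lang jython -f listPorts.py (hypothetical file name)
    print AdminTask.listServerPorts('server1', '[-nodeName node01]')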

Frequently asked questions

The following questions cover common topics concerning migration for z/OS®.

  • Can I simply point to the new WebSphere Application Server for z/OS Version 9.0 datasets and restart my servers?

    No. WebSphere Application Server for z/OS Version 9.0 requires that you migrate your Version 7.0 or later configuration to the Version 9.0 level.

    Be aware of the following issues when migrating to Version 9.0:
    • Any variables that belong to applications or products other than WebSphere Application Server are not updated during migration; they are brought over to the new environment as is. Therefore, check any other product upgrades before migrating to ensure that these variables are still accurate after migration.
    • Before performing migration from Version 7.0 or later to Version 9.0, verify that you do not have any region constraints (such as IEFUSI limits) in place. These constraints can cause unpredictable Java™ Virtual Machine (JVM) errors.
  • What is the basic migration process?
    1. Install the SMP/E code for WebSphere Application Server for z/OS Version 9.0.
      • The SMP/E code contains the Installation Manager. Installing the SMP/E code gives you entitlement to retrieve the WebSphere repository and build the WebSphere product code on your system.
    2. Use the z/OS Migration Management Tool or the zmmt command to create the migration utilities that you need to perform the migration.
    3. Run the generated migration jobs.

      A new Version 9.0 configuration is created—separate from your existing Version 7.0 or later configuration—that is based on the Version 7.0 or later configuration information.

  • Is migration a node-by-node activity?

    Yes. The process of migrating the configuration involves running the supplied utilities against each node in your configuration.


    Although a stand-alone application server has only one node, that node still needs to be migrated. The steps are essentially the same as the steps for migrating any other node, except that you do not need a deployment manager running. Read Migrating a z/OS stand-alone application server: Checklist for a checklist of activities for migrating a stand-alone application server node.

  • What do the migration utilities do?

    The migration utilities serve the following purposes:

    Table 1. Migration utilities and their purposes

    • BBOWMG1B (stand-alone application server migrations) and BBOWMG1F (federated node migrations)
      Purpose: Enables all servers on the node being migrated to be configured to start in Peer Restart and Recovery (PRR) mode.
      After this job completes, you must start all servers on the node being migrated and wait for them to stop. PRR processing mode resolves any outstanding transactions, clears the transaction logs, and stops the server. This job is not needed for a deployment manager migration, and it is optional for configurations that do not use distributed transaction (XA) connectors.
      This job is required only if you are using XA adapters and you need to migrate the XA logs. Check your resource providers in the Version 7.0 or later administrative console by going to Resources > JDBC providers and checking to see if you have chosen any XA providers such as DB2®, Apache Derby, and so on.

    • BBOWMG2B (stand-alone application server migrations) and BBOWMG2F (federated node migrations)
      Purpose: Disables PRR mode and returns all servers to normal operating state.
      You are not required to start all servers after this job completes. This job is not needed for a deployment manager migration, and it is optional for configurations that do not use XA connectors.
      This job is required only if you are using XA adapters and you need to migrate the XA logs. Check your resource providers in the Version 7.0 or later administrative console by going to Resources > JDBC providers and checking to see if you have chosen any XA providers such as DB2, Apache Derby, and so on.

    • BBOMBHFS or BBOMBZFS (stand-alone application server migrations), BBOMDHFS or BBOMDZFS (deployment manager migrations), and BBOMMHFS or BBOMMZFS (federated node migrations)
      Purpose: Optional: Creates a file system and mount point for the Version 9.0 configuration root, and mounts the file system.
      If you want to use an existing file system to contain the Version 9.0 configuration, you must manually create the mount point specified when you create the migration definition and verify that the file system is mounted rather than run this job. In either case, the configuration file system and mount point must be created and the file system must be mounted before proceeding with the migration.

    • BBOWMG3x, BBOWxPRO, BBOWxPRE, and BBOWxPOS
      • For stand-alone application server migrations: BBOWMG3B, BBOWBPRO, BBOWBPRE, BBOWBPOS
      • For deployment manager migrations: BBOWMG3D, BBOWDPRO, BBOWDPRE, BBOWDPOS
      • For federated node migrations: BBOWMG3F, BBOWMPRO, BBOWMPRE, BBOWMPOS
      • For Administrative Agent migrations: BBOWMG3A, BBOWAPRO, BBOWAPRE, BBOWAPOS
      • For Job Manager migrations: BBOWMG3J, BBOWJPRO, BBOWJPRE, BBOWJPOS
      Purpose: BBOWMG3x runs the complete migration of the node from Version 7.0 or later to Version 9.0. BBOWxPRO just creates the WebSphere Application Server home and default profile. BBOWxPRE just runs the migration pre-upgrade process. BBOWxPOS just runs the migration post-upgrade and finish-up (change file permission) processes.

    • BBOMBCP (stand-alone application server migrations), BBOMDCP (deployment manager migrations), and BBOMMCP (federated node migrations)
      Purpose: Copies the generated Job Control Language (JCL) procedures to start the servers to the specified procedure library.
      If you choose to have your Version 9.0 configuration make use of different JCL start procedure names, this utility updates the new Version 9.0 configuration, substituting your new JCL names for the names that existed in your original Version 7.0 or later configuration.
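
    As an alternative to clicking through Resources > JDBC providers, you can script the check for XA providers that the BBOWMG1x and BBOWMG2x jobs depend on. The following wsadmin Jython sketch, run against the Version 7.0 or later configuration, flags providers whose implementation class looks XA-capable; the file name is hypothetical and the test is a heuristic.

      # Run with: wsadmin.sh -lang jython -f findXAProviders.py (hypothetical file name)
      providers = AdminConfig.list('JDBCProvider').splitlines()
      for provider in providers:
          name = AdminConfig.showAttribute(provider, 'name')
          implClass = AdminConfig.showAttribute(provider, 'implementationClassName')
          # XA-capable providers typically use an XADataSource implementation class.
          if implClass.find('XADataSource') != -1:
              print 'XA provider found: %s (%s)' % (name, implClass)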

  • Where should you run the migration jobs?

    Run the jobs on the same system on which the node being migrated resides.

  • What happens when a node is migrated?

    The migration utilities transform the contents of your present WebSphere Application Server Version 7.0 or later configuration file system and merge them into a new, separate Version 9.0 configuration file system. For standard migrations, the old node is disabled. For clone migrations, the node remains unaffected.

  • Is my existing configuration lost during migration?

    During the migration, the original WebSphere Application Server Version 7.0 or later configuration tree is unaffected. If for some reason the migration fails before completing, your previous configuration still exists. For clone migrations, the old cell remains unaffected. For standard migrations, however, after the deployment manager is migrated, the old nodes might be synchronized with the new deployment manager. Perform a complete backup of all nodes in the cell before you perform a standard migration.
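
    One way to take that backup is the backupConfig command, run once for each node. The following Python sketch is illustrative only; the profile path and archive name are hypothetical placeholders.

      # Illustrative sketch only: back up one node's configuration before a standard migration.
      import subprocess

      PROFILE_BIN = "/opt/WebSphere/AppServer_v7/profiles/AppSrv01/bin"   # hypothetical
      subprocess.check_call([PROFILE_BIN + "/backupConfig.sh",
                             "/tmp/AppSrv01_before_migration.zip",
                             "-nostop"])   # -nostop keeps the servers running during the backup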

  • If my node has multiple application servers, are all of them migrated?

    Yes. The utility detects all servers and migrates all of them, including the node agent. One invocation of the migration utilities against the node migrates all the servers in the node.

  • Must I stop the servers in a node to perform the migration?

    For clone migrations, you do not need to stop the servers, the node agent, or the deployment manager in order to migrate. For standard migrations, yes: in a multinode configuration, the other nodes can still be running, but any node that you want to migrate must have its servers stopped.

    When an application server node that is part of a WebSphere Application Server Network Deployment configuration is being migrated, the previously migrated Version 9.0 deployment manager for that cell must be running. Part of the migration uses the wsadmin scripting function to synchronize the newly migrated application server node with the deployment manager, and the deployment manager must be running to perform that synchronization.
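
    For reference, this kind of synchronization can also be driven manually from wsadmin by invoking the node's NodeSync MBean, as in the following Jython sketch; the node name is hypothetical, and the deployment manager and node agent must be running.

      # Run with: wsadmin.sh -lang jython -f syncNode.py, connected to the deployment manager
      nodeName = 'node01'   # hypothetical node
      syncMBean = AdminControl.completeObjectName('type=NodeSync,node=%s,*' % nodeName)
      if syncMBean:
          print AdminControl.invoke(syncMBean, 'sync')
      else:
          print 'No NodeSync MBean found for node %s (is the node agent running?)' % nodeName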

  • Is it possible to have a cell operating with only some of the nodes migrated and others not?

    Clone migrations do not support mixed-cell environments. If you migrate the deployment manager using the clone option, then all nodes in the cell must also use that option.

    For standard migrations, yes, that is possible. WebSphere Application Server Version 7.0 or later can coexist with Version 9.0 in the same cell and on the same logical partition (LPAR).

  • Can my newly migrated WebSphere Application Server for z/OS Version 9.0 deployment manager still communicate with the Version 7.0 or later nodes?

    For clone migrations, no. The newly migrated deployment manager does not support mixed-cell environments. If you migrate the deployment manager using the clone option, then all nodes in the cell must also use that option.

    For standard migrations, yes. A deployment manager that is migrated to the Version 9.0 level of code can manage a Version 7.0 or later node. Changes made through the administrative console are applied to the node. Remember the following points:
    • When a deployment manager is migrated to Version 9.0, a new Version 9.0 primary configuration is created. The Version 7.0 or later primary configuration still exists, but when the Version 9.0 deployment manager makes changes to the configuration, the changes are made to the new Version 9.0 primary configuration. Therefore, although it is still possible to use the Version 7.0 or later code, any changes made in Version 9.0 are not seen when the older code is restored.
    • A Version 7.0 or later deployment manager has no ability to manage a Version 9.0 node.
  • Is there a sequence to performing a multinode migration?
    Yes. Migrate according to the following sequence:
    1. Always migrate the deployment manager first.
    2. Application server nodes on the same system as the deployment manager or on other Multiple Virtual Storage (MVS™) images can then be migrated.
  • Is it possible to have cells at WebSphere Application Server for z/OS Version 9.0 coexist with other cells at Version 7.0 or later?
    Yes. It is possible to have cells at WebSphere Application Server for z/OS Version 9.0 coexist with other cells at Version 7.0 or later in a sysplex or on any given MVS image. The following restrictions exist:
    • At the completion of a clone migration, both the old and new cells are completely independent and functional and can coexist.
    • A cell can contain servers at Version 7.0 or later levels.
    • A cell can contain z/OS and non-z/OS nodes; however, the deployment manager must be at the highest version level in the cell and any nodes on platforms other than that on which the deployment manager is located must be at Version 7.0 or later.
    • A server on a z/OS node cannot be clustered with a server on a non-z/OS node.
    • An LPAR can contain more than one node from the same cell.
    • Each LPAR has at most one daemon per cell with servers on that LPAR regardless of how many nodes from that cell are configured for that LPAR.
    • For a given LPAR, a daemon must be at or above the maintenance level of all servers on that LPAR that are in the daemon cell, regardless of node.
    • All servers in the same node must be at the same version level.
    • The deployment manager must be at or above the version level of any server in the cell.
    • The controller and its servants must be at the same version level.
    • No two cells can have the same cell short name.
    • Other considerations exist for separate cells, regardless of whether they are at different versions of the code. For example, you must have a separate configuration file system mount point and separate JCL procedures.