Switching into production mode

When you have completed the migration steps and tested your system, you can switch Tivoli Workload Scheduler for z/OS into production with minimal interruption to your normal processing. An example explains the process in the following steps:

  1. Closing down your production system
  2. Converting VSAM files to Tivoli Workload Scheduler for z/OS format
  3. Starting the new system
  4. Validating the new system

Consider this scenario:

The objective is to stop OPCA and OPCB and migrate them to Tivoli Workload Scheduler for z/OS with as little impact on users as possible. One possible method is described here. Modify it as required to suit the specific needs of your installation.

In the example, you should first make sure that:

  1. You have prepared JCL to:
    • Back up the Tivoli Workload Scheduler for z/OS environment.
    • Allocate the Tivoli Workload Scheduler for z/OS VSAM and non-VSAM files.
  2. All trackers (OPCB, OPCC, and OPCD) are active at the beginning of the migration process. The sequence in which the trackers are started and stopped is the key to a successful migration to a new Tivoli Workload Scheduler for z/OS system.

Closing down your production system

If your trackers have a large CSA area defined, you do not have to worry about losing events. It is assumed here that the area is quite small, so you should deliberately slow down the event-generating rate as much as possible. To do this, perform the following actions:

  1. From the Service Functions dialog on the production system (OPCA), deactivate job submission for jobs running in the host environment and on fault-tolerant workstations, and hold JES job queues, if ETT is used.
  2. After all jobs in the current plan that are currently active in the host environment have completed, stop the two controlled systems, OPCC and OPCD. If there are many jobs still on the JES job queues, or if many new jobs are still arriving from outside processes, hold the job queues on MVS2 and MVS3.
  3. Stop all fault-tolerant workstations in the network using one of the available Tivoli® Workload Scheduler interfaces, or locally on the fault-tolerant workstation using the conman stop command.
  4. Before you proceed with the next steps, wait until all the events in the EQQTWSIN and EQQTWSOU data sets are processed. To verify this, use the sample utility EQQCHKEV, provided in the sample library.

    The EQQCHKEV utility checks the data set structure of EQQTWSIN and EQQTWSOU, which are the input and output end-to-end event data sets from the version you are migrating. The utility provides an informational message indicating the number of events that still need to be processed. When the data sets contain zero unprocessed events, you can proceed with the migration. The utility also checks the integrity of the data sets and issues an appropriate error message in case of corruption or inconsistency.

  5. From the Daily Plan dialog on the production system (OPCA), create a replan or plan-extend batch job. Change the job card to contain TYPRUN=HOLD, and submit the job. Save the JCL in a data set in case you have to resubmit it to correct an error.
  6. If you specified CHECKSUBSYS(YES) on the BATCHOPT statement used by the batch job, change it to CHECKSUBSYS(NO). In the BATCHOPT statement used by the batch job, comment out the TPLGYPRM keyword if it is used.
  7. Using the Query Current Plan dialog on the production system (OPCA), check which JS file is currently in use on this system.
  8. Stop the OPCA and OPCB systems. Release the daily plan job from hold, and make sure that it runs successfully.
  9. When the daily plan job has finished, verify that it ran successfully. This is indicated by a return code of 0 or 4. If required, correct any problems and rerun the job until a new current plan (NCP) data set has been created. If you have fault-tolerant workstations and you have commented out the TPLGYPRM keyword in the BATCHOPT statement, the warning message EQQ3041W is displayed in the daily plan job output for each fault-tolerant workstation.
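
The BATCHOPT changes described in steps 5 and 6 amount to two small edits in the batch parameter member. The following is a sketch only: TPLGPARM is a hypothetical member name, and the BATCHOPT statement in your installation will carry additional keywords.

    BATCHOPT  CHECKSUBSYS(NO)        /* changed from CHECKSUBSYS(YES)     */
    /*        TPLGYPRM(TPLGPARM)        topology keyword commented out    */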

Converting VSAM files to the new system format

The next step is to create VSAM files for the new system. You can do this as follows:

  1. Create a backup copy of the Tivoli Workload Scheduler for z/OS VSAM files.
  2. Allocate VSAM clusters for Tivoli Workload Scheduler for z/OS using the EQQPCS01 job.
  3. Review the EQQICNVS sample job. Ensure that input and output data set names are correctly specified. Make sure to select the current JS file. When defining input and output files for the CP file conversion, use the NCP file, as a new current plan has just been created.
  4. Run EQQICNVS to convert the VSAM data to Tivoli Workload Scheduler for z/OS format.
  5. Verify that the conversion program ran successfully. If there are any problems converting the VSAM files, you should abandon the migration.
  6. Back up the Tivoli Workload Scheduler for z/OS non-VSAM data sets.
  7. Allocate Tivoli Workload Scheduler for z/OS non-VSAM data sets using the EQQPCS01 and EQQPCS02 jobs.
  8. If you have stopped the migration, start OPCA, OPCB, OPCC, and OPCD. Release any held queues and restart any drained initiators.
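
The backup in step 1 can be taken with IDCAMS REPRO, one statement per VSAM cluster. A minimal sketch follows, in which the data set names are examples only (shown here for the application-description and workstation files); the output data sets must already be allocated with attributes matching the source clusters:

    //BACKUP   EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      REPRO INDATASET(OPCAHLQS.AD) OUTDATASET(BACKUP.OPCAHLQS.AD)
      REPRO INDATASET(OPCAHLQS.WS) OUTDATASET(BACKUP.OPCAHLQS.WS)
    /*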

Starting the new system

In the following procedure, it is assumed that VSAM file conversion was successful. Ensure that the data sets referred to in "Empty data sets" are empty. To start the new system, perform the following actions:

  1. Modify the JCL procedure for OPCA to include the new DD names and data sets added in Tivoli Workload Scheduler for z/OS. Use ISPF browse to ensure that all job-tracking logs (EQQJTnn), the job-tracking archive (EQQJTARC), and the checkpoint (EQQCKPT) are empty data sets. If you use dual job-tracking logs (EQQDLnn), they should also be empty data sets.
  2. Modify initialization parameters for OPCA. The CKPT data set is not yet initialized the first time you start OPCA after migration, and hence you must specify CURRPLAN(NEW) in JTOPTS. Specify BUILDSSX(REBUILD) and SSCMNAME(EQQSSCMJ,TEMPORARY) on the OPCOPTS initialization statement. Specify the PIFHD keyword on the INTFOPTS initialization statement.

    As soon as OPCA starts, change back to CURRPLAN(CURRENT), to prevent OPCA from recovering from the new current plan each time it starts.

    Note:
    You might find it useful to specify JOBSUBMIT(NO) and FTWJSUB(NO) in the JTOPTS initialization statement so that work is not submitted when you start OPCA. When you have checked that OPCA has started without errors, you can activate job submission using the Service Functions dialog.

    To initialize the checkpoint data set, you must specify OPCHOST(YES) in OPCOPTS. This is so that, when the scheduler starts, the NMM task initializes the checkpoint data set with FMID and LEVEL corresponding to SSX. The OPCHOST value can then be changed. For example, you can change the value to OPCHOST(PLEX) when the subsystem is used as the controlling system in XCF.

  3. Run the EQQPCS05 job to create the work directory. Optionally, back up any important data that you have in the old work directory, for example, the LOCALOPTS file, to merge it later in the new work directory.
  4. Start OPCA. Verify that no errors occurred during initialization. If required, correct any errors and restart OPCA.
  5. Modify initialization parameters for OPCB. Specify BUILDSSX(REBUILD) and SSCMNAME(EQQSSCMJ,TEMPORARY) on the OPCOPTS initialization statement. Specify the PIFHD keyword on the INTFOPTS initialization statement.
  6. Start OPCB and OPCS.
  7. Restart drained initiators on the MVS1 system.
  8. Enter the Service Functions dialog on OPCA, and activate job submission (if it is not already active).
  9. Start the OPCC and OPCD systems. Release held queues, and restart drained initiators if required.
  10. Change JTOPTS CURRPLAN(NEW) to CURRPLAN(CURRENT).
  11. Uncomment the TPLGYPRM keyword in the BATCHOPT statement if you commented it out. Submit a daily plan replan or extend as soon as possible after migration. In addition to the current plan, this will also generate a new symphony file. Until a new current plan is created, any references to special resources will cause the resource object to be copied from the EQQRDDS to the current-plan-extension data space. This processing has some performance overheads.

    The new-current-plan-extension data set (EQQNCXDS) is built during daily planning to contain all special resources referenced by operations in the new current plan.
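
The parameter changes described in steps 1, 2, and 10 can be summarized in a sketch of the controller parameter member. The PIFHD value shown is an example only; use the high-date value that is valid for your installation:

    OPCOPTS  OPCHOST(YES)                 /* NMM initializes the checkpoint */
             BUILDSSX(REBUILD)
             SSCMNAME(EQQSSCMJ,TEMPORARY)
    JTOPTS   CURRPLAN(NEW)                /* first start only; then CURRENT */
             JOBSUBMIT(NO)                /* optional: hold job submission  */
             FTWJSUB(NO)
    INTFOPTS PIFHD(711231)                /* example high-date value        */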

Before the next IPL of the system, remove the BUILDSSX and SSCMNAME keywords from OPCA and OPCB initialization statements if the subsystem name table (IEFSSNnn) in SYS1.PARMLIB has been updated to correctly specify EQQINITJ and EQQSSCMJ.
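
In keyword format, the updated IEFSSNnn entries might look like the following sketch, in which the maxecsa value of 100 is an example only and the final parameter letter selects the EQQSSCMJ module. Verify the exact INITPARM syntax against the installation documentation for your level of the product:

    SUBSYS SUBNAME(OPCA) INITRTN(EQQINITJ) INITPARM('100,J')   /* controller */
    SUBSYS SUBNAME(OPCB) INITRTN(EQQINITJ) INITPARM('100,J')   /* tracker    */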

Validating the new system

Now you must validate that your new system works as expected. To do this, perform the following steps:

  1. From the Ready List dialog, review the status of active operations.
  2. Check that the operations that are becoming ready on the workstations representing the three z/OS systems are successfully submitted to the intended system. Also check that the ending status is correctly reflected in the ready lists.
  3. Verify that the current plan and the long-term plan can be extended successfully.
  4. Verify that other Tivoli Workload Scheduler for z/OS-related processes (for example, the dialogs, batch programs, and PIF-based programs) work as expected.

Migration steps for a system in a heavy workload environment

If your production environment has such a heavy workload that you cannot suspend job processing and phase out production, you can use the procedure described in the following steps as an alternative to the standard process described in Switching into production mode. The standard process is recommended in all other cases.

The scenario used involves the same systems as in the standard process.

To migrate your production system, perform the following steps:

  1. Close down your production system
  2. Convert VSAM files to the new system format
  3. Initialize the new system
  4. Produce a checkpoint data set containing data from the old production system
  5. Start the new system
  6. Validate the new system
Close down your production system

  1. From the Service Functions dialog on the production system (OPCA), deactivate job submission for jobs running on fault-tolerant workstations.
  2. Stop all fault-tolerant workstations in the network using one of the available Tivoli Workload Scheduler interfaces, or locally on the fault-tolerant workstation using the conman stop command.
  3. Before you proceed with the next steps, wait until all the events are processed in the EQQTWSIN and EQQTWSOU data sets. To verify this, use the sample utility EQQCHKEV, provided in the sample library.

    The EQQCHKEV utility checks the data set structure of EQQTWSIN and EQQTWSOU which are the input and output end-to-end event data sets from the version you are migrating. The utility provides an informational message indicating the number of events still to be processed. When the data set contains zero unprocessed events you can proceed with the migration. The utility also checks the integrity of the data sets and issues an appropriate error message in case of corruption or inconsistency.

  4. From the Daily Plan dialog on the production system (OPCA), create a replan or plan-extend batch job. Change the job card to contain TYPRUN=HOLD, and submit the job. Save the JCL in a data set in case you have to resubmit it to correct an error.
  5. If you specified CHECKSUBSYS(YES) on the BATCHOPT statement used by the batch job, change it to CHECKSUBSYS(NO). In the BATCHOPT statement used by the batch job, comment out the TPLGYPRM keyword if it is used.
  6. Using the Query Current Plan dialog on the production system (OPCA), check which JS file is currently in use on this system.
  7. Stop OPCA and OPCS. Release the daily plan job from hold, and make sure that it runs successfully.
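
The EQQCHKEV check in step 3 can be run with JCL along the following lines. This is a sketch only: start from the EQQCHKEV member in the sample library, and substitute your own load library, message library, and event data set names (the names shown here are placeholders):

    //CHKEV    EXEC PGM=EQQCHKEV
    //STEPLIB  DD DISP=SHR,DSN=OPCAHLQS.SEQQLMD0
    //EQQMLIB  DD DISP=SHR,DSN=OPCAHLQS.SEQQMSG0
    //EQQMLOG  DD SYSOUT=*
    //EQQTWSIN DD DISP=SHR,DSN=OPCAHLQS.TWSIN
    //EQQTWSOU DD DISP=SHR,DSN=OPCAHLQS.TWSOU
    //SYSPRINT DD SYSOUT=*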
Convert VSAM files to the new system format

  1. Create a backup copy of the Tivoli Workload Scheduler for z/OS VSAM files.
  2. Allocate VSAM clusters for Tivoli Workload Scheduler for z/OS using the EQQPCS01 job.
  3. Review the EQQICNVS sample job. Ensure that input and output data set names are correctly specified. Make sure you select the current JS file. When defining input and output files for the CP file conversion, use the NCP file, because a new current plan has just been created.
  4. Run EQQICNVS to convert the VSAM data to Tivoli Workload Scheduler for z/OS format.
  5. Verify that the conversion program ran successfully. If there are any problems converting the VSAM files, you should abandon the migration.
  6. Back up the Tivoli Workload Scheduler for z/OS non-VSAM data sets.
  7. Allocate Tivoli Workload Scheduler for z/OS non-VSAM data sets using the EQQPCS01 and EQQPCS02 jobs.
  8. If you have stopped the migration, start OPCA, OPCB, and OPCC. Release any held queues and restart any drained initiators.
Initialize the new system

Before you perform the steps described in this section, ensure that the VSAM file conversion described in the preceding section was successful.

  1. Ensure that the data sets referred to in "Empty data sets" are empty. Use ISPF browse to ensure that all job-tracking logs (EQQJTnn), the job-tracking archive (EQQJTARC), and the checkpoint (EQQCKPT) data sets are empty. If you use dual job-tracking logs (EQQDLnn), they should also be empty.
  2. Modify the JCL procedure for OPCA to include the new DD names and data sets added in IBM® Tivoli Workload Scheduler for z/OS.
  3. Modify initialization parameters for OPCA. The CKPT data set is not yet initialized the first time you start OPCA after migration, so you must specify CURRPLAN(NEW) in JTOPTS. Specify BUILDSSX(REBUILD) and SSCMNAME(EQQSSCMJ,TEMPORARY) in the OPCOPTS initialization statement. Specify the PIFHD keyword in the INTFOPTS initialization statement. As soon as OPCA has started, change back to CURRPLAN(CURRENT), to prevent OPCA from recovering from the new current plan each time it starts.
    Note:
    You might find it useful to specify JOBSUBMIT(NO) and FTWJSUB(NO) in the JTOPTS initialization statement so that work is not submitted when you start OPCA. When you have checked that OPCA has started without errors, you can activate job submission using the Service Functions dialog.

    To initialize the checkpoint data set, specify OPCHOST(YES) in OPCOPTS. This is so that, when the scheduler starts, the NMM task initializes the checkpoint data set with FMID and LEVEL corresponding to SSX. The OPCHOST value can then be changed. For example, you can change the value to OPCHOST(PLEX) when the subsystem is used as the controlling system in XCF.

  4. Run the EQQPCS05 job to create the work directory. Optionally, back up any important data that you have in the old work directory, for example, the LOCALOPTS file, to merge it later in the new work directory.
  5. Start OPCA. Verify that no errors occurred during initialization. If required, correct any errors and restart OPCA.
  6. Stop OPCA.
Produce a checkpoint data set containing data from the old production system

  1. Merge OLD.CKPT, from the version you are migrating, and the newly allocated CKPT, created in the previous section, into CKPT.NEW using a job such as the following, which you customize for your environment:
    //COPY     EXEC PGM=IDCAMS,REGION=512K
    //CKPTOLD  DD DSN=OPCAHLQS.CKPT.OLD,DISP=SHR
    //CKPT     DD DSN=OPCAHLQS.CKPT,DISP=SHR
    //CKPTNEW  DD DSN=OPCAHLQS.CKPT.NEW,DISP=MOD
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
        REPRO IFILE(CKPT) OFILE(CKPTNEW) COUNT(1)
        REPRO IFILE(CKPTOLD) OFILE(CKPTNEW) SKIP(1)
    /*
  2. Back up the current CKPT and then rename CKPT.NEW to the current CKPT.
Start the new system
  1. Change JTOPTS CURRPLAN(NEW) to CURRPLAN(CURRENT).
  2. Start the controller OPCA. The merged checkpoint data set will enable it to continue reading the event records.
  3. Start all the trackers without BUILDSSX. Ensure that the load modules invoked are still those for the version from which you are migrating.
  4. Stop the trackers after the events in CSA are processed.
  5. Modify initialization parameters for OPCB. Specify BUILDSSX(REBUILD) and SSCMNAME(EQQSSCMJ,TEMPORARY) on the OPCOPTS initialization statement. Specify the PIFHD keyword on the INTFOPTS initialization statement.
  6. Start OPCB and OPCS.
  7. Restart drained initiators on the MVS1 system.
  8. Enter the Service Functions dialog on OPCA, and activate job submission (if it is not already active).
  9. Start the OPCC and OPCD systems. Release held queues, and restart drained initiators if required.
  10. Submit a daily plan replan or extend as soon as possible after migration. Until a new current plan is created, any references to special resources will cause the resource object to be copied from the EQQRDDS to the current-plan-extension data space. This processing has some performance overheads.

    The new-current-plan-extension data set (EQQNCXDS) is built during daily planning to contain all special resources referenced by operations in the new current plan.

Validate the new system
  1. From the Ready List dialog, review the status of active operations.
  2. Check that the operations that are becoming ready on the workstations representing the three z/OS systems are successfully submitted to the intended system. Also check that the ending status is correctly reflected in the ready lists.
  3. Verify that the current plan and the long-term plan can be extended successfully.
  4. Verify that other Tivoli Workload Scheduler for z/OS-related processes (for example, the dialogs, batch programs, and PIF-based programs) work as expected.