Method of use

A sample for controlling VM workload from Tivoli Workload Scheduler for z/OS is delivered in the sample library under member name EQQCVM. The sample member includes the OPCWATCH and OPCSTAT EXECs, which are described in the steps that follow.

To use Tivoli Workload Scheduler for z/OS to drive VM operations, follow these steps:

  1. Define a new Tivoli Workload Scheduler for z/OS workstation, for example one named VM, as a general workstation with the automatic reporting attribute. The advantage of a separate workstation is that ready lists and reports can be produced for VM operations alone, based on the workstation name.
  2. Define an application VMJJJJ with these operations:
       CPU1 010  JOBNAME RJJJJ      'SEND START ORDER TO VM'
       VM   020  JOBNAME JJJJ       'TRACKS REAL EXECUTION OF EXEC'

    The run cycle and other characteristics should be the same as for normal z/OS applications.

    The figure below shows member RJJJJ in the Tivoli Workload Scheduler for z/OS JCL library. This is the JCL that the CPU1 operation submits.

    Figure 1. Member RJJJJ in the Tivoli Workload Scheduler for z/OS JCL library
      
         //RJJJJ    JOB  Job statement parameters according to
         //              your installation standards
         /*JOBPARM CARDS=100
         /*ROUTE PUNCH VMNODE.VMUSER
         //******************************************************************
         //*
         //*       A z/OS job to signal VM when the controlled job
         //*       is ready to be started on the VM system.
         //*
         //******************************************************************
         //B        EXEC PGM=IEBGENER
         //SYSPRINT DD SYSOUT=Q
         //SYSUT1   DD *
         JJJJ
         /*
         //SYSUT2   DD SYSOUT=B
         //SYSIN    DD DUMMY

    The first record in the //SYSUT1 data stream contains the name of the EXEC to be executed, followed by any parameters that should be passed to the EXEC.
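
    For example, a record like this (the parameter values are illustrative) would run EXEC JJJJ with two parameters:

         JJJJ PARM1 PARM2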

  3. Set up a VM AUTOLOG user that waits for the arrival of reader files. You can use a wait EXEC to drive the VM AUTOLOG user, such as the OPCWATCH EXEC supplied in member EQQCVM in the Tivoli Workload Scheduler for z/OS sample library; a sketch of such an EXEC follows this list. When reader files are sent to this user, it runs the EXEC named in the //SYSUT1 data. The EXECs that are processed are logged in the file OPCA LOG A.
    Note:
    • Each VM EXEC that is processed should set a return code to indicate whether it has run successfully.
    • The wait EXEC depends on the WAKEUP module, which resumes the EXEC when work arrives. The WAKEUP module is available in the VM/IPF distribution.
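
The following REXX sketch shows the general shape of such a wait EXEC. It is a minimal illustration, not the shipped OPCWATCH code: the WAKEUP option and return code that it tests, the reader-file handling, and the log-file handling are assumptions based on common CMS facilities. Check member EQQCVM and the WAKEUP documentation for the exact interfaces.

     /* Minimal wait-EXEC sketch (an illustration, not the EQQCVM code). */
     /* Assumed: the WAKEUP RDR option and return code 5 ("a file has    */
     /* arrived in the reader"); verify against your WAKEUP module.      */
     do forever
       'WAKEUP (RDR'                  /* suspend until a reader file arrives */
       if rc = 5 then do              /* assumed rc: reader file present     */
         'EXECIO * CARD (STEM REC.'   /* read the spooled file               */
         parse upper var rec.1 execname parms
         'EXEC' execname parms        /* run the EXEC named in //SYSUT1      */
         msg = date('S') time() execname 'RC='rc
         'EXECIO 1 DISKW OPCA LOG A 0 (FINIS STRING' msg  /* append to log   */
       end
     end

Because the return code set by each processed EXEC is written to OPCA LOG A together with a timestamp, the log provides the restart information described at the end of this section.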

Figure 2 shows an example of controlling VM operations from Tivoli Workload Scheduler for z/OS. In this example:

  1. Tivoli Workload Scheduler for z/OS, running under z/OS, sends the EXEC name JJJJ to a VM user, which is running the OPCWATCH EXEC.
  2. OPCWATCH invokes two other VM EXECs, JJJJ and OPCSTAT.
  3. Before and after JJJJ is processed, OPCSTAT reports the status to VM, the Tivoli Workload Scheduler for z/OS general automatic-reporting workstation.
  4. On MVS™, the jobs sent from VM execute a program that performs automatic event reporting for the particular combination of application ID VMJJJJ, job name JJJJ, and status. (A sketch of an OPCSTAT-style EXEC follows Figure 2.)
Figure 2. Using automatic-event reporting to control VM operations
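
An OPCSTAT-style EXEC can report status back to the controller by punching a small event-reporting job to the z/OS system. The following REXX sketch suggests one way to do this. It is an illustration only: the node name MVSNODE and the data set names are placeholders, and the EQQEVPGM step with an OPSTAT statement is an assumption about how the event is reported (the shipped OPCSTAT sample might use a different mechanism).

     /* OPCSTAT-style sketch (an illustration, not the EQQCVM code).   */
     /* MVSNODE and the data set names are placeholders; EQQEVPGM with */
     /* an OPSTAT statement is assumed as the event-reporting step.    */
     parse upper arg adid jobname status .
     queue "//EVREPORT JOB 'OPC EVENT',CLASS=A,MSGCLASS=A"
     queue "//REPORT   EXEC PGM=EQQEVPGM"
     queue "//STEPLIB  DD DISP=SHR,DSN=OPC.LOAD.MODULES"
     queue "//EQQMLIB  DD DISP=SHR,DSN=OPC.MESSAGE.LIBRARY"
     queue "//EQQMLOG  DD SYSOUT=A"
     queue "//SYSIN    DD *"
     queue "OPSTAT JOBNAME("jobname") WSNAME(VM) ADID("adid") STATUS("status")"
     queue "/*"
     'CP SPOOL PUNCH TO RSCS'          /* route punch output to the NJE link */
     'CP TAG DEV PUNCH MVSNODE JOB'    /* destination node (placeholder)     */
     'EXECIO' queued() 'PUNCH'         /* punch the queued JCL               */
     'CP CLOSE PUNCH'                  /* release the file for transmission  */
     'CP SPOOL PUNCH OFF'

OPCWATCH might invoke this EXEC, for example, as OPCSTAT VMJJJJ JJJJ S before it runs JJJJ and as OPCSTAT VMJJJJ JJJJ C after JJJJ completes, producing the two status reports shown in the figure.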

If Tivoli Workload Scheduler for z/OS running under z/OS fails, or if the communication link fails, a printed workstation plan and a Tivoli Workload Scheduler for z/OS ready list from daily planning still exist. This information tells you which jobs must run and in what order. The Tivoli Workload Scheduler for z/OS VM-user log lists the EXECs that have been started and those that have completed. With this information, processing can continue, either automatically when the link is reestablished or manually.

The jobs transmitted to and from VM add little additional load to z/OS. To avoid delays, reserve a dedicated job class and initiator for VM communication.

Because Tivoli Workload Scheduler for z/OS does not require shared DASD for the preceding method, this method can be used to drive multiple VM users in different locations. You can also use this method to drive other z/OS systems. This method is particularly useful when a remote z/OS system runs a small number of backups and cleanups that are initiated and controlled from a central site.