Relationship between the Scheduler and z/OS

Tivoli Workload Scheduler for z/OS is a z/OS subsystem, initialized during IPL. Routines that run during subsystem initialization establish basic services, such as an event queue in the extended common service area (ECSA). Tivoli Workload Scheduler for z/OS uses standard interfaces to SMF and JES to gather relevant information about the workload on the z/OS system.
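For illustration only, the subsystem is typically defined in an IEFSSNxx member of SYS1.PARMLIB with an entry similar to the following. The subsystem name TWSC is an example, and the initialization routine name must be taken from the installation documentation for your product level:

   SUBSYS SUBNAME(TWSC) INITRTN(EQQINITJ)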

The functions of the controller are available when an address space has been created for it and the required subtasks have been successfully initialized. The controller can run either as a started task or as a batch address space. Normally, the address space is started during the IPL process, either by a z/OS START command in the COMMNDnn parmlib member or by console automation. Alternatively, a z/OS operator can issue a START command from the operator console. The z/OS operator can also stop or modify the address space, using the STOP and MODIFY commands.
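For example, assuming a controller started-task procedure named TWSC (the name is an installation choice), the operator commands could look like the following. The MODIFY operand shown is only illustrative; the supported operands are described in the operator command documentation:

   S TWSC            Start the controller address space
   P TWSC            Stop the controller
   F TWSC,S=JCC      Modify the controller, here starting a subtask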

A TSO user accesses Tivoli Workload Scheduler for z/OS services using the dialogs. A dialog is a sequence of ISPF panels. Many of the functions supported by the dialogs pass service requests from the TSO user’s address space to the controller address space for processing.
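As an illustration, the dialogs can be started from an ISPF selection panel or directly with an ISPSTART command similar to the one below; the primary panel and application names shown here reflect a typical setup and can differ in your installation:

   ISPSTART PANEL(EQQOPCAP) NEWAPPL(EQQA)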

Before performing any function that you request, the dialog passes the request to the system authorization facility (SAF) router. If RACF®, or a functionally equivalent security product, is installed and active on the z/OS system, the SAF router passes the verification request to RACF to perform this authority check.
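As a sketch only, assuming that the scheduler resources are protected under a general resource class named IBMOPC and that AD is one of the protected fixed resources (the class name, resource name, and user ID shown here are examples that depend on the AUTHDEF statement and your security setup), the RACF definitions could look like this:

   SETROPTS CLASSACT(IBMOPC)
   RDEFINE  IBMOPC AD UACC(NONE)
   PERMIT   AD CLASS(IBMOPC) ID(TSOUSER) ACCESS(UPDATE)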

A typical dialog service request is to access one or more records in VSAM files that are maintained and controlled by Tivoli Workload Scheduler for z/OS. Such a request is passed to Tivoli Workload Scheduler for z/OS through the z/OS subsystem interface (SSI). This interface invokes a routine that resides in common storage. This routine must be invoked in APF-authorized mode.

Note that all long-term plan (LTP) and current plan (CP) batch planning jobs must be excluded from SMARTBATCH DA (Data Accelerator) processing. When the SMARTBATCH Data Accelerator is used with the scheduler LTP and CP batch planning jobs, the normal I/O to EQQCKPT is delayed until END OF JOB (or at least END OF JOBSTEP). This interferes with the normal exchange of data between the batch job and the controller started task: when the batch job signals the controller to check the EQQCKPT to determine whether a new current plan (NCP) has been created, the required updates to the checkpoint data set have not yet been made. The controller therefore concludes that no NCP has been created, and no turnover processing is done. As a result, even if the plan jobs run successfully, the NCP is not taken into production by the controller unless a CURRPLAN(NEW) restart is performed.
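If the controller has missed a turnover in this way, the recovery is the CURRPLAN(NEW) restart: stop the controller, set the CURRPLAN keyword of the JTOPTS initialization statement in the controller parameter member, restart the controller, and then set the keyword back. A minimal sketch, assuming the statements are kept in an EQQPARM member:

   JTOPTS CURRPLAN(NEW)      Take the NCP into production at controller start;
                             reset to CURRPLAN(CURRENT) after the restart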

The Data Store uses the MVS/JES SYSOUT application programming interface (SAPI) to access sysout data sets, allowing concurrent access to multiple records from a single address space.

Batch optimizer utilities, such as BMC Batch Optimizer Data Optimizer and MainView Batch Optimizer, can prevent correct communication between the scheduler's controller and the CP/LTP batch planning jobs. The scheduler's logic depends on an exchange of enqueues and real-time updates of several sequential data sets to pass information back and forth between the controller's started task and the CP/LTP batch planning jobs. These optimizers hold I/O from the batch jobs until END OF STEP or END OF JOB, thereby preventing the required communication from taking place. When such utilities are allowed to "manage" I/O for the scheduler's CP or LTP batch planning jobs, communication between the jobs and the controller is disrupted. This causes numerous problems that are hard to diagnose. Most commonly, the CURRENT PLAN EXTEND or REPLAN jobs run to normal completion and an NCP data set is successfully created, but the controller fails to take the new plan into production automatically until it is forced to do so by a CURRPLAN(NEW) restart of the controller. Using BATCHPIPES with these batch planning jobs results in the same kinds of problems.