Allocating non-VSAM data sets

This section describes the physical sequential (PS) and partitioned (PDS) data sets. Table 26 shows the non-VSAM data sets and their characteristics. Before you allocate the non-VSAM data sets, review the following sections, which contain important information about each of these data sets.

Table 26. Tivoli Workload Scheduler for z/OS non-VSAM data sets
Sample              DD name                     RECFM  LRECL  BLKSIZE  DSORG   Data set
EQQPCS02            AUDITPRT                    FBA    133    13300    PS      Input to EQQAUDIT
EQQPCS01            -                           U      -      6300     PS      CLIST library (optional)
EQQPCS01            EQQCKPT                     U      -      8200     PS      Checkpoint
-                   EQQDLnn                     U      -      6300     PS      Dual job-tracking log
EQQPCS01            EQQDMSG                     VBA    84     3120     PS      Tivoli Workload Scheduler for z/OS diagnostic message and trace
EQQPCS02            EQQDUMP                     FB     80     3120     PS      Tivoli Workload Scheduler for z/OS diagnostic
EQQPCS02            EQQEVDS/EQQEVDnn/EQQHTTP0   F      100    100      PSU     Event
EQQPCS01            EQQEVLIB                    FB     80     3120     PDS     Event-driven workload automation (EDWA) configuration file repository
EQQPCS02            EQQINCWK                    FB     80     3120     PS      JCC incident work
EQQPCS01            EQQJBLIB                    FB     80     3120     PDS     Job library
EQQPCS01            EQQJCLIB                    FB     80     3120     PDS     JCC message table
EQQPCS01            EQQJTABL                    F      240    240      PS      Critical job table log
EQQPCS01            EQQJTARC                    U      -      6300     PS      Job-tracking archive
EQQPCS01            EQQJTnn                     U      -      6300     PS      Job-tracking log
EQQPCS01            EQQLOGRC                    F      128    128      PS      Joblog and Restart Information pending requests log
EQQPCS02            EQQLOOP                     VBA    125    1632     PS      Loop analysis message log
EQQPCS02            EQQMLOG                     VBA    125    1632     PS      Message log
EQQPCS01            EQQMONDS                    F      160    160      PSU     Monitoring task data set used to store events for IBM Tivoli Monitoring
EQQPCS09            EQQOCPBK                    -      -      -        -       Data set used to allocate the GDG root; the GDG entry is allocated during the DP batch run and contains a backup of the old current plan
EQQPCS01            EQQPARM                     FB     80     3120     PDS     Initialization-statement library
EQQPCS01            EQQPRLIB                    FB     80     3120     PDS     Automatic-recovery-procedure library
EQQPCS06            EQQSCLIB                    FB     80     3120     PDS     Script library for end-to-end scheduling with fault tolerance capabilities
EQQPCS01            EQQSTC                      FB     80     3120     PDS     Started-task submit
EQQPCS01            EQQSUDS/user-defined        F      820    820      PSU     Submit/release
EQQPCS02            EQQTROUT                    VB     32756  32760    PS      Input to EQQAUDIT
EQQPCS06            EQQTWSCS                    FB     80     3120     PDSE    Centralized script support in end-to-end with fault tolerance capabilities
EQQPCS06            EQQTWSIN, EQQTWSOU          F      160    160      PSU     Event data sets for end-to-end with fault tolerance capabilities
-                   EQQYPARM                    -      -      -        PDS/PS  PIF parameters
EQQPCS01, EQQPCS02  SYSMDUMP                    F      4160   4160     PS      System dump
-                   -                           FB     80     3120     PS      Job-completion-checker incident log

You can allocate these non-VSAM data sets using the samples listed in Table 26, which are generated by the EQQJOBS installation aid.

Note:
The data sets cannot be defined as compressed SMS data sets. If you have not tailored the members as described on page ***, you can allocate a partitioned data set by running a job like this:
Allocating a Tivoli Workload Scheduler for z/OS partitioned data set
//ALLOCPDS JOB STATEMENT PARAMETERS
//*-----------------------------------------*
//* ALLOCATE A PARTITIONED DATA SET *
//*-----------------------------------------*
//ALLOC   EXEC PGM=IEFBR14
//SYSUT1   DD  DSN=OPCESA.INST.EQQSTC,
//             DISP=(,CATLG),
//             VOL=SER=volser,
//             SPACE=(TRK,(5,0,1)),
//             UNIT=3390,
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

This example allocates a started-task-submit data set (EQQSTC).

To allocate a Tivoli Workload Scheduler for z/OS sequential data set, you can run a job like the following one:

Allocating a Tivoli Workload Scheduler for z/OS sequential data set
//ALLOCPS  JOB  STATEMENT PARAMETERS
//*----------------------------------------*
//* ALLOCATE A SEQUENTIAL DATA SET *
//*----------------------------------------*
//ALLOC   EXEC PGM=IEBGENER
//SYSPRINT DD  DUMMY
//SYSUT1   DD  DUMMY,DCB=(RECFM=F,BLKSIZE=100,LRECL=100)
//SYSUT2   DD  DSN=OPCESA.INST.EVENTS,
//             DISP=(NEW,CATLG),
//             UNIT=3390,
//             VOL=SER=volser,
//             SPACE=(CYL,3,,CONTIG),
//             DCB=(RECFM=F,BLKSIZE=100,LRECL=100,DSORG=PS)
//SYSIN    DD  DUMMY

This example allocates an event data set (EQQEVDS). The IEBGENER utility ensures that the allocated data set has an end-of-file marker in it.

Note:
If you allocate Tivoli Workload Scheduler for z/OS data sets using your own jobs, ensure that they have an end-of-file marker in them.

To allocate a Tivoli Workload Scheduler for z/OS partitioned extended data set, you can run a job such as the following one:

Allocating an extended partitioned data set 
//ALLOPDSE JOB STATEMENT PARAMETERS
//*----------------------------------------*
//* ALLOCATE A PDSE DATA SET *
//*----------------------------------------*
//ALLOC   EXEC PGM=IEFBR14
//SYSUT1   DD  DSN=OPCESA.INST.CS,
//             DSNTYPE=LIBRARY,
//             DISP=(NEW,CATLG),
//             UNIT=3390,
//             VOL=SER=volser,
//             SPACE=(CYL,(1,1,10)),
//             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

This example allocates a data set for centralized script support (EQQTWSCS) in an end-to-end with fault tolerance capabilities environment.

The following sections describe the Tivoli Workload Scheduler for z/OS non-VSAM data sets. They contain important information to consider when allocating your data sets.

Internal reader data set (EQQBRDS)

When a Tivoli Workload Scheduler for z/OS subsystem is used to submit work, specify the internal reader data set, EQQBRDS, in your started-task procedures. The DD statement must contain the external-writer data set name, INTRDR, and the class of the internal reader. The class you specify is used as the default message class for jobs that do not have a MSGCLASS parameter specified on their job cards.

Example internal reader DD statement
//EQQBRDS  DD  SYSOUT=(A,INTRDR)

Checkpoint data set (EQQCKPT)

Tivoli Workload Scheduler for z/OS uses the checkpoint data set to save the current status of the Tivoli Workload Scheduler for z/OS system. If the controller is stopped and then restarted, Tivoli Workload Scheduler for z/OS uses the checkpoint data set to return the system to the same state as when it was stopped, ready to continue processing.

Tivoli Workload Scheduler for z/OS automatically formats the checkpoint data set the first time it is used. In its initial state, the checkpoint data set specifies that a new current plan exists. The new current plan is defined by DD name EQQNCPDS. Tivoli Workload Scheduler for z/OS attempts to copy the new plan and make it the current plan. If the copy is successful, Tivoli Workload Scheduler for z/OS is fully operational. If the copy is not successful, Tivoli Workload Scheduler for z/OS becomes active without a current plan.

Notes:
  1. A strong relationship exists between the Tivoli Workload Scheduler for z/OS checkpoint data set and the current plan data set. There is also a strong relationship between the event positioning record (EPR) in the checkpoint data set, EQQCKPT, and the tracker event data set, EQQEVDnn, referenced in the controller started-task procedure when DASD connectivity is used. The EPR is associated with a specific destination and, therefore, with a specific event data set. If this relationship is broken, the results of the synchronization processing at controller restart can be unpredictable, because events could be lost or reprocessed. Ensure that you do not accidentally delete or overwrite the checkpoint data set.
  2. To initialize the checkpoint data set, the OPCHOST keyword of the OPCOPTS initialization statement must be set to its default value, that is, OPCHOST(YES), the first time the scheduler is started. With OPCHOST(YES), the NMM initializes the checkpoint data set with FMID and LEVEL corresponding to SSX.

    The OPCHOST value can then be changed. For example, you can change the value to OPCHOST(PLEX) when the subsystem is used as the controlling system in XCF.
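For example, a minimal OPCOPTS specification for the first start might look like this (any other keywords that your installation needs are omitted):

OPCOPTS OPCHOST(YES)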

The space allocation for the data set must be at least 15 tracks. This allocation can accommodate 1000 workstation destinations.
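For illustration, you could allocate the checkpoint data set with a job modeled on the sequential-data-set example shown earlier. This is a sketch only: the data set name and volume are placeholders, and the DCB attributes are taken from Table 26.

//ALLOCCKP JOB STATEMENT PARAMETERS
//ALLOC    EXEC PGM=IEBGENER
//SYSPRINT DD  DUMMY
//SYSUT1   DD  DUMMY,DCB=(RECFM=U,BLKSIZE=8200)
//SYSUT2   DD  DSN=OPCESA.INST.EQQCKPT,
//             DISP=(NEW,CATLG),
//             UNIT=3390,
//             VOL=SER=volser,
//             SPACE=(TRK,15),
//             DCB=(RECFM=U,BLKSIZE=8200,DSORG=PS)
//SYSIN    DD  DUMMY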

Diagnostic data sets (EQQDMSG, EQQDUMP, and SYSMDUMP)

Allocate diagnostic data sets for Tivoli Workload Scheduler for z/OS address spaces, dialog users, batch jobs, and servers.

Diagnostic message and trace data set (EQQDMSG)

You should allocate EQQDMSG for each dialog user. You can allocate EQQDMSG either as a SYSOUT data set or as a DASD data set. Usually only a small volume of diagnostic information exists, so an initial allocation of two tracks of DASD should be enough. If EQQDMSG is not defined, output is written to EQQDUMP.

Diagnostic data set (EQQDUMP)

The tracker, controller, and server write debugging information to diagnostic data sets when validity checking discovers internal error conditions. When diagnostic information is logged, a 3999 user abend normally accompanies it. For service purposes, always include an EQQDUMP DD statement for every Tivoli Workload Scheduler for z/OS address space, dialog user, batch job, and server.

Diagnostic data sets are usually allocated as DASD data sets, but they can also be allocated to SYSOUT. Usually only a small volume of diagnostic information exists, so an initial allocation of two tracks on DASD should be enough.

Dump data set (SYSMDUMP)

EQQPCS02 contains two allocations for the SYSMDUMP data set. For a Tivoli Workload Scheduler for z/OS address space, the data set is allocated with the low-level qualifier SYSDUMP. Allocate a unique SYSMDUMP data set for every Tivoli Workload Scheduler for z/OS address space. For the scheduler server jobs, SYSMDUMP is allocated with the low-level qualifier SYSDUMPS. EQQPCS01 contains the allocation for the SYSMDUMP data set for Tivoli Workload Scheduler for z/OS batch jobs; this data set is allocated with the low-level qualifier SYSDUMPB. The Tivoli Workload Scheduler for z/OS batch jobs can use the same data set. It is allocated with a disposition of MOD in the JCL tailored by EQQJOBS.

Furthermore, SYSMDUMP data sets should be defined with a UACC of UPDATE, that is, write-enabled to all user IDs under which a job scheduled by Tivoli Workload Scheduler for z/OS might be submitted. This is because the submit subtask of the controller or tracker that is submitting a given job might abend while running under the user ID supplied by the EQQUX001 user exit (the RUSER user ID) rather than under the user ID associated with the started task. If this occurs, DUMPTASK fails with an ABEND 913 if the user ID in control does not have write access to the SYSMDUMP data set.

UPDATE access should also be granted to all PIF, dialog, and Dynamic Workload Console server user IDs. If a user is not authorized to update the SYSMDUMP data set and a server failure occurs while running a request for that user, DUMPTASK fails with an ABEND 912 and no diagnostic data is captured.

Event data sets (EQQEVDS, EQQEVDnn, and EQQHTTP0)

Every Tivoli Workload Scheduler for z/OS address space requires a unique event data set. The data set is device-dependent and must have only a primary space allocation. Do NOT allocate any secondary space. The data set is formatted the first time it is used. Each time you use the data set, Tivoli Workload Scheduler for z/OS keeps a record of where to start. When the last track of the data set is written, Tivoli Workload Scheduler for z/OS starts writing on the first track again.

Note:
The first time Tivoli Workload Scheduler for z/OS is started with a newly allocated event data set, an SD37 error occurs when Tivoli Workload Scheduler for z/OS formats the event data set. Do not treat this as an error.

The data set contains records that describe events created by Tivoli Workload Scheduler for z/OS job-tracking functions. An event-writer task writes to this data set; an event-reader task reads from it. The job-submit task also uses the event data set to checkpoint its activities, using the first record in the data set (the header record). The submit task in a controller address space takes these checkpoints when the computer workstation is the same system (the workstation destination is blank), so the address space needs the EQQEVDS event data set allocated even if there is no event writer task. When an event writer task is started in the controller address space, it shares the data set with the submit task.

The header record contains checkpoint information for up to 13 workstations per destination. If you plan to have more than 13 workstations defined to use a single destination, you can allocate the event data set with a large logical record length to accommodate the required number. To calculate the record length required, use this formula:

  LRECL = (No-of-WS-with-this-destination * 6) + 22
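For example, if 30 workstations share one destination, LRECL = (30 * 6) + 22 = 202.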

Because the event data set provides a record of each event, events will not be lost if an event-processing component of Tivoli Workload Scheduler for z/OS must be restarted. The submit checkpointing process ensures that submit requests are synchronized with the controller, thereby preventing lost requests caused by communication failures.

Define enough space for a single extent data set so that it does not wrap around and write over itself before an event is processed. Two cylinders are enough at most installations. The space allocation must be at least 2 tracks when the record length is 100, and there must be sufficient space in the event data set to accommodate 100 records. Consider this requirement if you define the event data set with a record length greater than 100. For example, if you define an LRECL of 15 000, the minimum space allocation is 34 tracks, which equates to 102 records and an event data set that would wrap around very quickly at most installations.

To aid performance, place the event data set on a device that has low activity. If you run programs that use the RESERVE macro, try to allocate the event data set on a device that is not reserved or where only short reserves are taken. The reserve period must be less than 5 minutes.

If you use the job log retrieval function, consider allocating the event data set with a greater LRECL value than the one in Table 26. This improves performance by reducing input/output (I/O) operations, because fewer continuation (type NN) events are created. You can specify 0, or a value from 100 to 32 000 bytes, for LRECL. Any other value causes the event writer to end, and message EQQW053E is written to the message log. If you do not specify a value for LRECL, or specify 0, the data set is forced to an LRECL of 100 when it is opened by Tivoli Workload Scheduler for z/OS. However, the data set must be unblocked: the block size must be equal to the logical record length. If you intend to activate the job log retrieval function, use one of these formulas to estimate the LRECL that you should specify:

Calculating the optimum LRECL
LRECL=((NN/EV) * 20) + 100   OR   LRECL=(4 * N) + 100

In the first formula, NN is the number of continuation events, and EV is the number of all other events. Event types are found in position 21 of the event records. In the second formula, N is the average number of NN events per job. If your calculation yields a value of less than 110, there will be little or no improvement in performance. In this case, you should specify an LRECL value of 100.
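For example, if your jobs create an average of 10 NN events each, the second formula gives LRECL = (4 * 10) + 100 = 140.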

You will probably need to test your system first to get an idea of the number and event types that are created. You can then reallocate the event data set when you have gathered information about the events created at your installation. But, before you reallocate an event data set, ensure that the current plan is completely up-to-date. You must also stop the event writer, and any event reader, that uses the data set.

Note:
Do not move Tivoli Workload Scheduler for z/OS event data sets once they are allocated. They contain device-dependent information and cannot be copied from one device type to another, or moved on the same volume. An event data set that is moved will be reinitialized. This causes all events in the data set to be lost. If you have DFHSM or a similar product installed, you should specify that Tivoli Workload Scheduler for z/OS event data sets are not migrated or moved.

Event-driven workload automation configuration file data set (EQQEVLIB)

This data set contains the configuration files required by the event-driven workload automation (EDWA) process. The configuration files, which are created by the EQQRXTRG program, are used by the trackers to monitor the event conditions. The event-driven workload automation configuration file data set is accessed by the controller, which, when configuration files are created or modified, deploys them to the trackers by storing the files in the data set identified by the EQQJCLIB DD card. This is the same data set to which the trackers’ JCL refers.

By using the event-driven workload automation configuration file data set, you can automate and centralize the deployment of configuration files to the trackers without having to use the EQQLSENT macro for each tracker.

Job library data set (EQQJBLIB)

The job library data set contains the JCL for the jobs and started tasks that Tivoli Workload Scheduler for z/OS will submit. It is required by a controller. If you already have a job library that you will use for Tivoli Workload Scheduler for z/OS purposes, specify this data set on the EQQJBLIB statement. If not, allocate one before you start the controller.

Note:
Allocate the job library data set with only a primary space allocation. If a secondary allocation is defined and the library goes into a new extent while Tivoli Workload Scheduler for z/OS is active, you must stop and restart the controller. Also, do not compress members in this PDS. For example, do not use the ISPF PACK ON command, because Tivoli Workload Scheduler for z/OS does not use ISPF services to read it.

The limitation of allocating the job library data set with only a primary space allocation does not apply to PDSE data sets.

Note:
Each member in the EQQJBLIB must contain one job stream (only one job card), and the job name on the job card must match the job name in the Tivoli Workload Scheduler for z/OS scheduled operation.

Job-completion-checker data sets

You can optionally use the job completion checker (JCC) to scan SYSOUT for jobs and started tasks. Depending on the JCC functions you want to use, allocate at least one of the three data sets associated with the JCC:

JCC-message-table library (EQQJCLIB)

If the success or failure of a job or started task cannot be determined by system completion codes, the JCC function can be used to scan the SYSOUT created and set an appropriate error code. You determine how the SYSOUT data is scanned by creating JCC message tables. A general message table (EQQGJCCT) must be defined. Job-specific message tables can be created to search for specific data strings in particular jobs. These tables are stored in the PDS with a member name that matches the job name.

Every Tivoli Workload Scheduler for z/OS subsystem where you start the JCC task must have access to a message table library. If you want, you can use the same message table library for all Tivoli Workload Scheduler for z/OS systems.

If you use the data-set-triggering function, the data-set-selection table (EQQEVLST or EQQDSLST) must be stored in EQQJCLIB.

Note:
Allocate the JCC message table data set with only a primary space allocation. This limitation does not apply to PDSE data sets.

JCC-incident-log data set

You can optionally use the JCC to write records to an incident log data set. This data set is defined by the INCDSN keyword of the JCCOPTS statement.
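For example, the incident log could be specified as follows; the data set name is a placeholder, and any other JCCOPTS keywords that your installation requires are omitted:

JCCOPTS INCDSN(OPCESA.INST.INCIDENT)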

When scanning SYSOUT data sets, the JCC recognizes events that you define as unusual. If the EQQUX006 exit is loaded by Tivoli Workload Scheduler for z/OS, the JCC records these events in the incident log data set. The incident log data set can be shared by several JCC tasks running on the same system or on different systems. The data set can also be updated manually or even reallocated while the JCC is active. If the JCC is unable to write to the incident log, the incident work data set is used instead.

JCC-incident work data set (EQQINCWK)

Occasionally, the JCC cannot allocate the incident log data set. This can happen if another subsystem or a Tivoli Workload Scheduler for z/OS user has already accessed the data set. In this case, the JCC writes to the incident work file, EQQINCWK, instead. If it is not empty, the work file is copied and emptied each time the incident log data set is allocated.

Job-tracking data sets (EQQJTARC, EQQJTnn, EQQDLnn)

Job-tracking data sets are a log of updates to the current plan. They optionally contain audit trail records. The job-tracking data sets comprise the job-tracking logs (EQQJTnn), the job-tracking archive (EQQJTARC), and, optionally, the dual job-tracking logs (EQQDLnn).

You must allocate EQQJTARC and at least two job-tracking logs (EQQJT01 and EQQJT02) for a controller. The actual number of JT logs that you should allocate is determined by the value that you specify on the JTLOGS keyword of the JTOPTS initialization statement. If you decide to allocate three job-tracking logs, specify the DD names EQQJT01, EQQJT02, and EQQJT03. If you specify EQQJT01, EQQJT02, and EQQJT04, an error occurs and Tivoli Workload Scheduler for z/OS terminates. Tivoli Workload Scheduler for z/OS uses the job-tracking logs in turn. When a current plan backup is performed, the active log is appended to the EQQJTARC data set.

The size of the CP files, JT and JTARC, can become large, but with appropriate tuning of their size and of the DP frequency, they will not allocate additional extents. If necessary, allow for additional extents (not additional volumes, because only extent allocation is supported in the shipped JT allocation samples). The JTLOGS keyword default defines five job-tracking logs. It is recommended that you specify at least three job-tracking logs. Job-tracking logs are switched at every current plan backup. If the interval between backups is very short and JTLOGS(2) is specified, the previously used job-tracking log might not have been archived before Tivoli Workload Scheduler for z/OS must switch again. If it cannot switch successfully, the normal-mode-manager (NMM) subtask is automatically shut down, preventing further updates to the current plan.

You can optionally allocate dual JT logs. These logs are identified by the EQQDLnn DD names in the controller started-task JCL. Allocate the same number of dual JT logs as JT logs. The numeric suffixes, nn, must be the same as for the JT logs, because Tivoli Workload Scheduler for z/OS uses the logs with the same number: EQQJT01 and EQQDL01, EQQJT02 and EQQDL02, and so on. Tivoli Workload Scheduler for z/OS writes job-tracking information to both logs, so that if the active JT log is lost, it can be restored from the dual log and Tivoli Workload Scheduler for z/OS can be restarted without losing any events. To achieve the maximum benefit from dual JT logs, allocate them on different devices from the corresponding JT logs, so that a single device failure cannot destroy both copies of the log.

Tivoli Workload Scheduler for z/OS tries to use dual JT logs if you specify DUAL(YES) on the JTOPTS initialization statement of a controller.
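For example, to use three job-tracking logs with dual logging active, the JTOPTS statement could include the following keywords (other keywords omitted):

JTOPTS JTLOGS(3) DUAL(YES)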

The job-tracking-archive data set accumulates all job-tracking data between successive creations of a new current plan (NCP). Allocate EQQJTARC with enough space for all job-tracking records that are created between daily planning jobs, that is, between extends or replans of the current plan. In other words, allocate for EQQJTARC at least as much space as the total space that you allocate for the JT files; otherwise, you will get a system error. When the daily planning batch job is run, the active job-tracking log is appended to EQQJTARC, and the JT log is switched. The archive log, EQQJTARC, is then copied to the track log data set referenced by the EQQTROUT DD name during the daily planning process. When Tivoli Workload Scheduler for z/OS takes over the NCP, the archive data set is emptied.

Tivoli Workload Scheduler for z/OS recovery procedures that use the job-tracking data sets are described in Tivoli Workload Scheduler for z/OS: Customization and Tuning.

Message log data set (EQQMLOG)

The message log data set can be written to SYSOUT or a data set. The data control block (DCB) for this data set is defined by Tivoli Workload Scheduler for z/OS as follows:

EQQMLOG DCB attributes
DCB=(RECFM=VBA,LRECL=125,BLKSIZE=1632)

If the message log data set becomes full, the scheduler abends with error code SB37 or SD37 under any of the following circumstances:
  • During initialization
  • When a subtask is restarted
  • While processing a modify command that requires parsing of initialization parameters or that specifies the NEWNOERR, NOERRMEM(member), or LSTNOERR options (the abend also occurs if EQQMLOG is already full when such a command is issued)
In all these cases, you must stop Tivoli Workload Scheduler for z/OS and reallocate the message log data set with more space. In all other circumstances, if the data set fills up, the scheduler redirects messages to the system log instead.

EQQPCS02 contains two allocations for the EQQMLOG data set. For a Tivoli Workload Scheduler for z/OS address space, the data set is allocated with the low-level qualifier MLOG. For the scheduler server jobs, the data set is allocated with the low-level qualifier MLOGS.

Note:
If you allocate the message log data set on DASD, define a different data set for each Tivoli Workload Scheduler for z/OS batch program. The data set must also be different from the one used by each Tivoli Workload Scheduler for z/OS address space (controller, standby controller, tracker, and server). The data set cannot be shared.

Loop analysis log data set (EQQLOOP)

The loop analysis log data set can be written to SYSOUT or a data set. The data control block (DCB) for this data set is defined by Tivoli Workload Scheduler for z/OS as follows:

EQQLOOP DCB attributes
DCB=(RECFM=VBA,LRECL=125,BLKSIZE=1632)

This data set is defined the same way as for EQQMLOG, but it is specific for loop analysis and is populated only if a loop condition occurs. It is required by daily planning batch programs (extend, replan, and trial).

Parameter library (EQQPARM)

Each Tivoli Workload Scheduler for z/OS subsystem reads members of a parameter library when it is started. Members that are created in a secondary library extent after the library has been opened cannot be accessed. To avoid this problem, allocate the data set that defines the EQQPARM library without any secondary extents. This limitation does not apply to PDSE data sets. The library contains initialization statements that define runtime options for the subsystem. Allocate at least one parameter library for your Tivoli Workload Scheduler for z/OS systems. You can keep the parameters for all your subsystems in one library, as long as it resides on a DASD volume that is accessible by all systems.
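As a sketch (the data set name is a placeholder), the parameter library is referenced in the started-task JCL with a DD statement like this:

//EQQPARM  DD  DISP=SHR,DSN=OPCESA.INST.PARM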

PIF parameter data set (EQQYPARM)

Allocate the PIF parameter data set if you intend to use a programming interface to Tivoli Workload Scheduler for z/OS. The data set can be sequential or partitioned. In the PIF parameter file you specify how requests from the programming interface should be processed by Tivoli Workload Scheduler for z/OS. By defining an INIT initialization statement in the PIF parameter data set, you override the global settings of the INTFOPTS statement.
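For illustration, a PIF parameter member could contain an INIT statement such as the following. CWBASE and HIGHDATE are shown here only as examples of INTFOPTS settings that INIT can override, and the values are placeholders:

INIT CWBASE(00) HIGHDATE(711231)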

The initialization statements are described in Tivoli Workload Scheduler for z/OS: Customization and Tuning.

Automatic-recovery-procedure library (EQQPRLIB)

Allocate a data set for the automatic-recovery-procedure library if you intend to use the Tivoli Workload Scheduler for z/OS automatic-recovery function. The library is used by the ADDPROC JCL rebuild parameter of the JCL recovery statement. This parameter lets you include JCL procedures in a failing job or started task before it is restarted.

Script library for end-to-end scheduling with fault tolerance capabilities (EQQSCLIB)

This script library data set includes members containing the commands or the job definitions for fault-tolerant workstations. It is required in the controller if you want to use end-to-end scheduling with fault tolerance capabilities. See Customization and Tuning for details about the JOBREC, RECOVERY, and VARSUB statements.

Note:
Do not compress members in this PDS. For example, do not use the ISPF PACK ON command, because Tivoli Workload Scheduler for z/OS does not use ISPF services to read it.

Started-task-submit data set (EQQSTC)

The started-task-submit data set is used by Tivoli Workload Scheduler for z/OS to temporarily store JCL when a started task is to be started. Use these attributes for this data set:

EQQSTC attributes
SPACE=(TRK,(5,0,1)),
DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

Include an EQQSTC in the JES PROCLIB concatenation on each system where Tivoli Workload Scheduler for z/OS schedules started-task operations. The data set is used as a temporary staging area for the started-task JCL procedure. When the start command has been issued for the task and control for the task has passed to JES, Tivoli Workload Scheduler for z/OS deletes the JCL by resetting the PDS. This means that you never need to compress the data set. For more information, see Implementing support for started-task operations.
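For illustration only, adding EQQSTC to a JES2 PROCLIB concatenation might look like this (the library names are placeholders for your installation's definitions):

//PROC00   DD  DISP=SHR,DSN=SYS1.PROCLIB
//         DD  DISP=SHR,DSN=OPCESA.INST.EQQSTC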

Note:
Tivoli Workload Scheduler for z/OS does not support partitioned data set extended (PDSE) libraries for a started-task-submit data set.

Submit/release data set (EQQSUDS)

The submit/release data set is device dependent and must have only a primary space allocation. Do not allocate any secondary space. The data set is formatted the first time it is used. Each time you use the data set, Tivoli Workload Scheduler for z/OS keeps a record of where to start. When the last track of the data set is written, Tivoli Workload Scheduler for z/OS starts writing on the first track again.

Two cylinders are enough at most installations.

Notes:
  1. The first time Tivoli Workload Scheduler for z/OS is started with a newly allocated submit/release data set, an SD37 error occurs when it formats the data set. Expect this; do not treat it as an error.
  2. Do not move Tivoli Workload Scheduler for z/OS submit/release data sets once they are allocated. They contain device-dependent information and cannot be copied from one device type to another, or moved on the same volume. A submit/release data set that is moved will be re-initialized. This causes all information in the data set to be lost. If you have DFHSM or a similar product installed, define Tivoli Workload Scheduler for z/OS submit/release data sets so that they are not migrated or moved.

Centralized script data set for end-to-end scheduling with fault tolerance capabilities (EQQTWSCS)

In an end-to-end with fault tolerance capabilities environment, Tivoli Workload Scheduler for z/OS uses the centralized script data set to temporarily store a script when it is downloaded from the JOBLIB data set to the agent for its submission. Set the following attributes for EQQTWSCS:

EQQTWSCS attributes
DSNTYPE=LIBRARY,
SPACE=(CYL,(1,1,10)),
DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)

If you want to use centralized script support when scheduling end-to-end with fault tolerance capabilities, you need to use the EQQTWSCS DD statement in the controller and server started tasks. The data set must be a partitioned extended data set.
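For example, assuming the data set name used in the PDSE allocation example earlier in this section, both started tasks could include a DD statement like this:

//EQQTWSCS DD  DISP=SHR,DSN=OPCESA.INST.CS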

Input and output events data sets for end-to-end scheduling with fault tolerance capabilities (EQQTWSIN and EQQTWSOU)

These data sets are required by every Tivoli Workload Scheduler for z/OS address space that uses end-to-end scheduling with fault tolerance capabilities. They record the descriptions of events related to operations running on fault-tolerant workstations and are used by both the end-to-end enabler task and the translator process in the scheduler’s server.

The data sets are device-dependent and must have only a primary space allocation. Do not allocate any secondary space. They are automatically formatted by Tivoli Workload Scheduler for z/OS the first time they are used.

Note:
An SD37 abend code is produced when Tivoli Workload Scheduler for z/OS formats a newly allocated data set. Ignore this error.

EQQTWSIN and EQQTWSOU are wrap-around data sets. In each data set, the header record is used to track the number of records read and written. To avoid losing event records, a writer task does not write any new records until more space is available, that is, until the existing records have been read.

The quantity of space that you need to define for each data set requires some attention. Because the two data sets are also used for joblog retrieval, the limit for the joblog length is half the maximum number of records that can be stored in the input events data set. Two cylinders are sufficient for most installations.

The maximum length of the events logged in these two data sets, including the joblogs, is 160 bytes. However, you can allocate the data sets with a longer logical record length; using record lengths greater than 160 bytes produces neither advantages nor problems. The maximum allowed value is 32 000 bytes; greater values cause the E2E task to terminate. In both data sets there must be enough space for at least 1000 events (the maximum number of joblog events is 500). Use this as a reference if you plan to define a record length greater than 160 bytes. When the record length of 160 bytes is used, the space allocation must be at least 1 cylinder. The data sets must be unblocked, and the block size must be the same as the logical record length. A minimum record length of 160 bytes is necessary for the EQQTWSOU data set in order to decide how to build the job name in the symphony file (for details about the TWSJOBNAME parameter of the JTOPTS statement, see Customization and Tuning).
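As an illustration, an EQQTWSIN allocation modeled on the event data set example earlier in this section might look like this (a sketch; the data set name and volume are placeholders):

//ALLOCTWI JOB STATEMENT PARAMETERS
//ALLOC    EXEC PGM=IEBGENER
//SYSPRINT DD  DUMMY
//SYSUT1   DD  DUMMY,DCB=(RECFM=F,BLKSIZE=160,LRECL=160)
//SYSUT2   DD  DSN=OPCESA.INST.TWSIN,
//             DISP=(NEW,CATLG),
//             UNIT=3390,
//             VOL=SER=volser,
//             SPACE=(CYL,1,,CONTIG),
//             DCB=(RECFM=F,BLKSIZE=160,LRECL=160,DSORG=PS)
//SYSIN    DD  DUMMY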

For good performance, define the data sets on a device that has low activity. If you run programs that use the RESERVE macro, try to allocate the data sets on a device that is not reserved, or where only short reserves are taken.

Initially, you might need to test your system to estimate the number and type of events that are created at your installation. When you have gathered enough information, you can then reallocate the data sets. Before you reallocate a data set, ensure that the current plan is entirely up-to-date. You must also stop the end-to-end sender and receiver task on the controller and the translator thread on the server that use this data set. EQQTWSIN and EQQTWSOU must not be allocated multivolume.

Note:
Do not move these data sets once they have been allocated. They contain device-dependent information and cannot be copied from one type of device to another, or moved around on the same volume. An end-to-end event data set that is moved will be re-initialized. This causes all events in the data set to be lost. If you have DFHSM or a similar product installed, you should specify that E2E event data sets are not migrated or moved.