Interpreting new 12 byte format of USRCPUT in SMF110 records
You are trying to understand the new format of fields such as USRCPUT. The old PICTURE clause in DFHCOB was TMRCPUT-TIME PICTURE 9(8) COMP; the new PICTURE clause is S9(8). In the SMF 110 records generated by CICS Transaction Server for z/OS (CICS TS) V3.2, you find values like X'000000000025CFE0', while the values generated by CICS TS V2.3 and V3.1 are more in the range of X'00000071'. What does the USRCPUT field represent, and how do you convert it to seconds?
The length of USRCPUT changed starting in CICS TS V3.2.
USRCPUT is the total CPU time for the user task, including DB2 time. It is the processor time for which the user task was dispatched on each CICS TCB under which the task executed. This can include TCB modes QR, RO, CD, FO, SZ, RP, SL, SO, H8, J8, L8, S8 and D2.
To convert the OLD format (prior to CICS TS V3.2) of the USRCPUT CPU time, you would:
- Convert the 4-byte value from hexadecimal to decimal.
- Multiply the result by 0.016 to obtain the value in milliseconds, or by 0.000016 to obtain the value in seconds.
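As a sketch, the two steps above can be expressed in Python (the helper name is made up for illustration, not part of any CICS utility):

```python
TICK_SECONDS = 0.000016  # one clock unit is 16 microseconds

def old_clock_ticks_to_seconds(hex_time):
    """Convert the 4-byte hex CPU time (e.g. '00000071') to seconds."""
    ticks = int(hex_time, 16)    # hexadecimal to decimal
    return ticks * TICK_SECONDS  # 16 microseconds per tick

# X'00000071' = 113 ticks of 16 microseconds each
seconds = old_clock_ticks_to_seconds("00000071")
```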
Note that 16 microseconds is equivalent to 0.016 milliseconds, which is in turn equivalent to 0.000016 seconds. A clock is a 32-bit value, expressed in units of 16 microseconds, accumulated during one or more measurement periods. The 32-bit value is followed by 8 reserved bits, which are in turn followed by a 24-bit value indicating the number of such periods.
For a specific example, if the total USRCPUT field had a value of X'000001BB00000005', then the first 4 bytes would be the accumulated CPU time. So, for 5 dispatches of the task you had X'1BB' ticks. Converting 1BB to decimal gives you 443. You multiply that by 0.000016 to get 0.007088 total CPU seconds for the transaction.
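The breakdown above can be sketched in Python, assuming the 8-byte layout just described (the function name is hypothetical):

```python
def parse_old_clock(hex8):
    """Split a pre-V3.2 8-byte clock into (seconds, period count)."""
    assert len(hex8) == 16
    ticks = int(hex8[0:8], 16)    # bytes 1-4: time in 16-microsecond units
    # hex8[8:10] is the reserved byte (byte 5)
    count = int(hex8[10:16], 16)  # bytes 6-8: number of measurement periods
    return ticks * 0.000016, count

# X'1BB' = 443 ticks over 5 dispatches -> 0.007088 seconds
secs, dispatches = parse_old_clock("000001BB00000005")
```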
The USRCPUT field for CICS TS V3.2 is a 12-byte field. Basically, clock fields like USRCPUT are now 12 bytes instead of 8 bytes. So, in CICS TS V2.3 the USRCPUT part of the SMF 110 record might look like 00000BC300000007, which you would break down as follows:
|00000BC300000007|Example 8-byte USRCPUT in releases of CICS prior to CICS TS V3.2|
|00000BC3|CPU time in binary units where the smallest unit is 16 microseconds (bytes 1 to 4)|
|00|Reserved bits (byte 5)|
|000007|Count of the number of times contributed to the CPU time (bytes 6 to 8)|
Now with CICS TS V3.2 you will see something like 000000000BC3F29D00000007 and you would break it down as follows:
|000000000BC3F29D00000007|Example 12-byte USRCPUT in releases of CICS TS V3.2 and higher|
|000000000BC3F29D|CPU time in binary units where the smallest unit is 16 microseconds (bytes 1 to 8)|
|00|Reserved bits (byte 9)|
|000007|Count of the number of times contributed to the CPU time (bytes 10 to 12)|
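A minimal Python sketch of parsing this 12-byte layout (the function name is invented; the 12-bit shift corresponds to dropping the bottom three nibbles of the time component to obtain whole microseconds):

```python
def parse_new_clock(hex12):
    """Split a CICS TS V3.2+ 12-byte clock into (seconds, period count)."""
    assert len(hex12) == 24
    raw = int(hex12[0:16], 16)    # bytes 1-8: the time component
    micros = raw >> 12            # drop the bottom 3 nibbles -> microseconds
    # hex12[16:18] is the reserved byte (byte 9)
    count = int(hex12[18:24], 16) # bytes 10-12: number of measurement periods
    return micros / 1_000_000, count

secs, dispatches = parse_new_clock("000000000BC3F29D00000007")
```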
Previously, you could convert the CPU time to seconds by taking 00000BC3, converting it to decimal, and multiplying by 0.000016. Now you can take the middle 4 bytes of the 8-byte CPU time (which in this example again gives you 00000BC3) and convert it the way you used to. Or, for a little greater accuracy, you can do the following:
- Disregard the bottom 3 nibbles of the CPU time (leaving you 000000000BC3F) and convert the result to decimal.
- Multiply the result by 0.000001 to get the value in seconds.
For example, using a value of 0000000000E6B140 for the CPU time, you would drop the 140, leaving 0000000000E6B. Converting that to decimal gives 3691, and multiplying 3691 by 0.000001 gives 0.003691 seconds.
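The two conversion routes for this example can be compared in Python (illustrative only; variable names are made up):

```python
raw = int("0000000000E6B140", 16)

# Route 1: middle 4 bytes in 16-microsecond units (the pre-V3.2 style)
mid4_ticks = (raw >> 16) & 0xFFFFFFFF    # 0xE6 = 230 ticks
route1_seconds = mid4_ticks * 0.000016   # 0.003680 seconds

# Route 2: drop the bottom 3 nibbles (12 bits) to get whole microseconds
micros = raw >> 12                       # 0xE6B = 3691 microseconds
route2_seconds = micros * 0.000001       # 0.003691 seconds, slightly more accurate
```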
If you take a look at DFH$MOLS, which is found in SDFHSAMP, you can see how it does the conversion. In the source, do a find on the label FLDCLPRT. There you can see that, with CICS TS V3.2 data, it passes all 8 bytes of the time component to IPPARMS and then passes that to the STCKCONV macro. For data from releases of CICS before V3.2, the code clears the 8 bytes in IPPARMS, copies the 4-byte time component into the middle 4 bytes of that area, and then passes that to STCKCONV.
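A rough Python rendering of that widening step (a hypothetical sketch, not the actual assembler) shows why copying the old 4-byte time into the middle of a cleared 8-byte field makes both formats convert the same way:

```python
# Pre-V3.2 time component: 0xBC3 = 3011 ticks of 16 microseconds
old_time = bytes.fromhex("00000BC3")

# Clear 8 bytes, then place the 4-byte time in the middle (bytes 3-6),
# as the DFH$MOLS code does before calling STCKCONV
widened = b"\x00\x00" + old_time + b"\x00\x00"
raw = int.from_bytes(widened, "big")

# Both routes now agree:
# (raw >> 12) microseconds == old ticks * 16 microseconds
assert raw >> 12 == int.from_bytes(old_time, "big") * 16
```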