Handling dumps written to the z/OS UNIX file system

When a z/OS UNIX C/C++ application program is running in an address space created as a result of a call to spawnp(), vfork(), or one of the exec family of functions, the SYSMDUMP DD allocation information is not inherited. Even so, a SYSMDUMP allocation must exist in the parent in order to obtain an HFS storage dump. If the program terminates abnormally while running in this new address space, the kernel writes an unformatted storage dump to an HFS file in the current working directory, or in /tmp if the current working directory is not defined. The file name has the following format:
/directory/coredump.pid

where directory is the current working directory or /tmp, and pid is the hexadecimal process ID (PID) of the process that terminated. For details about how to generate the system dump, see Steps for generating a system dump in a z/OS UNIX shell.
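
For example, if the process that terminated had the hexadecimal PID X'00060007' (decimal 393223) and was running with /u/urcomp as its working directory, the dump is written to /u/urcomp/coredump.00060007. The following z/OS UNIX shell commands, shown with sample values for the directory and PID, are one way to locate such a file:

cd /u/urcomp          # or cd /tmp if no working directory is defined
ls -l coredump.*      # for example, coredump.00060007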

To debug the dump, use the Interactive Problem Control System (IPCS). If the dump was written to an HFS file, you must allocate a data set with the correct attributes and enough space to receive a copy of the HFS file. For example, from the ISPF DATA SET UTILITY panel you can specify a volume serial and data set name to allocate. Doing so brings up the DATA SET INFORMATION panel, on which you specify the characteristics of the data set to be allocated.

Figure 1 is a sample filled-in panel that shows the characteristics defined for the URCOMP.JRUSL.COREDUMP dump data set. Fill in the information for your data set as shown, and estimate the number of cylinders required for the dump file you are going to copy.
Figure 1. ISPF panel for entering data set information (AMODE 64)
--------------------------  DATA SET INFORMATION  ----------------------
Command ===>

Data Set Name  . . . : URCOMP.JRUSL.COREDUMP

General Data                          Current Allocation
 Management class . . : STANDARD       Allocated cylinders : 30
 Storage class  . . . : OS390          Allocated extents . : 1
  Volume serial . . . : DPXDU1
  Device type . . . . : 3380
 Data class . . . . . :
  Organization  . . . : PS            Current Utilization
  Record format . . . : FB             Used cylinders. . . : 0
  Record length . . . : 4160           Used extents  . . . : 0
  Block size  . . . . : 4160
  1st extent cylinders: 30
  Secondary cylinders : 10
  Data set name type  :

  Creation date . . . : 2001/08/30
  Expiration date . . : ***None***

F1=Help     F2=Split     F3=End       F4=Return    F5=Rfind     F6=Rchange
F7=Up       F8=Down      F9=Swap     F10=Left     F11=Right    F12=Cancel
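
If you prefer to allocate the data set from the TSO/E READY prompt instead of through the ISPF panels, a command similar to the following sketch, which uses the sample data set name and the attributes shown in Figure 1, can be used (adjust the name and the cylinder values for your own dump):

ALLOCATE DATASET('URCOMP.JRUSL.COREDUMP') NEW CATALOG -
  SPACE(30,10) CYLINDERS DSORG(PS) RECFM(F B) LRECL(4160) BLKSIZE(4160)
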
Use the TSO/E OGET or OCOPY command with the BINARY keyword to copy the file into the data set. For example, to copy the HFS memory dump file coredump.00060007 into the data set URCOMP.JRUSL.COREDUMP that was just allocated, a user with the user ID URCOMP enters the following command:
OGET '/u/urcomp/coredump.00060007' 'urcomp.jrusl.coredump' BINARY
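
If you use OCOPY instead, both the HFS file and the data set must first be allocated to ddnames. The following sketch shows one possible sequence; the ddnames HFSIN and DUMPOUT are arbitrary sample names:

ALLOCATE FILE(HFSIN) PATH('/u/urcomp/coredump.00060007') PATHOPTS(ORDONLY)
ALLOCATE FILE(DUMPOUT) DATASET('URCOMP.JRUSL.COREDUMP') OLD
OCOPY INDD(HFSIN) OUTDD(DUMPOUT) BINARY
FREE FILE(HFSIN,DUMPOUT)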

For more information on using the copy commands, see z/OS UNIX System Services User's Guide.

After you have copied the memory dump file to the data set, you can use IPCS to analyze the dump. See Formatting and analyzing system dumps for information about formatting Language Environment control blocks.
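
For example, after you start an IPCS session, subcommands similar to the following sketch make the copied data set the default dump source and begin formatting it. The data set name is the sample used earlier, and VERBEXIT LEDATA 'CEEDUMP' is only one of the available LEDATA report options:

SETDEF DSNAME('URCOMP.JRUSL.COREDUMP') LIST NOCONFIRM
STATUS FAILDATA
VERBEXIT LEDATA 'CEEDUMP'

SETDEF establishes the dump to be analyzed, STATUS FAILDATA summarizes the failure if failure-related data is present, and the LEDATA verb exit formats the Language Environment control blocks described in the referenced topic.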