DataStage Job aborts with error: "The record is too big to fit in a block"

Technote (troubleshooting)


Problem (Abstract)

Jobs that process a large amount of data in a column can abort with this error:
the record is too big to fit in a block; the length requested is: xxxx, the max block length is: xxxx.

Resolving the problem

To fix this error, increase the transport block size so it can accommodate the record size:

  1. Log into Designer and open the job.

  2. Open Job Properties --> Parameters --> Add Environment Variable and select APT_DEFAULT_TRANSPORT_BLOCK_SIZE.

  3. You can set this as high as 256 MB, but you should rarely need to go over 1 MB.
    NOTE: the value is specified in bytes.

    For example, to set the value to 1 MB:
    APT_DEFAULT_TRANSPORT_BLOCK_SIZE=1048576

    The default value is 131072 (128 KB).
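Because the value is given in bytes, it is easy to set it an order of magnitude too large by mistake. The following is a minimal sketch of computing the value and exporting it into the environment before a job is launched from that environment; the variable name comes from this technote, but launching the job itself is outside the scope of the sketch.

```python
import os

# 1 MB expressed in bytes -- the unit APT_DEFAULT_TRANSPORT_BLOCK_SIZE expects.
block_bytes = 1 * 1024 * 1024

# Export the value so a job started from this process inherits it.
os.environ["APT_DEFAULT_TRANSPORT_BLOCK_SIZE"] = str(block_bytes)

print(os.environ["APT_DEFAULT_TRANSPORT_BLOCK_SIZE"])  # prints 1048576
```

Setting the variable at the job level (as in the steps above) scopes the change to one job; an environment-level export like this affects every job started from that session.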

When setting APT_DEFAULT_TRANSPORT_BLOCK_SIZE, use the smallest value that works, since this value is applied to every link in the job.

For example, if your job fails with APT_DEFAULT_TRANSPORT_BLOCK_SIZE set to 1 MB but succeeds at 4 MB, test further to find the smallest value between 1 MB and 4 MB that allows the job to run, and use that value. Using 4 MB could cause the job to use more memory than necessary, since every link would use a 4 MB transport block.
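The tuning loop described above can be sketched as a small search routine. This is a minimal illustration, not part of DataStage: `run_job` is a hypothetical callback that runs the job with a given block size and returns True on success, and the step size is an arbitrary choice.

```python
def smallest_working_block_size(run_job,
                                low=1 * 1024 * 1024,
                                high=4 * 1024 * 1024,
                                step=256 * 1024):
    """Try block sizes (in bytes) from low to high in `step` increments and
    return the first size at which run_job(size) succeeds, or None if none do.

    run_job is a hypothetical callback: it should launch the job with
    APT_DEFAULT_TRANSPORT_BLOCK_SIZE set to `size` and report success.
    """
    size = low
    while size <= high:
        if run_job(size):
            return size
        size += step
    return None
```

Walking upward from the smallest candidate guarantees the first success found is also the smallest, which matches the goal of minimizing memory use across all links.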

NOTE: If this error appears for a dataset, use APT_PHYSICAL_DATASET_BLOCK_SIZE instead.


Document information


More support for:

InfoSphere Information Server

Software version:

7.5, 8.0, 8.1

Operating system(s):

AIX, HP-UX, Linux, Solaris, Windows

Reference #:

1416107

Modified date:

2010-06-22
