IBM Support

DataStage data set creation fails due to Block write failure

Troubleshooting


Problem

Parallel job with Data Set stage aborts as it cannot write to data file for the data set.

Symptom

Errors like the following appear in the job log.


Item #: 439
Event ID: 438
Timestamp: 2010-12-09 20:19:09
Type: Fatal
User Name: dsadm
Message: Transformer_3,1: Write to dataset on [fd 3] failed: Error 0
The error occurred on Orchestrate node node2 (hostname DSTAGE)

Item #: 440
Event ID: 439
Timestamp: 2010-12-09 20:19:09
Type: Fatal
User Name: dsadm
Message: Transformer_3,1: Orchestrate was unable to write to any of the following files:

Item #: 441
Event ID: 440
Timestamp: 2010-12-09 20:19:09
Type: Fatal
User Name: dsadm
Message: Transformer_3,1: /opt/Ascential/DataStage/Datasets/DS.AA.dsadm.aaa.0000.0001.0000.5ae068.ceca0606.0001.3747c727

Item #: 442
Event ID: 441
Timestamp: 2010-12-09 20:19:09
Type: Fatal
User Name: dsadm
Message: Transformer_3,1: Block write failure. Partition: 1

Item #: 443
Event ID: 442
Timestamp: 2010-12-09 20:19:09
Type: Fatal
User Name: dsadm
Message: Transformer_3,1: Failure during execution of operator logic.

Cause

A persistent data set is physically represented on disk by a single descriptor file and one or more data files. The above error occurs when the operating system's write() call to a data file fails.

Resolving The Problem

Find and resolve the cause of the write() failure.

Typical reasons for a write() failure are:

1. Disk space shortage

There is not enough free space in the partition where the data set is written.

You can check disk usage with the "df -k" command.

In this case, you need to add more disk or move your resource disk to a partition with enough free space.
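As a sketch, the disk-space check above can be scripted, for example as a Before-job shell command. The directory and the 1 GB threshold below are assumptions for illustration; point DATASET_DIR at the resource disk defined in your own configuration.

```shell
#!/bin/sh
# Hypothetical sketch: report free space on the data set partition.
# DATASET_DIR defaults to /tmp for demonstration; in real use, set it
# to the resource disk path from your parallel configuration file.
DATASET_DIR="${DATASET_DIR:-/tmp}"
MIN_FREE_KB=1048576   # assumed threshold: 1 GB, in 1 KB blocks

# df -k reports sizes in 1 KB blocks; on most systems the fourth
# field of the final output line is the available space.
free_kb=$(df -k "$DATASET_DIR" | awk 'END {print $4}')

if [ "$free_kb" -lt "$MIN_FREE_KB" ] 2>/dev/null; then
    echo "WARNING: only ${free_kb} KB free under ${DATASET_DIR}"
else
    echo "OK: ${free_kb} KB free under ${DATASET_DIR}"
fi
```

Exact `df` output columns can vary between platforms (AIX, HP-UX, Linux, Solaris), so verify the field position on your system before relying on the parsing.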

2. Small file size limit

The maximum file size a process can create is limited. You can check the current limit by running the "ulimit -f" command from the job's Before-job or After-job shell command. Don't forget to check on all servers if you run DataStage in an MPP configuration.

In this case, you need to increase the file size limit using the ulimit command. It is recommended to add "ulimit -f unlimited" to $DSHOME/dsenv. Ask your Unix administrator for help if you cannot increase it because of a small hard limit.
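The check-and-raise steps above can be sketched as follows. This is a minimal illustration, not IBM-supplied code; note that "ulimit -f" reports the limit in 512-byte blocks on most shells, or "unlimited".

```shell
#!/bin/sh
# Sketch: check the per-process file size limit and, where the hard
# limit allows, raise the soft limit.
echo "current soft limit: $(ulimit -f)"
echo "current hard limit: $(ulimit -H -f)"

# A process may raise its soft limit up to the hard limit; raising the
# hard limit itself requires the administrator. Errors are suppressed
# in case the hard limit forbids "unlimited".
ulimit -f unlimited 2>/dev/null || \
    echo "hard limit is below unlimited; ask your Unix administrator"

echo "limit after adjustment: $(ulimit -f)"
```

Adding the single line "ulimit -f unlimited" to $DSHOME/dsenv, as recommended above, applies the setting to DataStage processes that source that file; run the check on every node of an MPP configuration.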

Applies to: IBM InfoSphere DataStage, versions 9.1 and 11.3, on AIX, HP-UX, Linux, and Solaris.

Document Information

Modified date:
16 June 2018

UID

swg21457724