Troubleshooting
Problem
The DataStage job aborts with the following error: LkUpXXXXX,0: Could not map table file "xxxx" (size 2593146648 bytes)": Not enough space Error finalizing / saving table "yyyy"
Resolving The Problem
The job is trying to load all the lookup data into a single 2 GB memory segment. This is not a limitation of the file space available on disk but a per-segment memory limitation. To resolve the issue, add nodes to the parallel configuration file and hash-partition the data on both input links to the Lookup stage so the lookup table is split across multiple memory segments. For example, 2 GB of lookup data hash-partitioned across 4 nodes yields four segments of roughly 500 MB each (assuming the hash key distributes the rows evenly), which fits comfortably within the limit. No sorting is required. This resolves the error and also improves scalability.
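As an illustration, a 4-node parallel configuration file might look like the sketch below. The hostname `server1` and the `/ds/...` resource paths are placeholders; substitute the values appropriate for your environment, and point `APT_CONFIG_FILE` at the new file.

```
{
  node "node1"
  {
    fastname "server1"
    pools ""
    resource disk "/ds/disk1" {pools ""}
    resource scratchdisk "/ds/scratch1" {pools ""}
  }
  node "node2"
  {
    fastname "server1"
    pools ""
    resource disk "/ds/disk2" {pools ""}
    resource scratchdisk "/ds/scratch2" {pools ""}
  }
  node "node3"
  {
    fastname "server1"
    pools ""
    resource disk "/ds/disk3" {pools ""}
    resource scratchdisk "/ds/scratch3" {pools ""}
  }
  node "node4"
  {
    fastname "server1"
    pools ""
    resource disk "/ds/disk4" {pools ""}
    resource scratchdisk "/ds/scratch4" {pools ""}
  }
}
```

With this configuration in place, set the partitioning on both input links of the Lookup stage to Hash on the lookup key columns so that matching rows land on the same node.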
Document Information
Modified date:
23 June 2018
UID
swg21410469