A fix is available
Closed as program error.
An Oracle Connector job with a large schema and array size (the specific case involved 200+ NVarChar columns with no length set, so each defaulted to 4000 bytes) fails with APT_BadAlloc when the Big Buffer API (new in 9.1) is enabled. The job completes with a smaller schema and/or array size, or with the Big Buffer API disabled.
Workaround: disable the Big Buffer API in V9.1.
The Big Buffer API uses a single buffer for an entire array of records, so if the array size and the record size are large enough, the connector will attempt to allocate a buffer larger than 2 GB. A size that large overflows the data type used to specify the buffer length, so the allocation must be prevented from happening.
If the Big Buffer initialization code determines that a buffer larger than 2 GB is about to be allocated, the connector reverts to the pre-9.1 method of transferring data through the connector framework.
Disable the Big Buffer API, reduce the schema size (for example, by specifying explicit lengths for string columns), or reduce the array size.
Reported component name
Reported component ID
Last modified date
APAR is sysrouted FROM one or more of the following:
APAR is sysrouted TO one or more of the following:
Fixed component name
Fixed component ID
Applicable component levels