Using -Xgc:preferredHeapBase with -Xcompressedrefs
"Why does the JVM report a native out-of-memory (NOOM) error when using compressed references? I am using a 64-bit JVM and I clearly have plenty of memory left. How can I resolve this problem?"
*In this note, Java versioning is given as Version.Release.ServiceRelease.FixPack.
For example: Java 7.1.4.6 is the same as Java 7.1 SR4 FP6, which is the same as Java 7 R1 SR4 FP6.
The IBM JVM will automatically use compressed references when the maximum heap size is less than 25GB. This automated behavior was introduced in Java 184.108.40.206 and Java 220.127.116.11*. Compressed references (CR) decrease the size of Java objects, making better use of the available memory space. This better use of space results in improved JVM performance. *(Java 18.104.22.168 and later uses compressed references by default on z/OS)
See Introducing WebSphere Compressed Reference Technology for detailed information on how Compressed References work.
"When using compressed references, the size of the field used in the Java object for the Class Pointer and the Monitor/Lock is 32 bits instead of the 64 bits that would be available in non-compressed mode. Because we are using 32 bits to store the location of these, and they are located in native (non-Java heap) memory, they must be allocated in the first 4GB of the address space - the maximum range we can address with the 32 bits." ~IBM Java Development Team
If the Java heap itself is small (-Xmx), the JVM may allocate it in the lower 4GB of address space along with the Class Pointers and Monitors/Locks. If these Class Pointers, Monitors/Locks and Java heap (if included) cannot fit in the lower 4GB, a native out of memory (NOOM) will be thrown.
Why Use Compressed References?
Below the 4GB mark, the JVM does not have to perform any compression/decompression of the address pointer at runtime. Therefore, the best performance will be attained if the Class Pointers, Monitors/Locks and Java heap can all be contained comfortably within the lowest 4GB of the address space.
Determining Address Location of Java Heap Memory
To verify if the Java heap has memory regions below the 4GB mark, check the "Object Memory" section in the javacore:
Convert the "start" address from its hex value to a GB value. For example, 0x000000000F010000 = 0.23GB, which is below the 4GB (0x0000000100000000) mark.
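This conversion can be sketched in a few lines (the start address below is the one from the example):

```python
# Convert a javacore "Object Memory" start address from hex to GB
# and check whether it lies below the 4GB boundary.
start = 0x000000000F010000      # "start" address from the javacore example
four_gb = 0x0000000100000000    # 4GB boundary

print(f"{start / 2**30:.2f}GB")          # 0.23GB
print("below 4GB:", start < four_gb)     # below 4GB: True
```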
Setting the Preferred Heap Base with -Xgc:preferredHeapBase
Starting with Java 22.214.171.124 and Java 126.96.36.199, and later, the JVM will determine if the Java heap will fit comfortably in the lower 4GB. If it is too large, the JVM will automatically allocate the Java heap above the 4GB mark (APAR IV37797).
NOTE: On IBM System z platforms (i.e. z/OS and z/Linux), the automatic shift of the heap above the 4GB address space does NOT occur, because on these platforms there is an additional performance penalty associated with higher shift values. To resolve native OOM issues due to a shortage of heap memory in the lower region on z platforms, use -Xnocompressedrefs (see below).
See related: IBM Knowledge Center - JVMJ9GC089W
However, in earlier Java 6.1 and Java 7.0 versions (earlier than Java 188.8.131.52 and Java 184.108.40.206), if the Java heap cannot fit in the lower 4GB, a NOOM will occur. To avoid this problem, the generic JVM argument -Xgc:preferredHeapBase=<address> can be used to ensure the Java heap is allocated above the 4GB address space, leaving more room for the Class Pointer and Monitor/Lock memory.
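For example, to request that the heap be placed at the 4GB mark (0x100000000 is 4GB expressed in hex):

```
-Xgc:preferredHeapBase=0x100000000
```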
This will locate the Java heap starting at the 4GB mark, leaving the lower 4GB free for the Class Pointers and Monitors/Locks.
Increase Maximum Heap Size to Force Heap Allocation Above the 4GB mark
Another way to ensure that the heap is allocated above the 4GB mark (Java 220.127.116.11 and Java 18.104.22.168 and later) is to set a maximum heap size equal to or greater than 4GB. For example, -Xmx4G ensures that the heap must be allocated above the 4GB mark. This will not work in earlier versions of the JVM, since those versions allowed the heap to straddle the 4GB mark, placing some of the memory above it and some below (fixed as part of APAR IV37797).
If after setting -Xgc:preferredHeapBase=<address> or -Xmx4G a NOOM is still encountered (Java 22.214.171.124 and Java 126.96.36.199 and later), then further investigation is required at the application level. Look to decrease the size and usage of the application's Class Pointers and Monitors/Locks. Additionally, there are some WebSphere Application Server troubleshooting methods that may help reduce the native memory footprint. See: IBM Troubleshooting native memory issues.
Reserving Low-Memory Space with -Xmcrs
If there is still free memory in the system when a Native OutOfMemory (NOOM) occurs, then the problem may be a shortage of memory in the low-memory region (under 4GB). Even if the Java heap is located above this boundary, other data associated with Java objects can be located in the low-memory region.
The OS memory allocator deals out low memory freely, so memory resources in the lower region may run out. Later, when the JVM tries to allocate memory for an artifact that must reside in low memory (because the JVM has reserved only a 32-bit pointer for it), the allocation fails and an OutOfMemoryError is thrown.
Starting in Java 188.8.131.52, Java 184.108.40.206, Java 220.127.116.11, and Java 18.104.22.168, the parameter -Xmcrs allows the JVM to increase the amount of low memory it reserves at startup. With this setting, as long as the JVM's low-memory usage does not exceed the -Xmcrs value, a NOOM in the lower region will be avoided.
To set this parameter, first decide on a reasonable value for your lower-memory requirements. A reasonable value is unique to each environment, so there is no general recommendation.
To determine <reasonable_value_for_lower_memory>, check the javacore taken at the time of the NOOM for low-memory usage. A quick formula: in the "NATIVEMEMINFO subcomponent dump routine" section of that javacore, subtract the "Memory Manager (GC)" value from the "VM" value and multiply the result by 1.5. In this case: (9689267552 - 8771635584) * 1.5 = 1376447952 bytes = 1312.68MB = <reasonable_value_for_lower_memory>. Since memory is generally reserved in 256MB increments, round up to 1536M: -Xmcrs1536M
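The arithmetic above can be sketched as follows (the two values are the "VM" and "Memory Manager (GC)" figures from the example javacore):

```python
# Estimate an -Xmcrs value: (VM - GC) * 1.5, rounded up to a 256MB multiple.
vm_bytes = 9689267552    # "VM" total from the NATIVEMEMINFO section
gc_bytes = 8771635584    # "Memory Manager (GC)" from the NATIVEMEMINFO section

estimate = (vm_bytes - gc_bytes) * 1.5   # 1376447952 bytes
mb = estimate / 2**20                    # ~1312.68MB
rounded = -(-int(mb) // 256) * 256       # round up to a 256MB multiple
print(f"-Xmcrs{rounded}M")               # -Xmcrs1536M
```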
Disabling Compressed References with -Xnocompressedrefs
As a last resort, if the native memory still cannot be contained under the 4GB mark, you can set -Xnocompressedrefs as a generic JVM argument. Using -Xnocompressedrefs removes the use of compressed references and therefore removes the lower-4GB restriction on the Class Pointers and Monitors/Locks. It will, however, result in a significant increase in Java heap memory requirements; it is not uncommon for 70% more heap space to be required. Due to the increased memory requirements, it is strongly advised that the Java heap size be adjusted to a larger value and that garbage collection be monitored and retuned as required.
Additionally, some benchmarks show a 10-20% relative throughput decrease when disabling compressed references: "Analysis shows that a 64-bit application without CR yields only 80-85% of 32-bit throughput but with CR yields 90-95%. Depending on application requirements, CR can improve performance up to 20% over standard 64-bit." See: ftp://public.dhe.ibm.com/software/webserver/appserv/was/WAS_V7_64-bit_performance.pdf
Before using -Xnocompressedrefs as a solution, first rule out the possibility of a native memory leak. Since -Xnocompressedrefs allows the native memory to grow unbounded, a native memory leak will lead to process size growth, eventually producing a process that needs to be paged out. The paging incurs performance overhead that will eventually lead to an unstable environment. Therefore, careful consideration must be used when selecting -Xnocompressedrefs as a solution.
Memory Map Considerations
The table below is a generalization of how the JVM handles addresses in each section of the memory map, based on heap size and compressed references (CR). Please note that at each stage beyond having all of the Java memory contained below the 4GB mark, there will be performance consequences:

|Memory map configuration||Overhead|
|No compressed references (-Xmx > 25GB)||increased memory footprint; fewer (larger) objects stored on the heap, leading to more frequent GC; lower cache and translation lookaside buffer (TLB) utilization|
|Maximum heap address used by the JVM process is below 4GB||none|
|Maximum heap address used by the JVM process is above 4GB but below 32GB||compression/decompression of address pointers|
Getting Assistance From IBM Support
If further assistance is required from IBM WebSphere Support, please set the following -Xdump parameters in the generic JVM arguments:
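A commonly used form requests a system dump when an OutOfMemoryError is thrown (shown here as a typical example; confirm the exact events and requests with IBM Support):

```
-Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk+compact
```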
Then restart the JVM and recreate the problem. Once the NOOM is encountered, process the resulting system core with jextract. Send the jextracted core file, heapdump, javacore, snap trace, SystemOut.log, native_stderr.log, native_stdout.log, and SystemErr.log to IBM Support for further analysis.
More support for:
WebSphere Application Server
Out of Memory
Software version: 7.0, 8.0, 8.5
Operating system(s): AIX, Linux, Windows
Reference #: 1660890
Modified date: 09 June 2014