Why does an OutOfMemoryError (OOM) create a system dump starting in WebSphere Application Server (WAS) 8.0?
Starting in WebSphere Application Server 8.0, the IBM Java SDK changed the default OutOfMemoryError dump agents to produce a system dump on the first OOM, in addition to the previous defaults. This change is by design.
For more information, see the WebSphere Application Server 8 Information Center topic on this change: http://publib.boulder.ibm.com/infocenter/wasinfo/v8r0/topic/com.ibm.websphere.base.doc/info/aes/ae/ctrb_java626.html
This TechNote highlights some points from the Information Center topic and adds a few additional tips:
- The purpose of this change is to reduce the time to resolve OutOfMemoryError problems, because system dumps contain information that IBM PHD heapdumps do not, including memory contents (e.g., Strings, integers, and other primitive fields), variable names, more accurate garbage collection root information, thread stacks, thread stack frame locals, native memory information, and more.
- Run the command <WAS>/java/bin/java -version -Xdump:what to see your JVM's default dump agents, and search for those with filter=java/lang/OutOfMemoryError. For IBM Java 6 R26, which ships with WAS 8.0, you will see an agent that produces a system dump on an OOM with a range of 1..1, meaning the first OOM only. You will also see that the previous defaults are unchanged: a heapdump, javacore, and snap dump for the first 4 OOMs.
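For illustration, the OOM system dump agent reported by -Xdump:what looks roughly like the following excerpt (field names and values such as the label path vary by platform and service release, so verify against your own output):

```
-Xdump:system:
    events=systhrow,
    filter=java/lang/OutOfMemoryError,
    range=1..1,
    priority=999,
    request=exclusive+compact+prepwalk
```

The range=1..1 setting is what restricts the system dump to the first OutOfMemoryError only.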
- This version of the IBM Java SDK no longer requires running jextract on the system dump before memory analysis. You can load the system dump (core dump, minidump, SYSTDUMP, etc.) directly into tools such as the IBM Memory Analyzer Tool.
- It is critical to ensure that your WAS processes are running with proper operating system limits (for example, ulimit settings on AIX and Linux, and the equivalent controls on z/OS) so that a system dump is not truncated. The tooling mentioned above that reads system dumps without jextract does have some tolerance for truncated system dumps, so you may still be able to analyze or upload a dump that was truncated.
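As a sketch, on Linux the relevant soft limits can be raised in the shell that starts the server (exact values and mechanisms, such as /etc/security/limits.conf entries, vary by platform and site policy):

```
ulimit -c unlimited   # core file size: allow a full-size system dump
ulimit -f unlimited   # file size: do not cap the size of the dump file
```

If either limit is smaller than the process's virtual address space, the resulting dump may be truncated.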
- You may find value in the IBM Extensions for Memory Analyzer which have been designed for system dump information and can display WAS-specific information such as HTTP session information (JSESSIONID, user name, attributes, etc.), thread pool information, Dynacache information, JDBC information, PMI information, retained heap by application, hung threads, and much more.
- Consider any potential security issues with transferring a system dump internally or to an IBM PMR as any Java objects' contents (such as customer names, etc.) will be available to anyone that can load the dump.
- In general, the time taken to write a system dump is of the same order of magnitude as writing an IBM PHD heapdump. The primary performance constraint is the available physical memory (and consequently the available file cache), because the fastest way to write a system dump is into the file cache, which is then written back to disk asynchronously while the process continues.
- System dumps are generally much larger than IBM PHD heapdump files, so it is important to provision enough disk space in your dump directory (see the file option in -Xdump to determine the path; by default, it is the WAS profile directory). In general, a system dump is about the size of the virtual address space at the time of the dump, which is approximately -Xmx + -Xscmx + (-Xss * Number_of_threads) + any other native memory, including some JIT and class data structures. System dumps generally compress very well.
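As a rough worked example of that sizing formula, with entirely hypothetical values for the JVM settings and thread count:

```python
# Rough system dump size estimate using the formula above:
#   -Xmx + -Xscmx + (-Xss * number of threads) + other native memory.
# All values below are hypothetical; substitute your own configuration.
GIB = 1024 ** 3
MIB = 1024 ** 2

xmx = 2 * GIB             # -Xmx: maximum Java heap
xscmx = 60 * MIB          # -Xscmx: shared class cache size
xss = 256 * 1024          # -Xss: stack size per thread
threads = 200             # number of live threads
other_native = 300 * MIB  # JIT code caches, class data, malloc'd memory, etc.

estimate_bytes = xmx + xscmx + (xss * threads) + other_native
print(f"Estimated dump size: {estimate_bytes / GIB:.2f} GiB")  # → 2.40 GiB
```

The estimate is a lower bound on the disk space to provision; compressing the dump before transfer typically reduces it substantially.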
- This behavior change (and any potential performance and disk space overhead) only affects IBM JVMs when they are already unhealthy in an OutOfMemoryError condition. However, to reduce some of the overhead of this change, you may consider removing PHD heapdumps on OutOfMemoryErrors and leaving only system dumps. This can be done using -Xdump generic JVM arguments and is described in the Information Center topic.
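For example, the heapdump agent triggered by OutOfMemoryError can be removed with a generic JVM argument along these lines (a sketch; confirm the exact event and filter settings of the agent you are removing with -Xdump:what first):

```
-Xdump:heap:none:events=systhrow,filter=java/lang/OutOfMemoryError
```

This leaves the system dump, javacore, and snap agents in place while suppressing only the PHD heapdump on OOM.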
- Whereas an IBM PHD heapdump is not very helpful for native OutOfMemoryErrors, a system dump can be very helpful for diagnosing native OOMs.