Troubleshooting
Problem
Resolving The Problem
Step 1: Self-guided troubleshooting
There are two types of memory used by IBM Integration Bus: native heap memory and JVM heap memory. The process for resolving growth in each type of memory is different, so the first step is to determine which type of memory is growing.
To determine the amount of JVM heap used, follow these steps:
- In the web user interface, right-click the problematic execution group and select Statistics -> Start Resource Statistics
- Right-click the problematic execution group and select Statistics -> View Resource Statistics
- In the Resource Statistics Table View select "JVM"
- You might need to wait up to 20 seconds for the first Resource Statistics Publication to arrive
- The value under the column CommittedMemoryInMB shows the size of the JVM Heap
- Using the steps above, determine the size of the JVM heap and note down this number. Next, determine the total size of the DataFlowEngine process:
- On Windows
- Open Task Manager
- Locate the task for the problematic "DataFlowEngine" process
- Right click this task and select "Go To Process"
- The Memory-Commit size column (you might have to add this column; use View > Select columns) shows the total virtual process size. Note down this number
- On Unix
- Determine the process ID (PID) of the problematic DataFlowEngine process; the PID is reported in the BIP2208I message that is written when the execution group starts
- Execute the command:
ps aux | head -1; ps aux | grep <PID of DataFlowEngine>
- Check the SZ column (AIX) or the VSZ column (Linux) to get the total size of the process. Note down this number
- Subtract the size of the JVM Heap Size from the total process size to get an approximate value for the native memory used by the process
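The subtraction in the last step can be sketched as a small shell calculation. The figures below are illustrative stand-ins for the numbers you noted down, not values from a real system:

```shell
#!/bin/sh
# Illustrative figures only: substitute the numbers you noted down.
TOTAL_VSZ_KB=3145728   # total process size from the VSZ column of ps, in KB (3 GB here)
JVM_HEAP_MB=512        # CommittedMemoryInMB from the JVM resource statistics

# Convert the total process size to MB and subtract the JVM heap.
TOTAL_MB=$((TOTAL_VSZ_KB / 1024))
NATIVE_MB=$((TOTAL_MB - JVM_HEAP_MB))
echo "Approximate native memory in use: ${NATIVE_MB} MB"
```

Note that this is only an approximation: the JVM also uses some native memory outside its heap, so the result is an upper bound on what the non-Java parts of the process are consuming.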
If the total native memory in use is increasing but the JVM heap size is not, check whether excessive parsers are being created by using this command:
mqsireportproperties <broker> -e <execGroup> -o ComIbmParserManager -r
Look for the "totalParsers" parameter as shown in the following output snippet:
Thread
threadId='4'
threadName='myMessageFlowName'
totalParsers='7'
Creation of excessive parsers is often the result of improperly written ESQL code in message flows. If the parser statistics indicate an excessive number of parsers, examine the ESQL code associated with the flows that are using the highest number of parsers.
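To find which flows are using the most parsers, the saved report output can be filtered with standard tools. This is a minimal sketch; the sample file below is fabricated to match the report format shown above, and in practice you would redirect your real mqsireportproperties output to the file instead:

```shell
#!/bin/sh
# Fabricated sample of the ComIbmParserManager report, for illustration.
cat > /tmp/parsers.txt <<'EOF'
Thread
  threadId='4'
  threadName='myMessageFlowName'
  totalParsers='7'
Thread
  threadId='5'
  threadName='anotherFlowName'
  totalParsers='42'
EOF

# Pair each threadName with its totalParsers count, highest first.
awk -F"'" '/threadName/ {name=$2} /totalParsers/ {print $2, name}' /tmp/parsers.txt | sort -rn
```

The flow at the top of the resulting list is the first candidate for an ESQL review.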
If JVM Heap Size increases but the native memory being used does not:
- You can increase the jvmMaxHeapSize for the DataFlowEngine process that is encountering the problem.
- You can use IBM Java Health Center to troubleshoot your Java™ code in the message flow to determine the cause of the problem.
- You can generate heap dumps from the Execution Group to analyze in a heap analyzer such as Memory Analyzer Tool. If the objects consuming the heap are objects associated with user code in Java Compute or User-Defined Nodes, examine that code for the source of the leak.
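For the first option, the JVM heap ceiling is changed with mqsichangeproperties against the ComIbmJVMManager object of the execution group. The sketch below only prints the invocation rather than running it, since it requires a live broker; the broker name, execution group name, and new size are assumed example values:

```shell
#!/bin/sh
# Example values only: substitute your own broker and execution group names.
BROKER=MYBROKER
EGROUP=default
NEW_MAX_BYTES=536870912   # 512 MB expressed in bytes

# Print the invocation rather than executing it, since it needs a running broker.
echo mqsichangeproperties "$BROKER" -e "$EGROUP" \
  -o ComIbmJVMManager -n jvmMaxHeapSize -v "$NEW_MAX_BYTES"
```

The execution group must be restarted for the new heap size to take effect.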
For all memory growth problems, it may be easier to isolate the problem if the message flows deployed to this Execution Group are distributed across other Execution Groups and the problem is re-created. If the faulty message flow is known, we recommend deploying it to its own Execution Group. Then, run the same test message through it repeatedly to see whether the DataFlowEngine process size reaches a plateau or continues to grow indefinitely. This helps to eliminate message flows which are not exhibiting the problem from consideration.
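While repeatedly driving the test message through the isolated flow, the process size can be sampled at intervals to see whether it plateaus. A minimal sketch, assuming a Linux-style ps; the shell's own PID is used here as a stand-in so the sketch runs anywhere, and you would substitute the DataFlowEngine PID:

```shell
#!/bin/sh
# Stand-in PID: replace $$ with the DataFlowEngine PID from BIP2208I.
PID=$$

# Take a few samples of the virtual size (VSZ, in KB) at fixed intervals.
for i in 1 2 3; do
  SZ=$(ps -o vsz= -p "$PID" | tr -d ' ')
  echo "sample $i: VSZ=${SZ} KB"
  sleep 1
done
```

A size that keeps rising across many samples, rather than levelling off after the JVM and caches warm up, indicates genuine growth worth investigating.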
Resource statistics can be started and viewed in IBM Integration Explorer to monitor the different resources used by a message flow that is processing messages at the time of the issue. See the different measurements of the data returned by Resource statistics.
Check for known issues:
IIB diagnostic tools guide
Techniques to minimize memory usage with IIB
IIB and WMB FAQ for memory
Dynamic guide to help troubleshooting Message Broker
Message splitter pattern
IBM Integration Bus support site
Step 2: Collect Data
- mqsidc
See: mqsidc usage instructions
- This command-line tool is only available for WMB 7.0.0.5+, WMB 8.0.0.1+, and IIB 9.0.
- When running the tool, select the option for 'Broker Collector' when prompted.
- mqsimemcheck
(Unix/Linux only)
This "mqsimemcheck" script was developed by the WMB/IIB Support teams to ensure a consistent capture of memory usage information for WMB/IIB runtime processes.
Run the script for the entire duration of the test. This script records simple command-line command outputs to a plain text file, which you can provide to IBM Support.
The zip file contains a readme with additional information regarding usage.
- Resource Statistics
See: Procedure for collecting resource statistics data as XML messages
- Core dump (for native memory issues only)
Collect two core dumps of the DataFlowEngine process that is increasing in memory. Collect the first dump after processing some messages, to ensure that all initialization has taken place. Take the second core dump at an interval that shows significant growth in the process size.
Use these commands to generate the dump, where <PID> is the process ID of the DataFlowEngine:
- On AIX:
gencore <PID> <output file name>
- On Linux and Solaris:
gcore -o <output file name> <PID>
Note that all message flow processing in the DataFlowEngine stops while the dump is being collected.
- Project Interchange files for the problematic flow/set/ESQL/model
- You can export your message flow and message set project(s) into an archive file for easy transmission to IBM Support.
See: Exporting files from the workbench
- Traces
If the problem occurs during a runtime test or with the Toolkit test facility, WMB execution group traces can be used to gain a better understanding of the problem.
- A service level trace is intended to be provided to IBM Support to assist in the diagnosis of your issue. Run the service level trace independently and not while other items are being collected.
- General Broker information
If the mqsidc tool was not run, then capture this information manually:
- Record the IBM Integration Bus or WebSphere Message Broker version, release, and maintenance level.
This can be captured using the command 'mqsiservice -v'.
- Record the operating system version, release, and maintenance level.
- Record the version, release, and maintenance level of any related products and components for the problematic application.
- Collect the local error log. On UNIX and Linux systems, the local error log is the syslog. The location of your syslog is configured in the syslog daemon.
See: Configuring the syslog daemon
- Collect the Standard Output/Error logs.
WMB writes information to both STDOUT and STDERR. These files are located under the Message Broker workpath.
See: Standard System Logs
- Additional information
- Output of command: mqsilist -r -d2
- Output of command: mqsireportproperties <broker> -e <execGroup> -o ComIbmParserManager -r
ATTENTION: A good problem description is one of the most important tools IBM needs to analyze your data!
When sending data to IBM, be sure to update your PMR or send a note with the following information:
- Tell us what errors you saw, where you saw them, and what time they happened
- Let us know if you made any changes to WebSphere Message Broker or the system before the problem
- Share any other observations which you think will help us to better understand the problem
Step 3: Submit Data to IBM
- Use IBM Service Request to open or view a problem record with IBM.
- Send your data to IBM for further analysis.
See the IBM Software Support Handbook for more information on working with IBM support.
Document Information
Modified date:
05 August 2022
UID
swg21299301