
QRadar: Troubleshooting Pipeline NATIVE_To_MPC messages on Console only

Troubleshooting


Problem

Events are being dropped on the Console with Pipeline NATIVE_To_MPC messages. These messages are easily confused with other incidents in which collected events are dropped from the QRadar pipeline.
The dropped events were never collected by QRadar from a log source, so no collected data is lost in this case. NATIVE_To_MPC events are generated internally by the other QRadar processors in the deployment and sent to the Console. Their sole purpose is to attach metadata about the real events, which are already stored on the processors, to the open GLOBAL offenses created on the Console.

Cause

The problem is caused by too many events associated with global offenses being sent from each Event Processor (EP) in the deployment to the Console, which congests the Console pipeline.

Diagnosing The Problem

Events are being dropped by the pipeline on the Console, with the following messages:
 
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] ---- PIPELINE STATUS -- Initiated From: NATIVE_To_MPC
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] MPC (Filters: 0.00 pc) (Queues: 1.83 pc) (Sources: 0.00 pc)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] 100.00 pc - Queue:Processor1 (250/250)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] 100.00 pc - Queue:from_EP_via_NATIVECOMMS (250/250)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] EP (Filters: 0.44 pc) (Queues: 23.10 pc) (Sources: 0.00 pc)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] 0.61 pc - Filter:CRE EP (609/100000)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] 72.80 pc - Queue:Processor3 (182/250)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] 100.00 pc - Queue:NATIVE_To_MPC (25000/25000)
[ecs-ep] [[type=com.eventgnosis.system.ThreadedEventProcessor][parent=example.ibm.com:ecs-ep/EP/Processor3/HostProfiler]] com.q1labs.sem.monitors.PipelineStatusMonitor: [INFO] [NOT:0000006000][X.X.X/- -] [-/- -] EC (Filters: 0.00 pc) (Queues: 0.00 pc) (Sources: 0.00 pc)


The qradar.log file shows messages such as:
 
[ecs-ep] [8c229b95-8625-4d34-bf76-fd8976cd98d7/SequentialEventDispatcher] com.q1labs.sem.monitors.ECSQueueMonitor: [WARN] [NOT:0060005100][X.X.X/- -] [-/- -]ECS Queue Monitor has detected a total of 18784693 dropped event(s). 61198 event(s) were dropped in the last 60 seconds. EP Queues: 61198 dropped event(s). MPC Queues: 0 dropped event(s).
[ecs-ep] [8c229b95-8625-4d34-bf76-fd8976cd98d7/SequentialEventDispatcher] com.q1labs.sem.monitors.ECSQueueMonitor: [WARN] [NOT:0000004000][X.X.X/- -] [-/- -]EP Queue [NATIVE_To_MPC] has detected 61198 dropped event(s) in the last 60 seconds and is at 89 percent capacity

Resolving The Problem

Issues that can cause NATIVE_To_MPC error messages
  • Verify whether any rules are configured as "Global" rules. Global rules can cause an excessive number of events to be processed by the Console and cause the MPC queue to max out. If possible, change the Global rules to Local rules.
  • Try to reduce the number of events associated with the particular Global rule: shorten the time frames in the rule, or enable coalescing for the relevant Log Source. Coalescing significantly reduces the total volume of event data sent through the pipeline while the event count is preserved.
  • NATIVE_To_MPC drops differ from events dropped due to Performance Degradation, where events are dropped before they are processed in the pipeline by either the correlation engine (ecs-ep) or the parsing engine (ecs-ec).
Whenever you notice events dropped during processing in the pipeline, check the relevant Performance Degradation technotes for more details and troubleshooting steps. The output of the top command shows which area of the pipeline is affected (see the sketch after this list), and you can then follow the steps described in the related articles.


[{"Type":"MASTER","Line of Business":{"code":"LOB24","label":"Security Software"},"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSBQAC","label":"IBM Security QRadar SIEM"},"ARM Category":[{"code":"a8m0z000000cwtiAAA","label":"Performance"}],"ARM Case Number":"","Platform":[{"code":"PF016","label":"Linux"}],"Version":"All Versions"}]

Document Information

Modified date:
27 March 2023

UID

swg21985252