
QRadar: Event Processor not sending logs due to disk space issues

Troubleshooting


Problem

In a distributed environment, an Event Processor (EP) cannot send logs to the Console if the ecs-ep process is down. If disk usage reaches an excessive level, QRadar can shut down the process on the EP.

Cause

When disk space utilization reaches 95%, QRadar automatically shuts down its processes to prevent data corruption, which stops the system from operating correctly.
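To quickly check whether any mounted partition is at or above that shutdown threshold, a minimal shell sketch can help (this assumes GNU df and awk on the appliance; the value 95 mirrors the threshold described above, not a tunable QRadar setting):

    # Print any file system at or above 95% utilization.
    # df -P gives stable POSIX columns; awk strips the '%' before comparing.
    df -P | awk 'NR > 1 { use = $5; sub(/%/, "", use); if (use + 0 >= 95) print $6 " is at " $5 }'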

Diagnosing The Problem

 Basic troubleshooting tips:
  1. Complete any search, add a filter by Event Processor, and then from the View list, select Real Time (streaming).
    If no events are visible, there might be a problem with the ecs-ep process, which manages the real-time flow of events from the Event Processor to the Console.
  2. To verify that the ecs-ep process is running from the command-line interface of the QRadar appliance, enter:
    systemctl status ecs-ep
    If the service is reported as stopped, the administrator can attempt to restart the process by using the following command (see the combined sketch after this list):
    systemctl start ecs-ep
  3. The most frequent cause of processes not running on systems in a deployment is a disk space issue. When a disk space issue occurs, a system notification is generated to alert the administrator. You can run the following command to search the logs for related errors:
    grep -i "disk usage" /var/log/qradar.error
    Example error:
    [hostcontext.hostcontext] [c0ac7072-70e9-40ea-9d87-62ac50d090c3/SequentialEventDispatcher] com.q1labs.hostcontext.ds.DiskSpaceSentinel: [ERROR] [NOT:0150064100][IP Address/- -] [-/- -]Disk usage on at least one disk has exceeded the maximum threshold level of 0.95. The following disks have exceeded the maximum threshold level: /store, Processes are being shut down to prevent data corruption. To minimize the disruption in service, reduce disk usage on this system.
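The following sketch combines steps 2 and 3 into a single check (assumptions: a root shell on the affected appliance; the 10-second wait is an arbitrary settle time, not a QRadar requirement):

    # Try to start ecs-ep, give it a moment, then confirm that it is active.
    systemctl start ecs-ep
    sleep 10
    if systemctl is-active --quiet ecs-ep; then
        echo "ecs-ep is running"
    else
        # If the service did not stay up, look for disk usage errors in the QRadar logs.
        echo "ecs-ep is not running; recent disk usage errors:"
        grep -i "disk usage" /var/log/qradar.error | tail -n 5
    fi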
Other ways to verify disk space on your appliances:
  1. To view disk usage, run df -h
    Example:
    [root@QRadar750]# df -h
    Filesystem                        Size  Used Avail Use% Mounted on
    /dev/mapper/rootrhel-root          13G  5.3G  7.3G  43% /
    devtmpfs                           16G     0   16G   0% /dev
    tmpfs                              16G   20K   16G   1% /dev/shm
    tmpfs                              16G   34M   16G   1% /run
    tmpfs                              16G     0   16G   0% /sys/fs/cgroup
    /dev/sda3                          32G  4.1G   28G  13% /recovery
    /dev/sda2                        1014M  163M  852M  17% /boot
    /dev/mapper/rootrhel-opt           13G  2.7G  9.9G  22% /opt
    /dev/mapper/rootrhel-tmp          3.0G   41M  3.0G   2% /tmp
    /dev/mapper/rootrhel-var          5.0G  175M  4.9G   4% /var
    /dev/mapper/rootrhel-home        1014M   33M  982M   4% /home
    /dev/mapper/storerhel-store       142G   33G  109G  24% /store
    /dev/mapper/rootrhel-varlog        15G  387M   15G   3% /var/log
    /dev/mapper/rootrhel-storetmp      15G   43M   15G   1% /storetmp
    /dev/mapper/rootrhel-varlogaudit  3.0G  131M  2.9G   5% /var/log/audit
    /dev/mapper/storerhel-transient    36G   36M   36G   1% /transient
    tmpfs                             3.1G     0  3.1G   0% /run/user/0
  2. To get the disk usage on all your appliances, use the following command; a filtering sketch follows this list. (The -T option is optional and prints the file system type.)
    /opt/qradar/support/all_servers.sh -C -k "df -Th"
    Example output:
    x.x.x.x -> qradar.example.com
    Appliance Type: 3199    Product Version: 2021.6.4.20221129155237
     09:07:29 up 13 days, 29 min,  2 users,  load average: 1.77, 1.80, 1.70
    ------------------------------------------------------------------------
    Filesystem                       Type      Size  Used Avail Use% Mounted on
    devtmpfs                         devtmpfs   16G  4.0K   16G   1% /dev
    tmpfs                            tmpfs      16G   14M   16G   1% /dev/shm
    tmpfs                            tmpfs      16G  1.6G   15G  10% /run
    tmpfs                            tmpfs      16G     0   16G   0% /sys/fs/cgroup
    /dev/mapper/rootrhel-root        xfs        13G  9.9G  2.6G  80% /
    /dev/mapper/rootrhel-home        xfs      1014M   33M  982M   4% /home
    /dev/mapper/rootrhel-tmp         xfs       3.0G   47M  3.0G   2% /tmp
    /dev/mapper/rootrhel-var         xfs       5.0G  270M  4.8G   6% /var
    /dev/mapper/rootrhel-opt         xfs        13G  4.3G  8.3G  34% /opt
    /dev/mapper/storerhel-transient  xfs        36G   39M   36G   1% /transient
    /dev/mapper/storerhel-store      xfs       142G   50G   92G  36% /store
    /dev/mapper/rootrhel-varlog      xfs        15G  1.7G   14G  12% /var/log
    /dev/mapper/rootrhel-varlogaudit xfs       3.0G  162M  2.9G   6% /var/log/audit
    /dev/mapper/rootrhel-storetmp    xfs        15G  498M   15G   4% /storetmp
    /dev/sda3                        xfs        32G  5.5G   27G  18% /recovery
    /dev/sda2                        xfs      1014M  311M  704M  31% /boot
    overlay                          overlay   142G   50G   92G  36% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/overlay2/557558039da21774fcf53cfb05eb25e8b7e6274dcbac85eb9da35eba0640f5b8/merged
    shm                              tmpfs      64M     0   64M   0% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/containers/b8c44336bb795cd0d401c9ac41020c1119cc38bb14392a2ee7ab2e0084df6ee9/mounts/shm
    overlay                          overlay   142G   50G   92G  36% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/overlay2/8c3613becc1c4984adbb9d72ef3b3c7bbc6c024602e6b2260f4c5804106aab7a/merged
    shm                              tmpfs      64M     0   64M   0% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/containers/5dae366803fd3f47b9d78ea4cf096e9779df2dbe1b352c66e1d727bf549eba60/mounts/shm
    overlay                          overlay   142G   50G   92G  36% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/overlay2/e9eb1c995db9a32005915f8eb896e1d899d578d6e496b9e05414e606a36c0295/merged
    overlay                          overlay   142G   50G   92G  36% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/overlay2/3758abb1b3ca9890e45a61c03c66c2f9a900348d19c7a15e0b144b6658184b87/merged
    shm                              tmpfs      64M   16K   64M   1% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/containers/2e63b7911884b143e3d1e3e65c0d4c9199d3f14ce33c3c1a4ba00f39c6974576/mounts/shm
    shm                              tmpfs      64M     0   64M   0% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/containers/d9d9f53ac02a4fd278c0543fb40ee2d22d76ab02a7c43286ea355016282c403a/mounts/shm
    overlay                          overlay   142G   50G   92G  36% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/overlay2/1e12bd581da4170e88667ae5a3952414c64fd1abe65dd0e25d1d9674ea5e1866/merged
    shm                              tmpfs      64M     0   64M   0% /store/docker-data/engine/VMware-42-06-5b-f2-c1-82-ab-93-ad-5e-c8-4e-e9-8d-00-ed/containers/7c47e4e1901136bad28cd76381d004049e837cea41e8290b627beff911080e22/mounts/shm
    tmpfs                            tmpfs     3.2G     0  3.2G   0% /run/user/0
    
  3. Lastly, you can run /opt/qradar/support/deployment_info.sh. This script collects information about every system in the deployment, including disk space usage, hardware, appliance type, and serial number, into a CSV file.
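To scan the output of all_servers.sh for partitions that are approaching the shutdown threshold, a small filter can be appended (a sketch; the 90% cutoff is an arbitrary early-warning value, not a QRadar setting, and the host banner lines are dropped from the output):

    # Run df on every managed host and print only partitions at or above 90% usage.
    # With 'df -Th', column 6 is Use%; the header row fails the numeric test and is skipped.
    /opt/qradar/support/all_servers.sh -C -k "df -Th" | awk '$6 ~ /%$/ { use = $6; sub(/%/, "", use); if (use + 0 >= 90) print }'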

Resolving The Problem

For more information about how to reduce storage usage to under 95%, see Resolving disk usage issues.
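While you work through that procedure, a generic first step is to find what is consuming the affected partition. A minimal sketch (this is a general Linux technique, not a step from the linked document; /store is used as an example because it is the partition flagged in the error above):

    # Summarize the ten largest top-level consumers of the /store partition.
    # -x keeps du on the same file system; -s prints one total per directory.
    du -xsh /store/* 2>/dev/null | sort -rh | head -n 10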


 


Document Information

Modified date:
12 June 2023

UID

swg21690477