Output formats for message flow accounting and statistics data

When you collect message flow statistics, you can choose the output destination for the data.

You can select one or more of the following destinations by using the mqsichangeflowstats command. If no format is specified, accounting and statistics data is sent to the user trace log by default. For more information about specifying an output format, see the mqsichangeflowstats command.

Before message flow accounting and statistics can be collected, you must ensure that the publication of events has been enabled and a pub/sub broker has been configured. For more information, see Configuring the publication of event messages and Configuring the built-in MQTT pub/sub broker.

If you start the collection of message flow statistics data by using the web user interface, the statistics are emitted in JSON format in addition to any other formats that are already being emitted. If the output format was previously not specified and therefore defaulted to the user trace, the newly specified format replaces the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.

If you use the mqsichangeflowstats command to explicitly specify the required output formats, the formats specified by the command replace the formats that are currently being emitted for the message flow (they are not added to them).

If you stop statistics collection from the web user interface, all output formats are turned off. If statistics collection is subsequently restarted by using the mqsichangeflowstats command, the output format is reset to the default value of user trace, unless other formats are specified on the command. However, if statistics collection is restarted by using the web user interface, data is collected in JSON format.

Statistics data is written to the specified output location in the following circumstances:

  • When the archive data interval expires.
  • When the snapshot interval expires.
  • When the integration node shuts down. Any data that has been collected by the integration node, but has not yet been written to the specified output destination, is written during shutdown. It might therefore represent data for an incomplete interval.
  • When any part of the integration node configuration is redeployed. Redeployed configuration data might contain an updated configuration that is not consistent with the existing record structure (for example, a message flow might include an additional node, or an integration server might include a new message flow). Therefore the current data, which might represent an incomplete interval, is written to the output destination. Data collection continues for the redeployed configuration until you change data collection parameters or stop data collection.
  • When data collection parameters are modified. If you update the parameters that you have set for data collection, all data that is collected for the message flow (or message flows) is written to the output destination to retain data integrity. Statistics collection is restarted according to the new parameters.
  • When an error occurs that terminates data collection. You must restart data collection yourself in this case.

User trace entries

You can specify that the data that is collected is written to the user trace log. The data is written even when trace is switched off.

If no output destination is specified for accounting and statistics, the default is the user trace log. If one or more output formats are subsequently specified, the specified formats replace the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.

The data is written to one of the following locations:

Windows
If you set the work path by using the -w parameter of the mqsicreatebroker command, the location is workpath\Common\log.
If you have not specified the integration node work path, the location is C:\ProgramData\IBM\MQSI\Common\log.
Linux® and UNIX
/var/mqsi/common/log
z/OS®
/component_filesystem/log

For information about the user trace entries, see User trace entries for message flow accounting and statistics data.

JSON publication

You can specify that the data that is collected is published in JSON format, which is available for viewing in the web user interface. If statistics collection is started through the web user interface, statistics data is emitted in JSON format in addition to any other formats that are already being emitted.

The topic on which the data is published has the following structure:
  • For publications on an MQ pub/sub broker:
    $SYS/Broker/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name
    /libraries/library_name/messageflows/message_flow_name
  • For publications on an MQTT pub/sub broker:
    IBM/IntegrationBus/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name
    /libraries/library_name/messageflows/message_flow_name
The variables correspond to the following values:
integrationNodeName
The name of the integration node for which statistics are collected
integrationServerName
The name of the integration server for which statistics are collected
application_name
The name of the application for which statistics are collected
library_name
The name of the library for which statistics are collected
message_flow_name
The name of the message flow for which statistics are collected
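As an illustration, the topic string can be assembled from these component names. The following sketch is not part of the product; the node, server, application, library, and flow names in it are hypothetical examples.

```python
# Build the JSON-format snapshot statistics topic for either pub/sub broker.
# The prefix selects the broker style: "$SYS/Broker" for MQ,
# "IBM/IntegrationBus" for MQTT. All names below are hypothetical examples.
def json_stats_topic(prefix, node, server, application, library, flow):
    """Assemble the JSON statistics topic from its component names."""
    return (f"{prefix}/{node}/Statistics/JSON/SnapShot/{server}"
            f"/applications/{application}/libraries/{library}"
            f"/messageflows/{flow}")

mq_topic = json_stats_topic("$SYS/Broker", "IBNODE", "default",
                            "MyApp", "MyLib", "Flow1")
mqtt_topic = json_stats_topic("IBM/IntegrationBus", "IBNODE", "default",
                              "MyApp", "MyLib", "Flow1")

print(mq_topic)
# $SYS/Broker/IBNODE/Statistics/JSON/SnapShot/default/applications/MyApp/libraries/MyLib/messageflows/Flow1
```

A subscriber that wants the JSON publications for this flow would register on the resulting topic string for whichever broker it connects to.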

For information about the JSON publication, see JSON publication for message flow accounting and statistics data.

XML publication

You can specify that the data that is collected is published in XML format, making it available to subscribers in the integration node network that are registered on the relevant topic.

The topic on which the data is published has the following structure:
  • For publications on an MQ pub/sub broker:
    $SYS/Broker/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
  • For publications on an MQTT pub/sub broker:
    IBM/IntegrationBus/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
The variables correspond to the following values:
integrationNodeName
The name of the integration node for which statistics are collected.
record_type
Set to SnapShot or Archive, depending on the type of data to which you are subscribing. Alternatively, use + to register for both snapshot and archive data if it is being produced. These values are case sensitive; for example, snapshot data must be entered as SnapShot.
integrationServerName
The name of the integration server for which statistics are collected.
message_flow_label
The label on the message flow for which statistics are collected.

Subscribers can include filter expressions to limit the publications that they receive. For example, they can choose to see only snapshot data, or to see data that is collected for a single integration node. Subscribers can specify wild cards (+ and #) to receive publications that refer to multiple resources. Use + to receive resources on one topic level, and # to receive resources across multiple topic levels.
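The wildcard rules described above can be sketched as a small matching function. This is an illustrative model of standard MQTT-style topic matching, not product code:

```python
def topic_matches(filter_str, topic):
    """Return True if an MQTT-style topic filter matches a topic.

    '+' matches exactly one topic level; '#' matches any number of
    remaining levels and must be the last level in the filter.
    """
    f_levels = filter_str.split("/")
    t_levels = topic.split("/")
    for i, f in enumerate(f_levels):
        if f == "#":
            return True          # matches everything from here down
        if i >= len(t_levels):
            return False         # filter is longer than the topic
        if f != "+" and f != t_levels[i]:
            return False         # literal level must match exactly
    return len(f_levels) == len(t_levels)

topic = "$SYS/Broker/IBNODE/StatisticsAccounting/Archive/default/Flow1"
print(topic_matches("$SYS/Broker/IBNODE/StatisticsAccounting/#", topic))               # True
print(topic_matches("$SYS/Broker/IBNODE/StatisticsAccounting/+/default/Flow1", topic)) # True
print(topic_matches("$SYS/Broker/IBNODE/StatisticsAccounting/SnapShot/+/+", topic))    # False
```

The third call returns False because SnapShot is a literal level that does not match Archive, showing why + is needed to receive both record types.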

The following examples show the topic with which a subscriber registers to receive different sorts of data:
  • Register the following topic for the subscriber to receive data for all message flows running on an integration node named IBNODE:
    $SYS/Broker/IBNODE/StatisticsAccounting/#
    or
    IBM/IntegrationBus/IBNODE/StatisticsAccounting/#
  • Register the following topic for the subscriber to receive only archive statistics that relate to a message flow Flow1 running on integration server default on integration node IBNODE:
    $SYS/Broker/IBNODE/StatisticsAccounting/Archive/default/Flow1
    or
    IBM/IntegrationBus/IBNODE/StatisticsAccounting/Archive/default/Flow1
  • Register the following topic for the subscriber to receive both snapshot and archive data for message flow Flow1 running on integration server default on integration node IBNODE:
    $SYS/Broker/IBNODE/StatisticsAccounting/+/default/Flow1
    or
    IBM/IntegrationBus/IBNODE/StatisticsAccounting/+/default/Flow1

For help with registering your subscriber, see Message display, test and performance utilities SupportPac (IH03).

For information about the XML publication, see XML publication for message flow accounting and statistics data.

CSV records

You can specify that the data that is collected is published in comma-separated value (.csv) format. Snapshot and archive data records are written to output files, which include a header row with the field names. The fields for averages are optional, and are written only if the averages property of the statistics file writer is set to true.

One line is written for each message flow that is producing data for the time period that you choose. For example, if MessageFlowA and MessageFlowB are both producing archive data over a period of 60 minutes, both message flows produce a line of statistics data every 60 minutes.
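Because each file starts with a header row, the records can be read with any standard CSV reader. The following sketch uses hypothetical column names purely for illustration; the product's actual field list is documented separately:

```python
import csv
import io

# Hypothetical sample of a statistics CSV file: one header row, then one
# line per message flow per interval. These column names are illustrative
# examples, not the product's actual field list.
sample = """MessageFlowName,IntegrationServerName,TotalInputMessages,TotalElapsedTime
MessageFlowA,default,1200,450
MessageFlowB,default,87,90
"""

# DictReader uses the header row to key each record by field name.
rows = list(csv.DictReader(io.StringIO(sample)))
for row in rows:
    print(row["MessageFlowName"], row["TotalInputMessages"])
```

Reading by field name rather than by column position keeps the consumer robust if optional fields (such as the averages) are present in some files and absent in others.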

For more information about the CSV records, see CSV file format for message flow accounting and statistics data.

SMF records

On z/OS, you can specify that the data collected is written to SMF. Accounting and statistics data uses SMF type 117 records. SMF supports the collection of data from multiple subsystems, and you might therefore be able to synchronize the information that is recorded from different sources.

To interpret the information that is recorded, use any utility program that processes SMF records.

For information about the SMF records, see z/OS SMF records for message flow accounting and statistics data.