Output formats for message flow accounting and statistics data
When you collect message flow statistics, you can choose the output destination for the data.
Before message flow accounting and statistics can be collected, you must ensure that the publication of events has been enabled and a pub/sub broker has been configured. For more information, see Configuring the publication of event messages and Configuring the built-in MQTT pub/sub broker.
If you start the collection of message flow statistics data by using the web user interface, the statistics are emitted in JSON format in addition to any other formats that are already being emitted. If the output format was previously not specified and therefore defaulted to the user trace, the newly specified format replaces the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.
If you use the mqsichangeflowstats command to explicitly specify the required output formats, the formats specified by the command replace the formats that are currently being emitted for the message flow (they are not added to them).
If you stop statistics collection from the web user interface, all output formats are turned off. If statistics collection is subsequently restarted by using the mqsichangeflowstats command, the output format is reset to the default value of user trace, unless other formats are specified on the command. However, if statistics collection is restarted by using the web user interface, data is collected in JSON format.
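The format-selection rules above can be sketched as a small model (hypothetical, not a product API): the explicitly chosen formats are tracked as a set, and the user trace default applies only when that set is empty.

```python
# Hypothetical model of the output-format selection rules; the
# integration node's explicitly chosen formats are held as a set.

def effective(explicit):
    """The formats actually emitted: user trace is the default
    when no format has been explicitly specified."""
    return explicit if explicit else {"usertrace"}

def start_from_web_ui(explicit):
    """Web UI start: JSON is added to explicitly chosen formats;
    an implicit (defaulted) user trace is replaced, because it was
    never in the explicit set."""
    return explicit | {"json"}

def change_flow_stats(explicit, formats):
    """mqsichangeflowstats: specified formats replace, not extend,
    the current set; with none specified, the default applies again."""
    return set(formats)

def stop_from_web_ui(explicit):
    """Web UI stop: all output formats are turned off."""
    return set()
```

For example, starting from the web UI with no prior explicit formats yields JSON only, while starting with user trace explicitly set yields user trace plus JSON.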
Statistics data is written to the specified output location in the following circumstances:
- When the archive data interval expires.
- When the snapshot interval expires.
- When the integration node shuts down. Any data that has been collected by the integration node, but has not yet been written to the specified output destination, is written during shutdown. It might therefore represent data for an incomplete interval.
- When any part of the integration node configuration is redeployed. Redeployed configuration data might contain an updated configuration that is not consistent with the existing record structure (for example, a message flow might include an additional node, or an integration server might include a new message flow). Therefore the current data, which might represent an incomplete interval, is written to the output destination. Data collection continues for the redeployed configuration until you change data collection parameters or stop data collection.
- When data collection parameters are modified. If you update the parameters that you have set for data collection, all data that is collected for the message flow (or message flows) is written to the output destination to retain data integrity. Statistics collection is restarted according to the new parameters.
- When an error occurs that terminates data collection. You must restart data collection yourself in this case.
User trace entries
You can specify that the data that is collected is written to the user trace log. The data is written even when trace is switched off.
If no output destination is specified for accounting and statistics, the default is the user trace log. If one or more output formats are subsequently specified, the specified formats replace the default, and the data is no longer emitted to the user trace. However, if user trace has been explicitly specified, any additional formats that are selected subsequently are emitted in addition to the user trace.
For information about the user trace entries, see User trace entries for message flow accounting and statistics data.
JSON publication
You can specify that the data that is collected is published in JSON format, which is available for viewing in the web user interface. If statistics collection is started through the web user interface, statistics data is emitted in JSON format in addition to any other formats that are already being emitted.
- For publications on an MQ pub/sub broker:
$SYS/Broker/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name/libraries/library_name/messageflows/message_flow_name
- For publications on an MQTT pub/sub broker:
IBM/IntegrationBus/integrationNodeName/Statistics/JSON/SnapShot/integrationServerName/applications/application_name/libraries/library_name/messageflows/message_flow_name
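The topic strings above follow a fixed structure, so building one for a given broker and resource can be sketched as a small helper (illustrative only; the part names follow the documented templates):

```python
# Sketch: assemble a JSON-statistics publication topic from its parts,
# following the documented templates for MQ and MQTT pub/sub brokers.

def json_stats_topic(broker, node, server, application, library, flow):
    # The two brokers differ only in the topic prefix.
    prefix = {"mq": "$SYS/Broker", "mqtt": "IBM/IntegrationBus"}[broker]
    return (f"{prefix}/{node}/Statistics/JSON/SnapShot/{server}"
            f"/applications/{application}/libraries/{library}"
            f"/messageflows/{flow}")
```

For example, `json_stats_topic("mqtt", "IBNODE", "default", "MyApp", "MyLib", "Flow1")` yields the MQTT topic for snapshot data from that flow (the application, library, and flow names here are hypothetical).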
For information about the JSON publication, see JSON publication for message flow accounting and statistics data.
XML publication
You can specify that the data that is collected is published in XML format. The publications are available to subscribers that are registered in the integration node network and subscribe to the relevant topic:
- For publications on an MQ pub/sub broker:
$SYS/Broker/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
- For publications on an MQTT pub/sub broker:
IBM/IntegrationBus/integrationNodeName/StatisticsAccounting/record_type/integrationServerName/message_flow_label
Subscribers can include filter expressions to limit the publications that they receive. For example, they can choose to see only snapshot data, or to see data that is collected for a single integration node. Subscribers can specify wild cards (+ and #) to receive publications that refer to multiple resources. Use + to receive resources on one topic level, and # to receive resources across multiple topic levels.
- Register the following topic for the subscriber to receive data for all message flows running on an integration node named IBNODE:
$SYS/Broker/IBNODE/StatisticsAccounting/#
or
IBM/IntegrationBus/IBNODE/StatisticsAccounting/#
- Register the following topic for the subscriber to receive only archive statistics that relate to message flow Flow1 running on integration server default on integration node IBNODE:
$SYS/Broker/IBNODE/StatisticsAccounting/Archive/default/Flow1
or
IBM/IntegrationBus/IBNODE/StatisticsAccounting/Archive/default/Flow1
- Register the following topic for the subscriber to receive both snapshot and archive data for message flow Flow1 running on integration server default on integration node IBNODE:
$SYS/Broker/IBNODE/StatisticsAccounting/+/default/Flow1
or
IBM/IntegrationBus/IBNODE/StatisticsAccounting/+/default/Flow1
For help with registering your subscriber, see Message display, test and performance utilities SupportPac (IH03).
For information about the XML publication, see XML publication for message flow accounting and statistics data.
CSV records
You can specify that the data that is collected is published in comma-separated value (.csv) format. Snapshot and archive data records are written to output files, which include a header with the field name. The fields for averages are optional, and are written only if the averages property of the statistics file writer is set to true.
One line is written for each message flow that is producing data for the time period that you choose. For example, if MessageFlowA and MessageFlowB are both producing archive data over a period of 60 minutes, both message flows produce a line of statistics data every 60 minutes.
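Because each record is one CSV line under a header row, the files can be read with any standard CSV reader. A minimal sketch using Python's csv module follows; the column names here are illustrative placeholders, not the product's actual field names, which come from the header that the statistics file writer emits:

```python
# Sketch: read statistics CSV output with the standard csv module.
# The field names and values below are invented for illustration.
import csv
import io

sample = """MessageFlowName,IntervalStart,TotalInputMessages
MessageFlowA,2024-01-01 10:00:00,1500
MessageFlowB,2024-01-01 10:00:00,320
"""

# DictReader keys each row by the header line, so renamed or optional
# columns (such as the averages fields) are handled by name, not position.
rows = list(csv.DictReader(io.StringIO(sample)))
totals = {r["MessageFlowName"]: int(r["TotalInputMessages"]) for r in rows}
```

In real use, the file object returned by `open()` on the output file would replace the in-memory sample.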
For more information about the CSV records, see CSV file format for message flow accounting and statistics data.
SMF records
On z/OS, you can specify that the data collected is written to SMF. Accounting and statistics data uses SMF type 117 records. SMF supports the collection of data from multiple subsystems, and you might therefore be able to synchronize the information that is recorded from different sources.
To interpret the information that is recorded, use any utility program that processes SMF records.
For information about the SMF records, see z/OS SMF records for message flow accounting and statistics data.