Message flow nodes

A message flow node is a processing step in a message flow. It can be a built-in node, a user-defined node, or a subflow node.

A message flow node receives a message, performs a set of actions against the message, and optionally passes the original message, and zero or more other messages, to the next node in the message flow.

A message flow node has a fixed number of input and output points that are known as terminals. You can make connections between the terminals to define the routes that a message can take through a message flow. Message flow nodes are displayed in the node palette that is associated with the Message Flow editor. The palette is arranged in categories, which group together nodes that provide related processing; for example, transformation.

Input nodes do not have input terminals. The message flow starts when a message is retrieved from an input source; for example, a WebSphere® MQ queue. The message flow ends when zero or more output messages have been sent by one or more output nodes, and control returns to the input node. The input node either commits or rolls back the transaction. Input and output nodes can be protocol-specific, so that they can interact with particular systems such as web services.

Most nodes are processing nodes that you can include between your input and output nodes and connect together to define the flow of control. These nodes typically transform a message from one format to another, route a message along a particular path, or provide more complex operations such as aggregation or filtering.

You can configure a node by setting or changing the values for its properties. Some nodes have mandatory properties, for which you must set a value. Other properties must have a value, but are assigned a default value that you can leave unchanged. The remaining properties are optional properties; no value is required.

When you develop a message flow, how you set the properties of the nodes in that flow influences how the messages are processed by that flow. For example, by setting properties that define input and output WebSphere MQ queue names, you determine where the message flow receives the message from, and where it delivers the message.
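
As an illustration of how these properties surface in code, a user-defined node written in Java (user-defined nodes are described later in this topic) typically exposes each property as a getter and setter pair on the node class, with the initial value of the backing field providing the default. The following fragment is only a sketch; the property names are illustrative, and you should check the user-defined node API documentation for the exact conventions.

    // Fragment of a Java user-defined node class; the names are illustrative.
    // The getter and setter pair is what makes each field visible as a node
    // property, and the field initializer provides the default value.
    private String queueName = "AUDIT.IN";   // property with a default that you can override
    private String comment;                  // optional property; no value is required

    public String getQueueName() { return queueName; }
    public void setQueueName(String queueName) { this.queueName = queueName; }

    public String getComment() { return comment; }
    public void setComment(String comment) { this.comment = comment; }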

You can also configure nodes by using promoted properties: you can promote one or more node properties to become properties of the message flow that contains those nodes. You can then change these properties at the flow level, rather than having to update one or more individual nodes. You can also promote equivalent properties from more than one node to the same message flow property; for example, you might use this technique to set, at the flow level, the name of the database that all the nodes in the message flow must connect to.

Some node properties are configurable properties; that is, you can change their values when you deploy the message flow to an integration node for execution. This ability is useful if you deploy a message flow to more than one integration node and want it to behave in a slightly different way on each one. For example, when you deploy the message flow to a test integration node, you can set a configurable property to force the flow to interact with a test database. When you deploy the same message flow to a production integration node, you can set the same property to the name of a production database, without having to update the message flow itself.

Other node properties are operational properties; that is, you can control their values by using an operational policy. An operational policy enables you to define a common approach to controlling certain aspects of message flow behavior, and particular node properties such as connection credentials. You can create and update an operational policy at any time in the solution lifecycle. For more information about operational policies, see Operational policy.

The mode that your integration node is working in can affect the types of node that you can use; see Restrictions that apply in each operation mode.

You can add three types of node to your message flows:

Built-in node
A built-in node is a message flow node that is supplied by IBM® Integration Bus. The built-in nodes provide input and output, manipulation and transformation, decision making, collating requests, and error handling and reporting functions.

For information about all of the built-in nodes that are supplied by IBM Integration Bus, see Built-in nodes.

For information about the nodes that you can use to connect IBM Integration Bus to your applications, see Nodes for connectivity.

User-defined node
A user-defined node is an extension to the integration node that provides a new message flow node in addition to the nodes that are supplied with the product. A user-defined node must be written against the user-defined node API, which IBM Integration Bus provides for both the C and Java™ languages; a minimal Java sketch is shown after this list.
Subflow
A subflow is a directed graph that is composed of message flow nodes and connectors and is designed to be embedded in a message flow or in another subflow. To connect your subflow to other nodes in the main flow, you can add Input and Output nodes to the subflow. You can define subflows in one of two resource types: a .subflow file or a .msgflow file. A subflow that is defined in a .subflow file can be deployed as an individual resource. A subflow that is defined in a .msgflow file must be deployed with the main flow in which it is embedded.

A message is received by an Input node and processed according to the definition of the subflow. That processing might include storing the message through a Database node, or delivering it to another message target, for example through an MQOutput node. If required, the message can be passed through an Output node back to the main flow for further processing.

The subflow, when it is embedded in a main flow, is represented by a subflow node, which has a unique icon. The icon is displayed with the correct number of terminals to represent the Input and Output nodes that you included in the subflow definition.

The most common use of a subflow is to provide processing that is required in many places within a message flow, or is to be shared between several message flows. For example, you might code some error processing in a subflow, or create a subflow to provide an audit trail (storing the entire message and writing a trace entry).

For more information, see Subflows.
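
To make the user-defined node entry above more concrete, the following is a minimal sketch of a Java user-defined node. The class, terminal, and node names are illustrative, and you should check the exact classes and method signatures against the Java user-defined node API documentation.

    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbInputTerminal;
    import com.ibm.broker.plugin.MbMessage;
    import com.ibm.broker.plugin.MbMessageAssembly;
    import com.ibm.broker.plugin.MbNode;
    import com.ibm.broker.plugin.MbNodeInterface;

    // Minimal pass-through node: it copies the incoming message and sends the
    // copy to whatever is connected to its "out" terminal.
    public class PassThroughNode extends MbNode implements MbNodeInterface {

        public PassThroughNode() throws MbException {
            // The node's fixed set of terminals; connections between terminals
            // are made later, in the message flows that use the node.
            createInputTerminal("in");
            createOutputTerminal("out");
            createOutputTerminal("failure");
        }

        // The name under which the node is registered with the integration node.
        public static String getNodeName() {
            return "PassThroughNode";
        }

        // Called once for each message that arrives on the input terminal.
        public void evaluate(MbMessageAssembly assembly, MbInputTerminal inTerminal)
                throws MbException {
            // Work on a copy so that nodes that ran earlier in the flow are not
            // affected by any changes made here.
            MbMessage copy = new MbMessage(assembly.getMessage());
            MbMessageAssembly outAssembly = new MbMessageAssembly(assembly, copy);
            try {
                getOutputTerminal("out").propagate(outAssembly);
            } finally {
                copy.clearMessage();
            }
        }
    }

The terminals that are created in the constructor are the terminals that you can connect to other nodes when you add the node to a message flow.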

A node does not always produce an output message for every output terminal: often it produces output on a single terminal, based on the message received or the result of the node's processing. For example, a Filter node typically sends a message on either the True terminal or the False terminal, but not both.
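
The built-in Filter node makes this decision by evaluating an ESQL expression; the same pattern can be sketched for a user-defined node whose evaluate method propagates to exactly one of two output terminals. The element path and threshold in this sketch are assumptions made only for illustration.

    import com.ibm.broker.plugin.MbElement;
    import com.ibm.broker.plugin.MbException;
    import com.ibm.broker.plugin.MbInputTerminal;
    import com.ibm.broker.plugin.MbMessageAssembly;
    import com.ibm.broker.plugin.MbNode;
    import com.ibm.broker.plugin.MbNodeInterface;

    // Illustrative node that routes each message to exactly one of its two
    // output terminals, in the same spirit as the built-in Filter node.
    public class ThresholdFilterNode extends MbNode implements MbNodeInterface {

        public ThresholdFilterNode() throws MbException {
            createInputTerminal("in");
            createOutputTerminal("match");
            createOutputTerminal("nomatch");
        }

        public static String getNodeName() {
            return "ThresholdFilterNode";
        }

        public void evaluate(MbMessageAssembly assembly, MbInputTerminal inTerminal)
                throws MbException {
            // The element path is an assumption about the shape of the incoming
            // message, used here only to show a routing decision.
            MbElement amount = assembly.getMessage().getRootElement()
                    .getFirstElementByPath("/XMLNSC/Order/Amount");

            boolean aboveThreshold = amount != null
                    && Double.parseDouble(amount.getValue().toString()) > 1000.0;

            // Exactly one terminal receives the message for any given input.
            if (aboveThreshold) {
                getOutputTerminal("match").propagate(assembly);
            } else {
                getOutputTerminal("nomatch").propagate(assembly);
            }
        }
    }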

If more than one terminal of a node is connected to other nodes, the processing in the node determines the order in which the message is propagated to the connected nodes; you cannot change this order. The node sends the output message on each connected terminal in turn, but sends on the next terminal only when the processing for the current terminal has completed.

Updates to a message are never propagated to nodes that have been previously executed, only to nodes that follow the node in which the update has been made. The order in which the message is propagated to the different output terminals is determined by the integration node; you cannot change this order. The only exception to this rule is the FlowOrder node, whose terminals indicate the order in which the message is propagated to each of them.

All built-in nodes include error handling as part of their processing. If an error is detected within the node, the message is propagated to the failure terminal. What happens then depends on the structure of your message flow. You can rely on just the basic error handling that is provided by the integration node, or you can enhance your flow by adding error processing nodes and flows to provide more comprehensive failure processing. For more information about these options, see Handling errors in message flows.
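
In a user-defined node, you can follow a similar convention: catch the error and propagate the message to the failure terminal, so that any error-handling flow wired to that terminal can deal with it. A sketch of the evaluate method, reusing the skeleton shown earlier:

    public void evaluate(MbMessageAssembly assembly, MbInputTerminal inTerminal)
            throws MbException {
        try {
            // Normal processing for the node goes here (omitted).
            getOutputTerminal("out").propagate(assembly);
        } catch (MbException e) {
            // Hand the message to whatever error-handling flow is wired to the
            // failure terminal. If nothing can usefully be done here, rethrowing
            // the exception instead lets the error flow back to the input node,
            // which rolls back the transaction.
            getOutputTerminal("failure").propagate(assembly);
        }
    }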

The message flow can accept a new message for processing only when all paths through the message flow (that is, all connected nodes from all output terminals) are complete, and control returns to the input node, which commits or rolls back the transaction.