[z/OS]

Overview of request flow prioritization

With Intelligent Management, you can define performance goals and bind them to specific subsets of the incoming traffic. The on demand router (ODR) and its associated autonomic managers support business goals in times of high load by making smart workload management decisions about the work that flows through the ODR. Not all the work in your configuration is equally important, so the ODR forwards different flows of requests more or less quickly to achieve the best balanced result and maintain quality of service.

Role of the ODR

The ODR is a server that acts as an HTTP proxy or a Session Initiation Protocol (SIP) proxy. An ODR contains the autonomic request flow manager (ARFM), which prioritizes inbound traffic according to the service policy configuration and protects downstream servers from being overloaded. Traffic is managed to achieve the best balanced performance results, considering the configured service policies and the offered load. For an inbound User Datagram Protocol (UDP) or SIP message, the ODR can route the message to another ODR to properly check for and handle UDP retransmissions.
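The overload-protection idea can be illustrated with a minimal, hypothetical sketch. The class below is not part of any WebSphere API; it simply caps the number of requests forwarded concurrently to a downstream server and makes the rest wait, which is conceptually what ARFM does when it protects back-end servers from the offered load.

import java.util.concurrent.Semaphore;

// Hypothetical illustration only: caps concurrent downstream requests,
// roughly analogous to how ARFM keeps back-end servers from being overloaded.
public class DownstreamLimiter {
    private final Semaphore permits;

    public DownstreamLimiter(int maxConcurrentRequests) {
        this.permits = new Semaphore(maxConcurrentRequests, true); // fair ordering
    }

    // Blocks until a slot is free, forwards the request, then releases the slot.
    public <T> T forward(RequestCall<T> call) throws InterruptedException {
        permits.acquire();
        try {
            return call.execute();
        } finally {
            permits.release();
        }
    }

    // Stand-in for the actual act of sending a request downstream.
    public interface RequestCall<T> {
        T execute();
    }
}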

The on demand configuration (ODC) component allows the ODR to sense its environment. ODC dynamically configures the routing rules at run time so that the ODR can accurately route traffic to the application servers. An ODR can route HTTP requests to WebSphere® Application Server Network Deployment servers and to servers that are not running WebSphere software. The ODR, like the Web server plug-in for WebSphere Application Server, uses session affinity for routing work requests. After a session is established on a server, later work requests for the same session go to the original server, which maximizes cache usage and reduces queries to back-end resources.
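As an illustration of affinity routing, the following hypothetical sketch (not the ODR implementation) pins each session ID to the server that first handled it and falls back to a simple round-robin choice for new sessions.

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of session-affinity routing; not the ODR implementation.
public class AffinityRouter {
    private final List<String> servers;                       // candidate server endpoints
    private final Map<String, String> sessionToServer = new ConcurrentHashMap<>();
    private final AtomicInteger next = new AtomicInteger();

    public AffinityRouter(List<String> servers) {
        this.servers = servers;
    }

    // Returns the server bound to the session, or picks one round-robin for a new session.
    public String route(String sessionId) {
        return sessionToServer.computeIfAbsent(sessionId,
                id -> servers.get(Math.floorMod(next.getAndIncrement(), servers.size())));
    }
}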

Service policies

A service policy is a user-defined categorization that is assigned to potential work as an attribute that is read by the ARFM. You can use a service policy to classify requests based on request attributes, including the URI, the client name and address, and the user ID or group. By configuring service policies, you apply varying levels of importance to the actual work. You can use multiple service policies to deliver differentiated services to different categories of requests. Service policy goals can differ in their performance targets as well as their importance levels.
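A minimal, hypothetical sketch of the classification idea follows: requests are matched against ordered rules and the first match yields a service policy name. The class and rule structure below are illustrative only, not a WebSphere API, and match on URI prefix alone for brevity.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration of classifying requests into service policies.
public class ServicePolicyClassifier {
    // Ordered rules: URI prefix -> service policy name (first match wins).
    private final Map<String, String> uriRules = new LinkedHashMap<>();
    private final String defaultPolicy;

    public ServicePolicyClassifier(String defaultPolicy) {
        this.defaultPolicy = defaultPolicy;
    }

    public void addRule(String uriPrefix, String policyName) {
        uriRules.put(uriPrefix, policyName);
    }

    // Classify by URI; a real classifier would also consider client address, user ID, or group.
    public String classify(String uri) {
        for (Map.Entry<String, String> rule : uriRules.entrySet()) {
            if (uri.startsWith(rule.getKey())) {
                return rule.getValue();
            }
        }
        return defaultPolicy;
    }
}

For example, a rule that maps a hypothetical /checkout URI prefix to a high-importance policy while leaving browsing traffic on the default policy would deliver differentiated service to purchase requests.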

The autonomic request flow manager (ARFM)

The ARFM exists in the ODR and controls request prioritization. The ARFM contains the following components, which work together to properly prioritize incoming requests; a sketch of the prioritized dispatch idea follows the list:

  • A compute power controller per target cell, that is, per cell to which some ARFM gateway directly sends work. This controller is an HAManagedItem that can run in any node agent, ODR, or deployment manager.
  • A gateway per used combination of protocol family, proxy process, and deployment target. A gateway runs in its proxy process. For HTTP and SIP, the proxy processes are the on demand routers; for Java™ Message Service (JMS) and IIOP, the proxy processes are the WebSphere application servers.
  • A work factor estimator per target cell. This is an HAManagedItem that can run in any node agent, ODR, or deployment manager.
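To make the gateway's role concrete, here is a minimal, hypothetical sketch that dispatches queued requests in importance order. It assumes simple integer importance ranks; the real ARFM gateway works from configured service policy goals rather than a bare priority queue.

import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

// Hypothetical sketch of importance-ordered dispatch; the real ARFM gateway
// uses configured service policy goals, not a bare priority queue.
public class PrioritizedDispatcher {

    public static final class QueuedRequest {
        final String id;
        final int importance;   // lower number = more important (for example, 0 = platinum)
        final long arrivalNanos;

        public QueuedRequest(String id, int importance) {
            this.id = id;
            this.importance = importance;
            this.arrivalNanos = System.nanoTime();
        }
    }

    // Order by importance first, then by arrival time within the same importance.
    private final PriorityBlockingQueue<QueuedRequest> queue = new PriorityBlockingQueue<>(
            64,
            Comparator.comparingInt((QueuedRequest r) -> r.importance)
                      .thenComparingLong(r -> r.arrivalNanos));

    public void enqueue(QueuedRequest request) {
        queue.put(request);
    }

    // Blocks until the most important queued request is available.
    public QueuedRequest nextToForward() throws InterruptedException {
        return queue.take();
    }
}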

Dynamic workload management (DWLM)

Dynamic workload management (DWLM) is a feature of the ODR that applies the same principles as workload management (WLM), such as routing based on a weight system, which establishes a prioritized routing system. With WLM, you manually set static weights in the administrative console. With DWLM, the system autonomically sets the WLM routing weights and modifies them dynamically to stay current with the business goals. DWLM can be turned off. If you intend to use the automatic operating modes for the components of dynamic operations, setting a static WLM weight on any of your dynamic clusters can prevent the on demand aspects of the product from functioning properly. The WebSphere Application Server Network Deployment WLM is not limited to the on demand routers; it also applies to IIOP traffic when the client uses the WebSphere Application Server Java Development Kit (JDK) and object request broker (ORB) and the prefer-local routing optimization is not employed.
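The weight-based routing idea can be sketched as follows. This is a hypothetical illustration, not the WLM implementation: it assumes integer weights that some external controller updates at run time, loosely analogous to DWLM adjusting the WLM routing weights.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of weighted routing: servers with higher weights receive
// proportionally more requests, and weights can be updated at run time, loosely
// mirroring how DWLM adjusts routing weights dynamically.
public class WeightedRouter {
    private final Map<String, Integer> weights = new ConcurrentHashMap<>();

    // Set or update a server's weight (for example, from a monitoring loop).
    public void setWeight(String server, int weight) {
        weights.put(server, Math.max(0, weight));
    }

    // Pick a server with probability proportional to its current weight.
    public String route() {
        Map<String, Integer> snapshot = Map.copyOf(weights);
        int total = snapshot.values().stream().mapToInt(Integer::intValue).sum();
        if (total <= 0) {
            throw new IllegalStateException("No server has a positive weight");
        }
        int pick = ThreadLocalRandom.current().nextInt(total);
        for (Map.Entry<String, Integer> entry : snapshot.entrySet()) {
            pick -= entry.getValue();
            if (pick < 0) {
                return entry.getKey();
            }
        }
        throw new AssertionError("unreachable with a consistent snapshot");
    }
}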

The following diagram shows equal volumes of requests flowing into the ODR. Platinum, gold, and bronze depict a descending order of importance. After the work is categorized, prioritized, and queued, a larger volume of the more important work (platinum) is processed, while a smaller volume of the less important work (bronze) is queued. Because bronze work is delayed rather than discarded, the long-term average rate of bronze coming out of the ODR is not less than the long-term average rate of bronze going in. Dynamic operations keep the work within the target time allotted for completion.

Figure 1. Flow of requests into and through the on demand router
Platinum, gold, and bronze requests flow through the on demand router, which categorizes, queues, and routes these requests according to their defined importance.