TPF : Library : Newsletters

It's on a Need-to-Know Basis

Mark Gambino, IBM TPF Development

Perception is not reality, or perhaps reality is not perception. The Skiing On Snow (SOS) travel agency specializes in ski vacations. The majority of the customers who call SOS request any place that has immediate availability. For large groups, SOS carefully balances the requests across the available ski locations in an effort to not overload any one location. For individual customers who do not have a preference, SOS selects a location using a round-robin approach. There are some stubborn customers who request a specific location and SOS must reject the request if that location is not available (no substitutions are allowed in this instance).

In many ways the SOS travel agency performs the same functions that the Advanced Peer-to-Peer Networking (APPN) control point (CP) does in a TPF loosely coupled complex. When most users log on to a processor shared TPF application, they do not know or care on which TPF processor the session gets set up. From their perspective, users see one big TPF complex and, as long as the session comes up, they are happy. Other users log on to processor unique TPF applications; therefore, the sessions must be assigned to a specific host and, if that host is not active, the logon request is rejected. These users view the TPF complex as individual hosts rather than one big TPF complex. So how does TPF present this dual image to the different sets of users? The answer is with the TPF implementation of APPN. Judging from the number and types of questions that have come up recently, there is a need to know more about the TPF APPN design.

Figure 1 shows a four-way TPF loosely coupled complex. TPFA, TPFB, and TPFC each have two active APPN links, which are identified by their transmission group (TG) number. TPFD is an inactive processor. The CP-CP sessions currently reside on TPFA. Applications JEDI, STAR, and WARS are processor shared. The control point (whose name is OBE1 in this example) is always processor shared. Application YODA is processor unique and resides only on TPFB. Application SOLO is also processor unique and resides only on TPFD.


Figure 1

Figure 1 shows the TPF view of the network. The remainder of the APPN network, however, views the entire TPF complex as a single node, as shown in Figure 2.


Figure 2

Because the network views the TPF complex as a single node, all session requests come to one focal point, which is the TPF host that has the CP-CP sessions. Assume remote LU DARTH.VADER logs on to application JEDI and gets assigned to TPFA. That session ends, DARTH.VADER logs back on, and this time the session is set up on TPFB. No matter what powerful force the remote LU may possess, it has no idea that its current session and previous session were connected to different physical TPF processors. Next, assume remote LU YOUNG.LUKE searches for application YODA. This session must be established on TPFB because YODA resides only on TPFB. If remote LU JABBA.THEHUT requests a session with application SOLO, the request will be rejected because SOLO resides only on TPFD and that processor is not active.

To understand how all of this is accomplished, we need to examine the blueprints of the death star . . . I mean the blueprints of the APPN components in the TPF system. There are four major components involved:

  1. The control point, which is responsible for building and processing APPN control information.
  2. CP-CP sessions, over which TPF sends and receives APPN control information to and from the Network.
  3. APPN interprocessor communications (IPC), which sends and receives APPN control information to and from other TPF hosts in the loosely coupled complex.
  4. Systems Network Architecture (SNA) session services, which sends and receives all traffic on the application (LU-LU) session, including the BIND, user data, and UNBIND.
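
To make the division of labor concrete, here is a minimal sketch of the four components as Python classes. Every class and method name is a hypothetical placeholder for the role described above, not a real TPF interface.

```python
# Hypothetical sketch of the four APPN components on a single TPF host.
# None of these names are real TPF APIs; they only mirror the list above.

class ControlPoint:
    """Builds and processes APPN control information (LOCATE commands)."""
    def process_locate(self, locate): ...
    def build_locate_reply(self, session): ...

class CpCpSessions:
    """Carry APPN control information to and from the Network.
    Active on exactly one host in the complex at any point in time."""
    def send_conwinner(self, msg): ...
    def receive_conloser(self): ...

class AppnIpc:
    """Relays APPN control information between TPF hosts in the complex."""
    def send_to_host(self, host, msg): ...

class SnaSessionServices:
    """Handles all LU-LU session traffic: BIND, user data, and UNBIND."""
    def build_bind(self, session): ...
```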

Figure 3 shows the interaction of the different components in a two-way TPF complex. The CP-CP Sessions can reside on only one TPF host at any point in time; however, the CP function resides on all TPF hosts. By separating the CP function from the physical CP-CP Sessions, every TPF host can perform the CP functions.


Figure 3

In Figure 3, the CP-CP sessions reside on TPFA. To understand what each component does, we will walk through sample session activation sequences. Figure 4 shows an example of a remote secondary logical unit (SLU) logging on to a TPF application, where the session is set up on TPFA. The steps are as follows:

  1. The LOCATE request is received by TPFA over the Conloser CP-CP Session.
  2. The LOCATE request is passed to the CP function for processing.
  3. Because a new session is starting, the UAPN user exit is invoked to select the TPF host for this session. TPFA is selected.
  4. Because this host (TPFA) was selected, the Process LOCATE component is called to continue the processing.
  5. In this example, we are assuming that the LOCATE request contains all of the necessary information; therefore, a call to SNA Session Services is made to build the BIND request.
  6. The BIND request is sent to the remote LU.
  7. The BIND response is received by SNA Session Services on TPFA.
  8. After the BIND response is processed, control is passed to the Build LOCATE component of the CP.
  9. The LOCATE reply is built and passed to Send APPN Message.
  10. Because the CP-CP Sessions reside on this host, the LOCATE reply is queued directly on the Conwinner CP-CP Session.
  11. The LOCATE reply is sent to the Network.


Figure 4
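
The steps above can be sketched as a simple trace. This is an illustrative model only (the function, host names, and step text are hypothetical), showing that when the UAPN exit selects the local host, the entire flow stays on TPFA:

```python
# Hypothetical trace of the Figure 4 sequence: the LOCATE arrives on the
# host that also owns the CP-CP sessions, so no interprocessor hop is needed.

def setup_session_same_host(uapn_select, exchange_bind):
    """Return the ordered steps when the UAPN exit picks the local host."""
    steps = ["LOCATE received over Conloser CP-CP session",   # step 1
             "LOCATE passed to CP function"]                  # step 2
    host = uapn_select()                                      # step 3: UAPN exit
    if host == "TPFA":                                        # step 4: local host
        steps.append("Process LOCATE on TPFA")
        steps.append("SNA Session Services builds BIND")      # step 5
        rsp = exchange_bind()                                 # steps 6-7: BIND out, response in
        steps.append(f"BIND response {rsp} processed")        # step 8
        steps.append("Build LOCATE reply")                    # step 9
        steps.append("reply queued on Conwinner CP-CP session")  # step 10
        steps.append("LOCATE reply sent to the Network")      # step 11
    return steps

trace = setup_session_same_host(lambda: "TPFA", lambda: "+RSP")
```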

Figure 5 shows a remote SLU logging on to a TPF application, where the session is set up on TPFB. In the figure:

  1. The LOCATE request is received by TPFA over the Conloser CP-CP Session.
  2. The LOCATE request is passed to the CP function for processing.
  3. Because a new session is starting, the UAPN user exit is invoked to select the TPF host for this session. TPFB is selected.
  4. Because a different host (TPFB) was selected, TPFA passes the LOCATE request to the APPN IPC component.
  5. APPN IPC on TPFA passes the LOCATE request to TPFB, and APPN IPC on TPFB receives the LOCATE request from TPFA.
  6. APPN IPC on TPFB passes the LOCATE request to the CP for processing.
  7. SNA Session Services is called and builds the BIND request.
  8. TPFB sends the BIND request to the remote LU.
  9. SNA Session Services on TPFB receives the BIND response.
  10. After the BIND response is processed, control is passed to the Build LOCATE component of the CP on TPFB.
  11. The LOCATE reply is built and passed to Send APPN Message on TPFB.
  12. Because the CP-CP Sessions reside on a different host, APPN IPC is called to pass the LOCATE reply to the host that has the CP-CP Sessions (TPFA).
  13. APPN IPC on TPFA receives the LOCATE reply from TPFB.
  14. APPN IPC on TPFA queues the LOCATE reply on the Conwinner CP-CP Session.
  15. The LOCATE reply is sent to the Network.


Figure 5
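
The cross-host case can be sketched the same way. Again, this is only an illustrative model with hypothetical names; it highlights that the control flow detours through APPN IPC in both directions because the CP-CP sessions live on TPFA while the LU-LU session belongs to TPFB:

```python
# Hypothetical trace of the Figure 5 sequence: UAPN selects TPFB, so the
# LOCATE is relayed over APPN IPC and the reply returns the same way.

def setup_session_other_host(uapn_select, exchange_bind):
    """Return the ordered steps when the UAPN exit picks a remote host."""
    steps = ["TPFA: LOCATE received over Conloser CP-CP session"]  # steps 1-2
    host = uapn_select()                                           # step 3: UAPN exit
    steps.append(f"TPFA: APPN IPC forwards LOCATE to {host}")      # steps 4-5
    steps.append(f"{host}: CP processes LOCATE")                   # step 6
    rsp = exchange_bind()                                          # steps 7-9: BIND out, response in
    steps.append(f"{host}: BIND response {rsp}; build LOCATE reply")  # steps 10-11
    steps.append(f"{host}: APPN IPC returns reply to TPFA")        # steps 12-13
    steps.append("TPFA: reply queued on Conwinner and sent to the Network")  # steps 14-15
    return steps

trace = setup_session_other_host(lambda: "TPFB", lambda: "+RSP")
```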

In the previous example, the LU-LU session was established on TPFB even though the CP-CP Sessions reside on TPFA. Only the control information (LOCATE commands) flows through TPFA when the session is started. After that, all of the data for the LU-LU session flows directly between TPFB and the remote LU. If the CP-CP Sessions fail after the LU-LU session is established, the LU-LU session on TPFB is unaffected: CP-CP Sessions are needed only to start new LU-LU sessions, and active LU-LU sessions remain active.

More complicated session setup examples have multiple LOCATE commands flowing either because the original LOCATE request did not contain all the necessary information or because the route suggested by the network is not acceptable. Regardless of how many LOCATE commands flow, the basic principles are the same. All the LOCATE commands are sent to and received from the Network on one TPF host; however, a LOCATE command is built or processed on the TPF host where the LU-LU session will reside. The only part of the CP component that knows on which host the CP-CP sessions reside is Send APPN Message. This is the common output interface for all APPN control information and it processes as follows:

If the CP-CP Sessions reside on this host, queue the APPN message on the Conwinner CP-CP session. Otherwise, pass the APPN message via APPN IPC to the host that has the CP-CP Sessions to be sent.
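
That routing rule is small enough to sketch directly. A minimal model (function and parameter names are hypothetical, not TPF code):

```python
# Sketch of the Send APPN Message rule: this is the only CP component that
# needs to know where the CP-CP sessions reside.

def send_appn_message(msg, local_host, cp_cp_host, conwinner_queue, ipc_queue):
    if local_host == cp_cp_host:
        # CP-CP sessions are here: queue directly on the Conwinner session.
        conwinner_queue.append(msg)
    else:
        # Relay the message to the host that owns the CP-CP sessions.
        ipc_queue.append((cp_cp_host, msg))

conwinner, ipc = [], []
send_appn_message("LOCATE reply", "TPFA", "TPFA", conwinner, ipc)  # local case
send_appn_message("LOCATE reply", "TPFB", "TPFA", conwinner, ipc)  # remote case
```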

For inbound traffic, the LOCATE Received component of the CP is invoked only on the host that has the CP-CP Sessions and is the first component called when a LOCATE of any kind is received from the network. If this is a new logon request, the UAPN user exit is called to select the TPF host (and, optionally, the link to use) for the new session. For sessions with processor unique applications, the host is already selected; therefore, UAPN is called to select a link. When a LOCATE is received and it is not the first LOCATE to flow for this session request, UAPN is not called and, instead, an internal table is examined to find out which TPF host owns this LU-LU session. For all LOCATE commands received, LOCATE Received determines which TPF host will process the LOCATE command. Call that host TPFx:

If TPFx is this TPF host, pass control to Process LOCATE on this host. Otherwise, pass the LOCATE command via APPN IPC to TPFx, which will then pass the LOCATE command to the Process LOCATE component on TPFx to be processed.
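
The inbound dispatch can be sketched the same way. In this illustrative model (all names hypothetical), the UAPN exit is consulted only for the first LOCATE of a new logon; later LOCATEs for the same session look up the owning host in an internal table:

```python
# Sketch of the LOCATE Received dispatch, which runs only on the host
# that has the CP-CP sessions.

def locate_received(session_id, is_new_logon, local_host, session_table, uapn_select):
    if is_new_logon:
        owner = uapn_select(session_id)        # UAPN exit picks host (and link)
        session_table[session_id] = owner
    else:
        owner = session_table[session_id]      # host already owns this session
    if owner == local_host:
        return "Process LOCATE locally"
    return f"forward via APPN IPC to {owner}"  # processed on the owning host
```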

By keeping information on a need-to-know basis, most of the CP functions within the TPF system do not need to know anything about the location of the CP-CP Sessions. By presenting itself as a single APPN node, the TPF complex allows users to view TPF as a single image. Only when a session needs to be set up on a specific host does the TPF system tell the Network additional need-to-know information: select a route from a subset of the available routes, but do so only for this one session. This approach gives the TPF system the flexibility to meet the needs of different types of users and is not as harsh as an evil imperial network design that limits you to one, and only one, network view.