Communications Server

Network Administration Guide


Chapter 2. Communications Server and SNA

This chapter discusses the SNA network functions provided by Communications Server and includes the following topics:


Overview of SNA Functions

This section provides an overview of how Communications Server implements SNA on a workstation. It is not a comprehensive discussion of the SNA functions. For more detailed information on SNA, refer to the following books:

SNA defines the standards, protocols, and functions used by devices in the network (from mainframes to terminals) to communicate with one another. This communication enables these devices to transparently share information and process resources. In other words, a user at a workstation does not have to know what happens in the background to access information at a host or to communicate with another user.

An SNA network is organized as a system of nodes and links. It is important to remember that this organization is logical. SNA classifies the nodes according to their capabilities and the amount of control that they have over other nodes in the network. The node type is not necessarily associated with a specific type of hardware. The capabilities of a node can be performed by different devices. A workstation acting as a gateway can perform the same functions as a communications controller. It is even possible for a single device to contain more than one node.

The SNA network is responsible for moving data between two end users in an efficient, orderly, and reliable manner. For example, when a user at a workstation sends a message to another workstation, SNA does the following things:

These tasks are defined in SNA as separate functional layers. These layers are not part of the discussion here, but it is important to remember that the SNA layers are all part of a logical link.

As mentioned previously, the SNA nodes are classified according to their communications capabilities and the amount of control the node has over other nodes in the network. The SNA nodes are broadly classified as subarea nodes and peripheral nodes. The subarea nodes are like hubs and can communicate with the peripheral nodes and with other subarea nodes. The subarea nodes activate and control resources at the peripheral nodes. Subarea nodes are also classified as type 4 or type 5 nodes. Type 5 nodes contain a system service control point (SSCP) that provides a central control point for the type 4 nodes attached to it. A type 5 node is sometimes referred to as a host node. A peripheral node can only communicate directly with the subarea node that it is attached to. However, a peripheral node can control devices to which it is attached. For example, a cluster controller acting as a peripheral node can support the terminals that are attached to it. Peripheral nodes are also referred to as type 2 or type 2.1 nodes.

Each node contains network accessible units (NAUs) that perform control and communication functions. One of these NAUs is a physical unit (PU). The PU manages the physical resources of the node. Other NAUs called logical units (LUs) provide logical access points to the network that enable communication between users and applications at each node. The communication between the logical units is referred to as a session. Sessions not only support communication between users and applications, but also support communication between applications in order to share processing resources. Communication between applications is known as advanced program-to-program communication (APPC). APPC is a set of programming conventions and protocols that implement LU 6.2. (APPC is the name given to the LU 6.2 capability in products that implement this LU type.)


Data Link Control Profiles

Data link control (DLC) enables orderly exchanges of data between two nodes through a logical link. The DLC provides the protocols necessary for reliable delivery of basic transmission units (BTUs) between a pair of nodes in the SNA network. You must configure the appropriate Communications Server DLC profiles for a workstation to access an SNA network.

Refer to the online Tutorial for more information on configuring the appropriate DLC profiles.


Advanced Peer-to-Peer Networking

Advanced Peer-to-Peer Networking (APPN) is an extension of SNA that adds communications functions beyond those described in the previous sections. Its basic components include:

APPN Node Types

This section discusses the three types of nodes implemented by Communications Server that can participate in an APPN network:

In addition, branch extender is an extension to a network node. Although it provides network node services to the end nodes it serves, it appears to the rest of the network like an end node connected to other network nodes.

Each node is distinguished from other nodes in the network by a unique name consisting of two parts--a network ID and a local node name (also known as a control point [CP] name). The name identifies each node to all other nodes in the network. Also, the node can have multiple PU names for simultaneous access to multiple PU T4/5 hosts.

A node can be configured to be an end node or a network node, but when an end node does not have CP-CP sessions (see CP-CP Sessions) to an APPN network node, it acts as a LEN node. A LEN node does not support APPN functions.

The node types are described in more detail in the following sections. Figure 9 illustrates a sample APPN network that includes all of these node types.

Figure 9. A Portion of a Sample APPN Network. This view of network topology shows five network nodes (NNs). Three end nodes (ENs) are connected, as well as a LEN node and subarea. APPC application programs written for any node in this network can communicate with any other.


Sample APPN Network

Network Node

A network node supports its own end users; provides directory, route selection, and management services to end nodes; and performs intermediate routing of data on sessions that traverse it. The network node performs distributed searches of the network to locate partner LUs and calculates the best route from origin node to destination node based on user-specified criteria.

A network node server refers to the role of a network node in acting as the network entry point for specific end nodes attached to it. These end nodes are defined as being in its domain. For example, all directory requests regarding resources (such as LUs) in these end nodes (as well as its own resources) pass through directory services in the network node server. The network nodes are able to collect and control directory information that passes into the APPN network.

A network node provides the following:

End Node

An end node operates in a peer environment for LU-LU sessions (using LU 6.2 protocols) while providing additional APPN functions. An end node provides APPN functions such as directory services and route selection services to end users at its own node. It can participate in the APPN network by using the services of an attached network node server for session requests that involve nodes not directly connected; it does this by exchanging requests and replies for directory services with an adjacent network node (its server) using CP-CP sessions.

APPN end nodes can register their local LUs with their network node server. Because the end nodes register their LUs, the network operator at the network node server does not need to predefine the LU names for the LUs in all the attached end nodes that the network node serves.

An APPN end node can be attached to multiple network nodes, but it can only have CP-CP sessions active with one network node at a time--its network node server. The other network nodes can be used to provide intermediate session routing for the end node, or as a substitute network node server if the main network node server becomes unavailable. CP-CP sessions are never established between two end nodes.

LEN Node

A LEN node is a node that implements the basic T2.1 protocols without the APPN enhancements. In a LEN node, all potential connections with partner LUs are predefined before initiating sessions to them. A LEN node, connected to an adjacent APPN network node, uses the advanced functions of APPN by predefining potential connections with partner LUs as if they existed at that network node. The network node, in turn, can automatically act as the LEN node's network node server and locate the actual destination of the partner LU and select the best route to it. By going through a network node, the LEN node can participate in an APPN network without requiring direct connections to all nodes.

Control Points

The control point (CP) is responsible for managing the node and its resources. To obtain APPN network services, the control point in an APPN end node must communicate with the control point in an adjacent network node. Also, to manage the network, the control point in an APPN network node must communicate with the control points in adjacent network nodes. The control point directs such functions as adapter activation and deactivation and link activation and deactivation, and assists LUs in session initiation and termination.

When setting up a workstation, you must define the control point name (also known as the local node name). The control point is also an LU, and you can choose to have the control point LU be the only LU defined in your workstation.

CP-CP Sessions

To perform directory services and topology and route-selection services, adjacent nodes throughout the APPN network use a pair of parallel CP-CP sessions to exchange network information. Network nodes use CP-CP sessions to monitor nodes in a network link, as well as to track directory and session services. A network node establishes two parallel sessions with each adjacent network node and with each served end node. An APPN end node establishes two parallel sessions with a single adjacent network node acting as its current server. LEN nodes do not support CP-CP sessions.

After a connection has been established, the nodes exchange identification information (XID). Then, CP-CP sessions are started between the control points in the directly attached nodes. The CP-CP sessions use LU 6.2 protocols and both sessions of a given pair must be active for the partner control points to begin and sustain their interactions. All CP-CP sessions are used to conduct directory searches.

After the CP-CP sessions are established, the two nodes exchange control point capability messages that inform each node of the other's capabilities. When both nodes are network nodes, they exchange topology database update (TDU) messages. The TDU messages contain identifying information, node and link characteristics, and resource sequence numbers to identify the most recent updates for each of the resources described in the TDU.

CP-CP Connection Activation

When Communications Server is started, it first attempts to activate the connection to the preferred NN server. Communications Server then tries to activate all other connections defined as activate at startup. If an alternate parallel link to the preferred NN server exists, Communications Server does not wait for the result of the activation attempt on the preferred NN server connection; it also attempts to activate the CP-CP sessions on the alternate link.
Note:If the connection was deactivated by operator request from the local node, CP-CP sessions are not redriven. If the connection was deactivated by operator request from the remote node, CP-CP sessions are redriven at the local node. For links between network nodes, only demand-activated links (links that have an adjacent CP name specified and are not defined as activate at startup) are activated.

CP-CP Connection Reactivation

Communications Server provides support for CP-CP connection reactivation. Loss of CP-CP sessions between an end node and its network node server and between adjacent network nodes can interfere with the operation of an APPN network. CP-CP connection reactivation support improves reliability of an APPN network by reestablishing these important sessions when they are terminated due to failure or connection inactivation.

A CP-CP connection reactivation attempt is initiated by a CP-CP link activation, CP-CP session failure, or by a CP-CP retry timer expiring. CP-CP sessions are initiated by Communications Server with the first of the following:

  1. The preferred network node server (if it has not previously been attempted).
  2. An adjacent CP that supports DLUR registration, if DLUR is configured.
  3. The most recently activated network node that has not yet been attempted and to which an active connection exists.
  4. The first of any other uplevel network nodes, for branch extender.
Note:If the CP-CP connections were terminated due to a link failure, Communications Server does not reactivate the link. You can configure a connection as auto-reactivate (infinite retry) to keep important connections active.

Branch Extender

The branch extender is a border node subset that is designed to interconnect a branch office to an APPN WAN backbone network. The interconnected networks can be native (that is, they have the same network ID) or nonnative. A node that supports the branch extender is a branch network node that typically has LAN and WAN interfaces, and can also include DLUR and HPR.

Links at a node that supports the branch extender are defined as branch uplinks or branch downlinks. Figure 10 provides an example of the way that a branch network node works in a network. In this figure, the node in the center is a branch network node. Usually, the adjacent CP (branch uplink node) is the network node server (NNS) for the branch network node, which looks like an end node to the branch uplink node.

Figure 10. Conceptual Overview of Branch Uplinks and Branch Downlinks



Branch uplinks are defined at the branch network node as upstream to the backbone network. You can consider a node with an uplink to be peripherally attached to the backbone network.

Branch downlinks are defined from the branch network node as downstream. The node sees downlinks as connections to end nodes (control points) in the domain. Branch downlinks are typically LAN links (but are not required to be). You can consider end nodes attached through branch downlinks to be local resources. The branch network node is the network node server for these end nodes. On branch downlinks, it provides network node services for domain end nodes, LEN end nodes, dependent T2.0 nodes and T2.1 nodes, and local LUs and PUs.

A branch network node works as a network node server for its domain. It maintains topological information about all of its branch downlink nodes, but does not maintain complete information about the entire uplink network. If the information the node has is not sufficient, it passes the LOCATE requests to its uplink network node server, which may be another branch network node or an APPN network node.

The branch extender optimizes the peer-to-peer communication environment for administrators who want to connect LAN-based branches to one large WAN primarily based on a switched network. The branch extender enhances performance in large APPN networks. Specifically, it:

Figure 11. The Branch Extender in a Network



Figure 11 shows how branch network nodes work in a network. Dashed lines represent logical links. In the figure, nodes 1, 2, 3, and 4 are configured to support the branch extender and function as both end nodes and network nodes. They hide their downlink topology from the WAN network (that is, to the upstream network node servers, they appear as end nodes). To the nodes on the LANs downstream, the nodes function as network node servers. Node 5 appears as an end node to Node 2, but is a network node server (NNS) for other end nodes on its LAN. To the NNS, Node 5 appears to be an LU on Node 2.

When an end node served by Node 1 attempts to establish a session to an end node across the WAN, Node 1 can send a Locate (Send) request on its branch extender link to its network node server. If the target CP is found, the network node server determines a route from Node 1 to the target CP. Node 1 will modify the route before returning it to the source end node. The source end node uses this route for its session.

Branch Extender Restrictions

The following restrictions apply to networks configured to use the branch extender:

Branch Extender Configuration

To configure the branch extender, you must first configure a branch network node. Then you must configure either a DLC (for an implicit link) or a link to support the branch extender. A link configured to support the branch extender is a branch uplink. On a branch network node, any links that are not configured to support the feature are branch downlinks.

If branch network nodes have links defined between one another, they must be defined to be peer connections, which gives them a link type of LEARN in the ACG file. Alternatively, you can define the link as ACTIVATE_AT_STARTUP=1 so the link is always active. When branch nodes connect to each other, loops in the topology occur. This is acceptable as long as the links are always active or the links are learned as they are activated. If links between branch network nodes are defined as END_NODE or NETWORK_NODE links, they might be represented inappropriately in the topology reported upstream and cause allocation failures.

You can also use an ACG file to configure branch extender.

Branch Extender Administration

The following sections describe how to verify a configuration and restrictions on how you configure your network.

Verifying the Configuration

You can use SNA Node Operations at a node that supports the branch extender to determine whether a local branch has been configured successfully. At run time, a display of the topology from the node should never have more than two network nodes, itself and the uplink network node server.
Note:Only one uplink is available for CP-CP. Each workstation should be configured as an end node with the branch network node defined as its preferred network node server.

You can use SNA Node Operations to verify whether the DLC or link has been configured to support branch extender correctly. You can also use SNA Node Operations to determine whether an active link is a branch uplink or branch downlink. Downstream end nodes registered using AnyNet will not register their resources.

Supported Functions

Communications Server supports all the APPN Version 2 base functions (both end node and network node). Additionally, the following options are supported:

Data Link Control (DLC)

The DLC provides the protocols necessary for reliable delivery of basic transmission units (BTUs) between a pair of nodes in the APPN network and for maintaining the logical connections between nodes.

Connections

A connection links a pair of adjacent nodes across the underlying DLC.

Parallel Links

Your local node can have multiple links to an adjacent node. This association is referred to as parallel links. Parallel links are each assigned a unique number (transmission group number) and can have different link characteristics assigned to them. To have two parallel links between two nodes, the link stations for the links can be on a single adapter in one node but must be on separate adapters in the other node; that is, the combination of adapter number and adjacent (or destination) link station address must be unique for each link.

Link Activation

The message unit that is used to convey node and link characteristics to an adjacent node is referred to as an Exchange Identification (XID). XIDs are exchanged between nodes before and during link activation to establish and negotiate link and node characteristics, and after link activation to communicate changes in these characteristics.

APPN nodes exchange XID format 3 (XID3) with other T2.1 or boundary nodes to perform role negotiation. For PU 2.0 connections, you use the LINK_STATION keyword to specify a PU name and node ID that are exchanged on XID3. If USE_PU_NAME_IN_XID=1, the PU name is used in the name field of the XID. Otherwise, the control point name is used in that field. Information about the sending node's characteristics is contained in the XID3, including link station role (primary, secondary, or negotiable), TG number, node type, logical link number, the maximum basic transmission unit size that can be received, node ID, and PU name. The PU name is normally the control point name, but alternate PU name and node ID can be specified on the LINK_STATION keyword to support simultaneous PU 2.0 attachments.
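As an illustration, a link definition that supplies an alternate PU name and node ID for a PU 2.0 attachment might look roughly like the following sketch. The stanza layout and the link name are illustrative, and the PU_NAME and NODE_ID parameter spellings are assumptions; LINK_STATION and USE_PU_NAME_IN_XID are the keywords described above.

     LINK_STATION=(
          NAME=HOST0001
          PU_NAME=WSPU01
          NODE_ID=05D12345
          USE_PU_NAME_IN_XID=1
     )

With USE_PU_NAME_IN_XID=1, the XID3 sent on this link carries WSPU01 rather than the control point name, so the node can maintain separate PU 2.0 attachments to several hosts.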


Link Types

The following six types of links are typically defined in Communications Server nodes:

Communications Server configuration provides a way to define and control the use of these link types. This section describes the node, DLC, and link configuration parameters used as well as the relationships and dependencies that exist between these parameters. The following categories are described:

The following parameters are described:

Note:These parameters are in the ACG file and might not be available on the panels (through SNA Node Configuration).

Link Definition and Activation Parameters

This section describes definition and activation parameters.

Activate at Startup

An activate at startup link is typically used for primary network access links that are initially activated when you start Communications Server. The link is activated when Communications Server is started at your machine and stays active as long as Communications Server is running.

To define a link to activate at startup, specify ACTIVATE_AT_STARTUP=1 on the LINK_STATION keyword of the ACG File. Links are generally configured to activate at startup when they are important for network connectivity. An important link can also be configured for automatic link retry (see Automatic Link Retry).

The link from an APPN end node (EN) to its preferred network node (NN) server is an example of this type of link.
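Such a link might be coded in the ACG file along the following lines. This is only a sketch; the link name and the stanza layout are illustrative, and only ACTIVATE_AT_STARTUP on the LINK_STATION keyword is taken directly from the description above.

     LINK_STATION=(
          NAME=NNSLINK
          ACTIVATE_AT_STARTUP=1
     )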

Examples of non-activate at startup links, coded as ACTIVATE_AT_STARTUP=0, include a host link that is not needed immediately when Communications Server is started, an inbound link whose link characteristics a node wants to control, or a link that may be too costly to keep active all the time. These links are defined as non-activate at startup and are activated when the link resources are requested by an application (see Activate on Demand) or by the partner.

Activate on Demand

An activate on demand (also referred to as "auto-activate") link is typically used for access to a partner LU which requires dynamic activation of the link. When Communications Server is started, the link remains inactive. However, it is placed into the topology as an available link if an adjacent CP name is specified. The link is activated when a transaction program (TP) requests a connection to a remote LU that requires the link to be active. Communications Server uses the fully qualified partner LU defined to activate the link.

To define a link as activate on demand, ACTIVATE_AT_STARTUP=0 and FQ_ADJACENT_CP_NAME=(netid.cpname) must be configured on the LINK_STATION keyword in the ACG file of the originating node. If the partner is not the FQ_ADJACENT_CP_NAME, configure the PARTNER_LU keyword.

Frequently, an activate on demand link is also configured as either a limited resource (see Limited Resource) or with an inactivity timeout (see Inactivity Timeout) so that the link will be deactivated when it is no longer required.

An example of an activate on demand link is a link defining a connection to a partner which needs to be active for a limited amount of time. The link may cost more than you are willing to pay to keep active at all times. For example, you might have a collection of one or more computers communicating on a regular basis. At the end of each day, one of the machines is required to activate a link to some remote machine, in order to send the daily results, or to make a backup of the data.

Another example might be when you have connections to a data server or a print server. The connection requires resources at the server. To avoid limitations on the maximum number of link stations and sessions at the server, configure an activate on demand link to free the resources at the data server after the requests over the activate on demand link are complete.

An activate on demand link is not necessarily a limited resource link, but it might be defined as one by including the LIMITED_RESOURCE=1 parameter on the LINK_STATION keyword (see Limited Resource).
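Pulling these parameters together, an activate on demand link to a partner in NETA might be sketched in the ACG file as follows. The link name and the CP name NETA.SERVER1 are illustrative, and the exact stanza layout is an assumption; ACTIVATE_AT_STARTUP and FQ_ADJACENT_CP_NAME are the parameters described above.

     LINK_STATION=(
          NAME=ONDEMAND
          ACTIVATE_AT_STARTUP=0
          FQ_ADJACENT_CP_NAME=(NETA.SERVER1)
     )

Because the adjacent CP name is specified, the link is placed in the topology as available when Communications Server starts and is activated when a TP requests a session that needs it. It could additionally be defined as a limited resource, as described later, so that it is deactivated when it is no longer needed.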

Automatic Link Retry

Automatic link retry is an error recovery function that enhances the availability of a failed link. If a link needs to be reactivated (without user intervention) after a failure, automatic link retry can be used. Automatic link retry causes automatic reactivation attempts of the link if the following parameters have been specified on either the LINK_STATION or PORT keywords:

If one of these parameters is specified on the PORT keyword, the values are used by the LINK_STATION keyword if the INHERIT_PORT_RETRY_PARMS parameter has been specified.

After a successful activation, the interval timer is reset to 0.

It might be beneficial to use automatic link retry on any of the following kinds of links:

Maximum Activation Attempts

Maximum activation attempts is a link activation parameter that provides a mechanism to prevent dependent LU host traffic (for example, LUA, 3270, an LU 2 gateway, or DLUR trying to activate a host link for a downstream application) from indefinitely retrying link activation. The maximum activation attempts parameter specifies the number of times an activate link request is attempted. After this number of attempts is reached, subsequent requests are rejected until the count is reset. Dependent LU link activation requests issued after maximum activation attempts is reached are immediately rejected without an actual attempt to activate the link. In this case, a primary return code of X'0003' and a secondary return code of X'00000005' (DLC retry), with a sense code of X'00000000', is returned. Independent LU link activation requests issued after maximum activation attempts is reached are still attempted, but along with the primary X'0003' and secondary X'00000005' return codes, a sense code of X'081C0001' is returned to indicate that the maximum activation attempts limit has been reached.
Note:If a TP, such as an LUA application, is in a loop trying to activate a session to the same host as the dependent LU application, the TP causes the maximum activation attempts number to be exceeded before the dependent application can make its first request.

The maximum activation attempts number is reset as follows:

If SNA Node Operations is attempting a link activation, the maximum activation attempts number is decremented by 1, and the result is ignored. If the limit has been exceeded on a gateway host link, activation of the link will be attempted after 30 minutes if a workstation link has become active. This enables retries to cease, although they are restarted later when the workstations are restarted. When gateway, LUA, or SNA Node Operations successfully activates a link, the maximum activation attempts number is set to 0.

To configure maximum activation attempts on a link, the MAX_ACTIVATION_ATTEMPTS=n parameter is configured on the LINK_STATION keyword, where n is the number of attempts, from -1 to 127. A value of -1 indicates that the value on the PORT keyword should be used, and 0 indicates infinite retry.
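A minimal sketch of such a definition follows; the link name and the value 5 are illustrative, and the stanza layout is an assumption.

     LINK_STATION=(
          NAME=HOSTLNK1
          MAX_ACTIVATION_ATTEMPTS=5
     )

With this value, the sixth and later activation requests from dependent LU applications are rejected immediately until the count is reset; coding MAX_ACTIVATION_ATTEMPTS=-1 would take the value from the PORT keyword instead.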

User Requested Reactivation

User requested reactivation occurs when the user requests to reactivate a link, either from SNA Node Operations or from the Communications Server command line.


Link Deactivation Parameters

The following sections describe deactivation parameters.

Inactivity Timeout

Inactivity timeout is an SDLC link deactivation parameter that controls when links are deactivated. The inactivity timeout value specifies the time (in seconds) that a link may be idle before it is deactivated. This is similar to the LINK_DEACT_TIMER on the LINK_STATION keyword. The difference is the LINK_DEACT_TIMER waits for all sessions to end (session count reaches 0) before it triggers the link deactivation. The INACTIVITY_TIMER ignores the session count and triggers deactivation after the link has been idle for the specified time.

The inactivity timeout function was implemented to handle the situation where an emulator session, LUA, 3270, or a LEN connection, was accidentally left active for long periods of time. When the node detects no activity over this type of connection for the INACTIVITY_TIMER duration, the link is automatically deactivated, regardless of whether or not sessions and conversations exist on the link. By definition, the LINK_DEACT_TIMER is considered non-disruptive, but the INACTIVITY_TIMER is considered disruptive.

Notes:

  1. Inactivity timeout can be used on limited resource or non-limited resource links. If a link is defined as limited resource and conversations remain active, limited resource timeout will not expire, and the link will be deactivated when the inactivity timeout expires. If a link is defined as non-limited resource, inactivity timeout is used to deactivate the link to free resources at the remote end.

  2. Currently with HPR, the inactivity timeout is ignored. This is because HPR cannot tell what type of traffic is on the link, and the HPR keep alive protocol generates enough traffic so that the link will never be idle.

To configure inactivity timeout on an SDLC connection, the INACTIVITY_TIMER=n parameter is coded on the LINK_STATION_SDLC_SPECIFIC_DATA parameter of the LINK_STATION keyword in the ACG file, where n is 40 to 160. To configure inactivity timeout on a link, the LINK_DEACT_TIMER=n parameter is coded on the LINK_STATION keyword of the ACG file, where n is 0 to 1 000. The 0 indicates no timeout (the link will stay active). The SDLC default is 80 and the default for the link is 10.
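For example, an SDLC host link that should be dropped after a minute of inactivity might be sketched as follows. The link name and timer values are illustrative and the stanza layout is an assumption; LINK_DEACT_TIMER, INACTIVITY_TIMER, and LINK_STATION_SDLC_SPECIFIC_DATA are the parameters described above.

     LINK_STATION=(
          NAME=SDLCHOST
          LINK_DEACT_TIMER=10
          LINK_STATION_SDLC_SPECIFIC_DATA=(
               INACTIVITY_TIMER=60
          )
     )

The INACTIVITY_TIMER value of 60 seconds deactivates the link after it has been idle for a minute, even if sessions still exist; the LINK_DEACT_TIMER applies only after the session count reaches 0.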

Limited Resource

A limited resource link is typically used for limited use primary network access links and secondary network access links. A limited resource link is a link that is automatically deactivated when its session count reaches 0. The limited resource link can be defined as ACTIVATE_AT_STARTUP=1 or ACTIVATE_AT_STARTUP=0. If the link is ACTIVATE_AT_STARTUP=1, it is started when Communications Server is started. If the link is ACTIVATE_AT_STARTUP=0, it is placed in the topology when Communications Server is started, if the adjacent CP name is specified, and activated when services are requested.
Note:Activate at startup links are not placed into the topology unless they are active.

To configure a limited resource link, LIMITED_RESOURCE=1 is specified on the LINK_STATION keyword of the ACG file. The LINK_DEACT_TIMER=n is specified on the LINK_STATION keyword, and the ADJACENT_NODE_TYPE=LEARN parameter must be specified.
Note:If CP_CP_SESSION_SUPPORT=1, the link is not a limited resource link. Configuration verification will flag this as a warning. Active CP-CP sessions will keep the link from deactivating.
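A limited resource link might therefore be sketched in the ACG file as follows. The link name is illustrative and the stanza layout is an assumption; the parameters themselves are the ones described above.

     LINK_STATION=(
          NAME=LTDLINK
          ACTIVATE_AT_STARTUP=0
          LIMITED_RESOURCE=1
          LINK_DEACT_TIMER=10
          ADJACENT_NODE_TYPE=LEARN
          CP_CP_SESSION_SUPPORT=0
     )

CP_CP_SESSION_SUPPORT=0 is shown because, as noted above, active CP-CP sessions would keep the link from deactivating.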

Connection Networks

Connection networks enable APPN nodes in a LAN to have direct links with each other without requiring logical link definitions at each node. This feature greatly reduces system definition without adding the performance burden of routing all sessions through a network node. It also enables new nodes that are added to the LAN to fully participate in APPC conversations without requiring definition changes at every other node.

A network node in the connection network assumes that all the nodes in one connection network can have links directly between one another. When calculating the route for a session, the network node considers the direct link and normally selects the direct link as the optimal route. Having calculated the direct route, the network node simply sends the end node the address of the partner to use for activating the link.

The connection network route might not be taken when connection network security is less than required. If the connection network DLC is not secure and a mode like #BATCHSC is used on the MODE_NAME parameter, the network node attempts to find a secure route, ignoring the connection network.

If LAN bridges are being used, APPN views the entire bridged LAN as a single logical network. Because links can be activated between any two systems on the LAN, only one connection network is needed. The connection network should be defined at all the APPN systems on the LAN.

A network node learns connection network information during EN registration and APPN directory searches. The network node server then has enough information to calculate a direct connection between the session endpoint nodes without routing through intermediate nodes.

Only end nodes and network nodes can take advantage of the connection network; links to LEN nodes must still be explicitly defined.

Figure 12 illustrates a sample connection network. This view of a LAN shows a connection network given a name of LOCALNET.IBMLAN. With this type of definition, any EN can connect directly to any other EN as long as NN1 is the active network node server for all the end nodes.

Figure 12. A Sample Connection Network


Sample Connection Network
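As an illustration of the definition shown in Figure 12, the connection network LOCALNET.IBMLAN might be declared at each APPN node on the LAN with a stanza along these lines. The CONNECTION_NETWORK keyword spelling, its parameters, and the port name are assumptions for this sketch; only the connection network name itself is taken from the figure.

     CONNECTION_NETWORK=(
          FQ_CN_NAME=LOCALNET.IBMLAN
          PORT_NAME=LAN0_04
     )

Defining the same connection network name at every APPN end node and network node on the LAN lets the network node server hand back the partner's LAN address so that a direct link can be activated without a predefined link station at either node.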


Other Link Parameters

Other LINK_STATION keyword parameters that are mentioned in the preceding sections, but that may not be obvious or may be implicitly defined, are:

This section contains a brief definition of each.

Adjacent Node Type

The adjacent node type specifies the type of node that is adjacent to the node defining the link. Valid types include:

See the Configuration File Reference for further details.

Preferred Network Node Server

The preferred network node server specifies whether the adjacent network node is to be used as the network node server over the link being defined.

Solicit SSCP Sessions

Solicit SSCP sessions specifies whether or not SSCP-PU sessions are requested from the host over the link being defined.
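Taken together, these parameters might appear on a link definition roughly as follows. The link name and the PREFERRED_NN_SERVER and SOLICIT_SSCP_SESSION spellings are assumptions in this sketch; ADJACENT_NODE_TYPE and the NETWORK_NODE value are shown earlier in this chapter.

     LINK_STATION=(
          NAME=UPLINK1
          ADJACENT_NODE_TYPE=NETWORK_NODE
          PREFERRED_NN_SERVER=1
          SOLICIT_SSCP_SESSION=0
     )

Here the adjacent node is declared to be a network node, that node is requested as the network node server, and no SSCP-PU session is solicited over the link.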


Directory Services

A network node provides directory services to the LUs located in the network node and to the LUs in the end nodes that the network node serves. The network node also assists in the directory services provided by the other network nodes in the network by responding positively to received directory search requests when the resource named is found in the local directory. The local directory maps an LU name to the control point name of the node where that LU is located. If the destination control point is a LEN or end node, the directory includes the name of the serving network node.

The directory services component resides in every node; however, its scope and functions vary depending on the level of directory support in the node.

An end node maintains a local directory containing entries for locally resident LUs. In addition, the end node maintains directory entries for LUs in adjacent nodes with which the end node has been in session. For an LU-LU session with an adjacent peer node, a search of the local directory returns the appropriate destination control point associated with the LU searched for, permitting the proper logical link to be selected.

In a LEN node, all partner LUs are entered in the directory, as the example in Figure 13 shows. Those not in an adjacent peer end node but out in the APPN network are associated in its directory with its designated network node server. The LEN node sends an LU-LU session activation (BIND) request to its network node server for any LU associated in its directory with its server; the server automatically locates the destination LU for it and forwards the BIND appropriately. The network node can send a Locate search, wait for a response, and then send the BIND.

Figure 13. LEN Node Directory. The LEN node directory must contain all the LUs with which it communicates. Because the adjacent network node (NN) serves the LEN node even without CP-CP sessions, the LEN node must define the network node control point as the "owning control point" of all the LUs, including LUs located at the end nodes (ENs).


Sample LEN Node Directory

When an LU is not represented in an end node directory, the end node initiates a Locate search to find the desired LU. To activate the search, the end node invokes the services of its network node server. Figure 14 shows an example of an end node directory.

Figure 14. End Node Directory. The end node (EN) uses the services of its network node server to find the location of the LUs. None of the LUs in the APPN network need to be defined in the end node. The adjacent LEN node LU, however, must be defined because it is not connected to the network node and is not part of the APPN network.


Sample EN Node Directory

A network node provides distributed directory services to its served end nodes in cooperation with all other network nodes in the APPN network. The origin network node receives the name of a destination LU in a Locate search request from a served end node, or the name of a secondary LU in a BIND from a LEN node. The network node verifies the current location of the LU if it is represented in the network node's directory (but is not in the network node itself). The verification is done by sending a directed search to the destination network node server.

If the LU is not in the origin network node's directory, the network node initiates a search of the network. The search is initiated by sending a broadcast search to every adjacent network node, each of which in turn propagates the broadcast and returns replies indicating success or failure. For its future needs, a network node caches information obtained from successful broadcast searches.

An APPN end node can also receive (and respond to) Locate search requests from its network node server to search for, or ensure the continued presence of, specific LUs in the end node.

Each end node can register its LUs with its network node server by sending the network node a registration message. If the end node is registered with the network node server, the network node maintains current directory information pertaining to the end nodes in its domain.

Figure 15 shows an example of a network node directory.

Figure 15. Network Node Directory. The network node (NN) directory contains all the LUs it serves. The end nodes (ENs) register their LUs; the LEN node LU must be configured.


Sample NN Directory


Topology and Route-Selection Services

A network node provides route selection services to itself and to the end nodes it serves. It maintains an internal network topology database that has complete and current topology information about the network. This topology information consists of the characteristics of all network nodes in the network and of all links between network nodes. All network nodes contain a copy of the topology database.

A network node uses the network topology database to compute routes for sessions that originate at the LUs in it and at the end nodes that it serves. Each route that a network node computes is the current least-weight route from the node containing the origin LU to the node containing the destination LU. To provide an appropriate path through the network, the algorithm used to select the route first assigns weights to links and nodes. Based on the relative significance of the characteristics for the requested class of service, the weighting algorithm computes a scalar value for each node and logical link.

Topology Database

The network topology database in a network node contains information about all network nodes and all transmission groups interconnecting them. It is a fully replicated database that is shared among all network nodes in the network and used for route selection. The maintenance of the database requires broadcast updates among all network nodes. The updates are accomplished through topology database update (TDU) messages, which contain node-identifying information, node and link characteristics, and update-sequence numbers to identify the most recent changes for each of the resources described in a TDU.

A local topology database in an end node contains information about itself and directly attached nodes only.

The topology and routing services component uses the CP-CP sessions between network nodes to exchange information to build and maintain a topology database. This topology database in network nodes is kept current using updates that are transmitted among all network nodes whenever a resource (node or link) is activated or deactivated, or the characteristics of an existing resource change.

A local configuration database and a network topology database are maintained at each network node as illustrated in Figure 16. The local configuration database is unique to the node, while the network topology database is replicated at all network nodes.

Table 2 shows the information contained in the configuration database at the local network node.

Table 2. Local NN Configuration Database
Node Links Connection
NN5 e NN5--EN1
a NN5--NN7
b NN5--NN6
NN7 a NN7--NN5
d NN7--NN8
NN6 b NN6--NN5
f NN6--EN2
c NN6--NN8
g NN6--EN3
NN8 c NN8--NN6
d NN8--NN7
j NN8--EN3
h NN8--EN4

Table 3 shows the information contained in the network topology database at the local network node.

Table 3. Local NN Network Topology Database
Node Links Connection
NN5, NN6, NN7, NN8 a NN5--NN7
a NN7--NN5
b NN5--NN6
b NN6--NN5
c NN6--NN8
c NN8--NN6
d NN7--NN8
d NN8--NN7

Figure 16. Local Configuration Database and Network Topology Database in Network Nodes


Config Database and Top Database

Modes

The mode determines the values for the session characteristics and number of sessions between session partners. For example, the size of the largest request unit (RU) to be exchanged on a session (that is, the maximum RU size) is one of the characteristics of a mode. The mode also specifies a class of service, which is used to select the route for the session.
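For example, a mode definition ties a mode name to a class of service and to session characteristics such as the maximum RU size. The sketch below is illustrative only: MODE_NAME is the parameter mentioned in this chapter, while the MODE keyword layout and the COS_NAME and MAX_RU_SIZE_UPPER_BOUND spellings are assumptions.

     MODE=(
          MODE_NAME=#INTER
          COS_NAME=#INTER
          MAX_RU_SIZE_UPPER_BOUND=4096
     )

When a session is requested with mode #INTER, the associated class of service is used to weight nodes and links during route selection, as described in the next section.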

Class of Service

At session initiation time, the BIND specifies a mode name. This mode name is associated with a class-of-service (COS) definition that is used to determine the most desirable route between the origin and destination nodes of the session. The COS definitions specify the characteristics that nodes and links must possess to be included in the route selected for the session. This specification enables the route-selection algorithm to determine if a node or link is acceptable. From the set that is acceptable, the algorithm calculates the best route for the session.

Because COS definitions can vary, different sessions can use different routes between the same origin and destination nodes, depending on the specified mode name. Each network node is capable of computing the least-weight (the most desirable) route to any destination.

When a session goes through both an APPN network and a subarea network, it uses two classes of service:

In both cases, each network uses the mode name to find the COS name, but the two COS names are not necessarily the same.

SNA Transmission Priority

The transmission priority is a value specified in the class of service. The transmission priority is sent in the BIND in the Class of Service/Transmission Priority (COS/TPF) control vector. Once the session is established, subsequent session data flows at the transmission priority specified in the COS/TPF control vector.

Data flowing on sessions that use a class of service with high priority can pass data on sessions with lower priority. You should give high priority to sessions carrying interactive traffic where response time is important, for example, emulator sessions. Sessions carrying high volumes of data, for example, file transfers for NetView Distribution Manager, should be given lower priority. Transmission priority support helps to prevent high-volume sessions from blocking traffic on the interactive sessions.

The four transmission priorities are network, high, medium, and low. Network priority is used for network control data such as topology and directory services. The other priorities are used for user data.

Communications Server supports transmission priority for LAN, SDLC, and X.25 links. The benefit is most apparent when the network contains congested low speed links.

Route Selection

After the network node server receives a response from its locate search, the topology and routing services component calculates the best route from the origin node to the destination node for the COS requested. Because the topology and routing services component sends and receives topology database updates as characteristics of any resource change, every route is calculated with the most current information.

Route Selection for VTAM Users

To route APPC traffic through a subarea, the workstations connected to the subarea must be defined as network nodes in Communications Server. In each network node, a link is defined that connects the node to the subarea. From the viewpoint of the network node, partner LUs on the other side of the subarea are defined as being located at the host (a LEN node). From the viewpoint of the host, each network node connected to the subarea must be defined to the VTAM program with a PU macro. All destination LU 6.2 logical units within the APPN network for a particular connection are defined under the PU (network node) as if they are actually located at the PU. However, the LUs can actually be located at other nodes within the APPN network connected to the network node. The host sees only the network node PU. The network node PU can also be a gateway PU. SETN traffic (CP_CP_SESS_SUPPORT=NO) is not allowed when the parameter is set to YES on the NCP and there are no PU or control point sessions.

If the PU name in the VTAM definition is the same as the control point name defined in Communications Server, be aware that you will not be allowed to define the control point as an LU in the VTAM definitions. Names must be unique in the VTAM program, whether they are PU or LU names.

The PU macro must contain XID=YES to use an XID exchange during activation of the PU. This parameter is coded in the NCP major node. It must not be in the PU statement of a switched major node.

For switched SNA devices, you can use a new parameter in the PU macro: CPNAME=cccccccc. It specifies the control point name of the network node connected to the subarea. Either CPNAME or IDBLK and IDNUM must be specified on a switched PU definition statement. Both can be specified. The network node provides its control point name to the VTAM program in the XID exchange during the connection sequence. The VTAM program uses the control point name to locate the corresponding PU macro. If there is no PU macro with the corresponding control point name, the VTAM program uses IDNUM and IDBLK to locate the PU macro.
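A switched major node entry for such a network node might look roughly like the following. The names and node ID values are illustrative; CPNAME, IDBLK, and IDNUM are the operands described above, and the usual assembler column and continuation coding is not shown.

     SWAPPN   VBUILD TYPE=SWNET
     NN1PU    PU    ADDR=01,PUTYPE=2,
                    CPNAME=NN1CP,
                    IDBLK=05D,IDNUM=12345
     *  LU 6.2 destinations reachable through NN1 are defined under this PU

During connection, the network node sends NN1CP in the XID exchange; VTAM matches it to this PU statement, or falls back to IDBLK and IDNUM if no CPNAME matches.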

To route APPC traffic from an APPN network through the subarea and out to another portion of the APPN network, the network name (NETID) of the owning VTAM must agree with the network ID of the APPN network. In Communications Server, the network ID (of the network node connected to the subarea) can be found using the SNA local node characteristics profile.

Intermediate Session Routing

Intermediate session routing is a function performed by a network node. This capability enables a network node to receive and route data destined for another node. The origin and destination of the data can either be an end node, a network node, or a LEN node. The piece of the session between two adjacent nodes is called a session stage.


High Performance Routing (HPR) Support

Communications Server supports high performance routing (HPR) over Enterprise Extender (IP), synchronous data link control (SDLC), LAN, WAN, channel, Multi-Path Channel (MPC), and X.25 connections.

HPR automatic network routing (ANR) minimizes the storage and processing requirements in intermediate nodes, which is a better solution than APPN intermediate session routing (ISR) for high-speed networks with low error rates.

HPR improves SNA routing with these major features:

Rapid Transport Protocol (RTP)

RTP is a set of message formats and protocols designed to utilize modern data communication media, minimize overhead in intermediate nodes, and automatically switch paths when a link in the path fails.

RTP connections are established within an HPR subnet and are used to transport session traffic. An HPR subnet is the portion of an APPN network that is capable of establishing RTP connections and transporting HPR session traffic. RTP connections can be thought of as transport pipes over which sessions are carried. These connections can carry data at very high speeds using low-level intermediate routing and minimize traffic over the links for error recovery and flow control. These flows are managed by the RTP connection endpoints.

An RTP connection's physical path can be switched automatically to reroute data around a failed node or link without disrupting the sessions. Data in the network at the time of the failure is recovered automatically.

RTP does error recovery on an end-to-end basis, rather than on a link-level basis. Performance is improved by reducing the number of flows required to do error recovery. Link-level error recovery protocols (ERPs) are also supported for all connections. ERP is a method of detecting a lost packet at one end of a link and recovering by asking the other end of the link to retransmit the packet. If ERP is used, HPR packets are sent as numbered information frames (I-FRAMES). When a frame is lost, the DLC detects the loss and the sender retransmits the frame. If ERP is not used, HPR packets are sent as unnumbered information frames (UI-FRAMES). When a frame is lost, the DLC cannot detect the loss and HPR rapid transport protocol (RTP) must detect and recover lost packets at connection end points.

In either case, RTP always detects and recovers lost packets at connection end points. For any given connection, there are no restrictions on the number of links that use ERP, or do not use ERP.

ERP can be enabled or disabled on a link-by-link basis. Because RTP detects and recovers lost packets at connection end points, you can use either ERP links or non-ERP links when you build the network. This lets you specify link-level ERP on links that have a high rate of packet loss and maximize throughput on other links by specifying that they do not use link-level ERP. In general, the use of ERP is not recommended in LANs.
Note:ERP is always enabled in a wide area network (WAN) environment.

Flow control and congestion control are also done by RTP on an end-to-end basis. RTP uses a technique called adaptive rate-based (ARB) flow control to fully utilize network bandwidth when possible. RTP increases the rate at which packets are sent when the network supports this increased send rate. Congestion is automatically recognized and the send rate will be decreased accordingly when congestion occurs. The configured effective capacities of links in the connection path are used to determine both the initial send rate and the send rate increment.

Support for control flows (CF) over RTP connections is now available with HPR in Communications Server. Previously, control flows, including CP-CP sessions and route setup messages, used APPN connections while the data flows used HPR connections. Now, both control flows and data flows can use RTP connections. The benefits of this support include automatic path switching for CP-CP sessions.

Control flows automatically flow over RTP if both endpoints of the connection support this function.

Automatic Network Routing (ANR)

Automatic network routing (ANR) is a stateless routing technique enabled by RTP where a message arrives with a label that uniquely identifies the next hop in the path. Because of its simplicity, ANR can be performed at a low level with no knowledge of the connections using the path. ANR minimizes cycles and storage requirements for routing packets through intermediate nodes.

The ANR fast packet switching function improves performance in intermediate nodes by routing at a lower level than APPN and performing error recovery, segmentation, flow control, and congestion control at the end node, rather than on the intermediate node.

Intermediate ANR nodes are not aware of the SNA sessions or the RTP connections. Routing information for each packet is carried in a network header with the packet. Each node strips off the information it has used in the header before forwarding the packet, so the next node can find its routing information at a fixed place in the header. There is no need to keep the routing tables for session connectors as in base APPN, so switching packets through nodes can be done more quickly.


LU Support

SNA defines LU types 0, 1, 2, 3, 4, 6.0, 6.1, 6.2, and 7. LU types 0, 1, 2, 3, 4, and 7 support communications between application programs and different kinds of workstations. LU types 6.0 and 6.1 provide communications between programs located at type 5 subarea nodes. LU type 6.2 supports communications between two programs located at type 5 subarea nodes or type 2.1 peripheral nodes, or both, and between programs and devices.

Communications Server supports LU types 0, 1, 2, and 3, which support communications with host applications that support devices such as:

LU type 0
3650 and 4700 financial terminals

LU type 1
3270 printers

LU type 2
3270 interactive displays

LU type 3
3270 printers

Communication occurs only between LUs of the same LU type. For example, an LU 2 communicates with another LU 2; it does not communicate with an LU 3. Communications Server also supports LU type 6.2 or APPC.

The Communications Server SNA functions enable applications to use the APPC application programming interface (API) to provide a distributed transaction processing capability in which two or more programs cooperate to carry out a processing function. This capability involves communication between the two programs so they can share local resources such as processor cycles, databases, work queues, and physical interfaces such as keyboards and displays.

Communications Server supports APPC through the APPC APIs. Refer to the following publications for more information:

The following Communications Server functions support a range of LU types:

SDDLU Support

The self-defining dependent LU (SDDLU) support enables you to dynamically define and activate a dependent LU at the host (VTAM). In VTAM, this is known as dynamic definition of dependent LUs (DDDLU). SDDLU is enabled in Communications Server by coding an LU_MODEL statement on an LU definition.

To enable the DDDLU facility in VTAM, code the LUGROUP operand on the PU definition statement for the PU, and code an LU group major node. To use the IBM-supplied SDDLU exit routine that generates the LU names for you, you should also code the LUSEED operand on the PU statement.

The LUGROUP operand specifies the name of the model LU definition group that VTAM will use when dynamically defining LUs for this PU. The LU group major node contains the model definition statements. Dynamic definitions for LUs are built using the model LU definitions contained in this major node.

The LUSEED operand provides a pattern name that is used with the SDDLU exit routine to create a name for the dynamically created LU. Once the correct statements have been added to the PU statement and the LU group major node coded, these major nodes need to be active for the SDDLU function to be enabled.
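On the VTAM side, the relevant operands might be coded roughly as follows; the PU name, group name, and seed pattern are illustrative, and only LUGROUP and LUSEED are taken from the description above.

     WSPU01   PU    ADDR=02,PUTYPE=2,
                    LUGROUP=MODELGRP,LUSEED=WSLU###
     *  MODELGRP names an LU group major node that contains the model LU
     *  definitions used to build the dynamic LU definitions.

When a dynamic LU is activated, VTAM builds its definition from a model in MODELGRP and, if the SDDLU exit routine is used, generates the LU name from the WSLU### seed pattern.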

Dependent Logical Unit Requester Support

Dependent Logical Unit Requester (DLUR) is an architecture intended to provide dependent LU support in an APPN network. Communications Server supports all base DLUR functions and the following optional functions:

Using DLUR

To use the DLUR function, you configure a DLUR_DEFAULTS definition and use the link name from that definition as the host link for your LUA, dependent LU 6.2, or gateway definitions. Communications Server sends the PUNAME, CPNAME, and NODEID to the DLUS. The PUNAME is sent as part of the signalling information (CV X'0E').
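A DLUR_DEFAULTS definition might be sketched as follows. The PU name and DLUS names are illustrative, and the DEFAULT_PU_NAME, DLUS_NAME, and BKUP_DLUS_NAME parameter spellings are assumptions; DLUR_DEFAULTS is the keyword named above.

     DLUR_DEFAULTS=(
          DEFAULT_PU_NAME=DLURPU01
          DLUS_NAME=NETA.DLUS1
          BKUP_DLUS_NAME=NETA.DLUS2
     )

The LUA, dependent LU 6.2, or gateway definitions then refer to the link name from this definition as their host link, and Communications Server sends the PU name, CP name, and node ID to the DLUS when the pipe is established.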

Connections to the network using the connectivity of your choice (Token Ring, SDLC, AnyNet, and so on) must be configured and active before the DLUR-to-DLUS connection can be established. Once an APPN connection exists between the DLUR and DLUS, a pair of control sessions are established between the DLUR and DLUS using a special mode, CPSVRMGR. This pair of control sessions is also referred to as the CP-SVR pipe and appears as a link to Communications Server. It can therefore be activated, deactivated, and displayed using SNA Node Operations.

Once the pipe is activated, SSCP-to-PU and SSCP-to-LU support can be provided to PUs and LUs that have defined the pipe as their host link. LU-to-LU sessions do not use the pipe, but will use the best path available through the network.

In the DLUR environment, any number of dedicated PUs can be defined on the LU 6.2 sessions. This enables the gateway to provide network management access through the dedicated PU to downstream workstations without requiring numerous physical links to the hosts.

Figure 17 shows a Communications Server workstation acting as a DLUR gateway for both a workstation and a 4702 controller.

Figure 17. DLUR Connection to a Host through a Communications Server Gateway



LU-LU Sessions

Communications Server LUs can both initiate sessions and respond to session initiation requests. An LU initiates and responds to requests according to the type of LU: independent or dependent.

Independent LU

An independent LU is able to activate an LU-LU session (that is, send a BIND request) without assistance from the SSCP; therefore, it does not have an SSCP-LU session. An independent LU is capable of sending and receiving BINDs. The BIND sender is referred to as the primary LU (PLU); the BIND receiver is referred to as the secondary LU (SLU).

Only an LU 6.2 can be an independent LU. Communications Server supports independent LU protocols to other type 2.1 nodes as well as to low-entry networking (LEN) level type 5 subarea nodes.

Independent LUs can have parallel sessions between the same pair of LUs and can have multiple sessions between one LU and several other LUs. Their session limits are established on a mode-name basis and can range from 1 to 32,767.

Figure 18 shows how multiple and parallel sessions can be established by an independent LU. LUx supports parallel sessions with LUy and a single session with LUz. The direction of the session arrows shows the PLU-SLU relationship. LUx acts as the PLU for the session with LUz and for one of the sessions with LUy. LUx also acts as the SLU for one of the parallel sessions with LUy.

Figure 18. Multiple and Parallel Sessions



Dependent LU

A dependent LU is an LU that is controlled by an SNA host system. To activate an LU-LU session, a dependent LU requires assistance from an SSCP; it requires an SSCP-LU session before a BIND can be sent. Dependent LU protocols are supported by Communications Server, but only to type 5 subarea nodes using type 2.0 protocols, not to other type 2.1 peripheral nodes. Dependent LUs act as SLUs only and have an LU-LU session limit of 1. However, multiple PU support in Communications Server enables you to establish multiple simultaneous SSCP-PU sessions, each with its own dependent LU sessions.

The dependent LU requester (DLUR) function enables Communications Server to take advantage of the enhanced SSCP support provided by a dependent LU server (DLUS). Some of the benefits of this function are:

To use DLUR, configure a DEFINE_DEPENDENT_LU_SERVER parameter and use the link name from that definition for your LUA, dependent LU 6.2, or gateway definitions.

LU 6.2

Independent LUs are defined to the VTAM program by coding LOCADDR=0. There can be as many LUs defined with LOCADDR=0 as you want. Note, however, that not all LU 6.2s are independent LUs.

When you define the LUs of one part of the APPN network to the VTAM program, you must define them as being in the network node that connects this part of the APPN network to the subarea network. Following the PU definition of this network node, define each LU that you want to reach from the other part of the APPN network. Do not forget that control points are LUs.
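
For example, a switched major node for such a network node might be sketched as follows. The resource names are placeholders and most operands are omitted; note that the control point LU and the other independent LUs are all coded with LOCADDR=0.

       SWAPPN   VBUILD TYPE=SWNET
       NNPU     PU    ADDR=01,PUTYPE=2,CPNAME=NNCP,ANS=CONT
       NNCP     LU    LOCADDR=0          CONTROL POINT OF THE NETWORK NODE
       APPCLU1  LU    LOCADDR=0          INDEPENDENT LU IN THE APPN NETWORK
       APPCLU2  LU    LOCADDR=0          INDEPENDENT LU IN THE APPN NETWORK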

An LU must be defined in VTAM to establish a session with another LU if this session goes through the subarea network. There is no effective way to get around it (such as the wildcards of APPN); VTAM must know the name of each destination LU.

Because an APPN network is intended to change easily, you should define the LUs of the APPN network in a special major node whenever it is possible. You can also define, in VTAM, LUs that do not exist yet.

Other LUs

If the network node uses its connection to the subarea network for 3270 emulation, the LU type 2 LUs of the 3270 emulation are defined in the same PU macro as the LU type 6.2 LUs of the APPN network. The same link is also used for the connection between the 3270 emulation sessions and the host.
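
Continuing the earlier switched major node sketch, the 3270 emulation LUs could be added under the same PU with nonzero local addresses (the names and addresses are again placeholders):

       EMULU02  LU    LOCADDR=2          LU TYPE 2 FOR 3270 EMULATION
       EMULU03  LU    LOCADDR=3          LU TYPE 2 FOR 3270 EMULATION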

APPN Network Node and T2.1 Support

APPN is an enhancement to IBM's SNA and type 2.1 (T2.1) node architecture. APPN enables interconnection of systems of widely differing sizes into networks of a dynamic topology. An APPN network is easier to use, is more reliable, and provides more flexibility than traditional SNA networks.

Refer to 3174 APPN Implementation Guide for additional information on APPN network nodes.


Data Compression

Data compression is the process of compressing repeated bytes or repeated data strings to shorten the length of records or blocks. This reduces the transfer time needed for communications. By reducing the amount of data transferred between host and workstation sessions, you can increase the throughput on slow speed lines and lower the cost per bit on expensive lines.

The performance gain from data compression, as measured by the number of transferred bytes, is typically a ratio of about 2:1. This means that, with data compression active, roughly every second byte is saved in the buffers needed for lower-level protocol conversion.

Data compression is beneficial to those who need:

However, data compression should not be applied to every session you are running as there are disadvantages you should consider:

For detailed and technical descriptions of different compression algorithms see the following publications:

The following sections describe SNA session-level compression and Communications Server implementation.

SNA Session-Level Compression Architecture

SNA session-level compression implements data compression in the LU-LU half-session. With Communications Server, it is available for all supported LU types, that is, LU types 0, 1, 2, 3, and 6.2. Data compression at the session level provides these advantages:

Two algorithms are generally defined for SNA session-level compression: run-length encoding (RLE) and a form of Lempel-Ziv (LZ). Communications Server supports SNA session-level compression using the following algorithms:

Typically, LZ compresses data better than RLE, but at a greater cost in memory and CPU usage.

SNA session-level compression views the session in two directions, PLU-SLU and SLU-PLU. The primary logical unit (PLU) is the LU responsible for activating the session. The secondary logical unit (SLU) is the responding LU. The PLU activates a session by sending a Bind Session request (BIND) to the SLU, which answers with a BIND response. Different compression algorithms can therefore be used in the PLU-SLU and SLU-PLU directions; the compression levels are negotiated on the BIND. LU 6.2 can use any combination of compression levels for a session (for example, the PLU-SLU direction could use RLE and the SLU-PLU direction could use LZ9). All other LU types have compression either enabled or disabled; when enabled, the PLU-SLU compression level is LZ9 and the SLU-PLU level is RLE.

Communications Server Data Compression

Communications Server supports SNA session-level data compression with the RLE, LZ9, and LZ10 compression algorithms. With Communications Server, you can specify the use of data compression for communications over CPI-C sessions (which use APPC sessions), APPC (LU 6.2) sessions, and LUA (LU 0, LU 1, LU 2, and LU 3) sessions.

Enabling data compression is a two-part configuration: the Communications Server node must be enabled for data compression, and the LU (APPC or LUA) must also be enabled. The two node compression fields (level and tokens) are on the local node characteristics window (NODE keyword in the .ACG file).

The compression level field sets the maximum level that any session can be started with: NONE, RLE, LZ9, or LZ10. This field takes precedence over all compression levels configured or attempted (the only exception is stand-alone DFT, which does not require the node definition). If you are configuring a session using LUA (LU 0, LU 1, LU 2, and LU 3) to support 3270 emulation or printers, LZ9 is required for data compression. The other compression levels do not allow data compression for these LU types.
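
As an illustrative sketch only, the node-level compression fields might appear in the .ACG file as shown below. The parameter names (MAX_COMPRESSION_LEVEL and MAX_COMPRESSION_TOKENS) and the other values are assumptions for illustration; verify them against the Configuration File Reference.

       NODE=(
            FQ_CP_NAME=NETA.MYNODE
            NODE_TYPE=END_NODE
            MAX_COMPRESSION_LEVEL=LZ9
            MAX_COMPRESSION_TOKENS=0
       )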

The default request unit size for compressed modes is two times the connection's basic transmission unit (BTU) size. If you are using packet switching, you might not want to use this default size; instead, use a larger size and segment the packets.

LU 6.2 compression can be enabled by:

Three mode compression fields are displayed on the Mode Definition panel (MODE keyword in the .ACG file): compression need, PLU->SLU compression level, and SLU->PLU compression level.

Compression need can have two values:

Prohibited
No compression.

Requested
Use this to request data compression with the values defined in PLU->SLU compression level and SLU->PLU compression level. The requested level might not be obtained for the following reasons:

The SLU honors the compression levels requested by the PLU, unless limited by its node compression settings.
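
A corresponding mode definition might be sketched as follows; this example requests LZ9 compression in the PLU->SLU direction and RLE in the SLU->PLU direction. The parameter names and mode name shown are assumptions for illustration and should be checked against the Configuration File Reference.

       MODE=(
            MODE_NAME=COMPMODE
            COMPRESSION_NEED=REQUESTED
            PLU_SLU_COMPRESSION=LZ9
            SLU_PLU_COMPRESSION=RLE
       )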

For more information on Communications Server compression, refer to the online help text for the product or the Configuration File Reference.


SNA Session-Level Encryption

SNA session-level encryption enables you to encrypt either all of the data or selected data that is transferred between the workstation and the host. If you want to protect any workstation data by using encryption, the host must also be configured to use encryption.

An IBM SecureWay 4758 PCI Cryptographic Coprocessor (referred to as the IBM 4758) adapter must be installed on the server to enable data confidentiality. This adapter must be initialized by following the instructions provided with the adapter.

In Communications Server, LU 6.2 session-level encryption is configured based on the mode description used for a given transaction program. There are two levels of encryption:

To configure a mode for encryption, bring up the SNA Features window and select MODES. Then inside the Mode Definition window, select Setup.... The Compression and Session-level Encryption Support window appears. The parameters for the encryption configuration are in two parts:

Communications Server requires other products for key storage and translation. A common cryptographic architecture (CCA) product is required for key storage, managed by the utilities supplied with the IBM 4758 adapter. Communications Server calls a CCA product, which interacts with the IBM 4758 adapter to get the keys and encrypt the data.
VTAM Users:

Communications Server does not encrypt the SNASVCMG session. You must specify ENCR=OPT in the APPL statement of your VTAM application definition. When working with VTAM, you must have encryption specified on the MODEENT statement. For example:

       ENCR=B'0011' FOR MANDATORY ENCRYPTION
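
Similarly, the VTAM application definition might specify the ENCR operand as in the following sketch; the application and ACB names are placeholders, and other operands are omitted:

       MYAPPL   APPL  ACBNAME=MYAPPL,APPC=YES,ENCR=OPT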

To use the VTAM encryption facility, the IBM Programmed Cryptographic Facility (PCF) must be initiated before starting VTAM.

Beginning with VTAM V3R4.1, VTAM uses a new interface to Integrated Cryptographic Service Facility/MVS (ICSF/MVS) for cryptographic services, such as providing session-level cryptography. This interface complies with the Common Cryptographic Architecture (CCA) as implemented by ICSF/MVS. With this support, you can start and stop the cryptographic service after VTAM has been started, and you can change the master key without disrupting VTAM or active LU-LU sessions.

For information on how to define data encryption, refer to OS/390 eNetwork Communications Server: SNA Network Implementation.


Management Services

Communications Server's management services (MS) are functions distributed among network components to operate, manage, and control a network. This capability is based on the SNA management services architecture documented in Systems Network Architecture Management Services Reference.

Focal Points, Service Points, and Entry Points

Communications Server provides programming support that enables installation of management services focal point (FP), service point (SP), and entry point (EP) applications. Management services SP applications are a variation of management services EP applications and differ only in the kinds of function they provide; otherwise, they interact with a management services focal point in the same way as management services EP applications.

Focal Point
A management services focal point is a central point of control for managing a network. The management services focal point can request certain data relating to the operation of a network, such as problem and performance data or product identification, from a management services SP or management services EP application.

The management services focal point can also accept certain unsolicited management services data from the nodes it manages based on the category of the management services data. An example of a management services category is MS alerts. A management services focal point can manage one or more categories of management services data, and there can be one or more management services focal points in a network. IBM Communications Server, the IBM NetView program, and the IBM OS/400 operating system are examples of products that provide management services focal point capability.

Service Point
A management services SP is the function in a node that can request and capture data from devices that, by themselves, cannot serve as management services EPs, such as devices connected by LAN protocols (but not higher-level SNA protocols) to the management services SP node. Aside from collecting nonlocal data, a management services SP functions like a management services EP in its relationship to the management services focal point. The IBM NetView/PC and IBM LAN Network Manager program products are examples of products that provide both management services SP functions and management services EP functions.

Entry Point
A management services EP is the function in a node that captures local management services data and sends it to a management services focal point for processing, either upon request or unsolicited. Communications Server provides the management services EP function for sending alerts to the alert management services focal point. These alerts can originate within Communications Server or the DLCs it uses. Communications Server also provides programming support for applications, such as the IBM NetView/PC and IBM LAN Network Manager program products, by supplying alerts to be sent to the alert management services focal point.

Levels of SNA Management Services Architecture

An SNA product implements a particular level (or generation) of the SNA Management Services architecture, and some products support several levels of the architecture. Communications Server can send management services data to, and receive management services data from, SNA products that implement any of three levels of the management services architecture. These levels are:

Multiple Domain Support (MDS) level
An SNA product that implements the MDS level of the management services architecture, such as Communications Server and IBM NetView Version 2 Release 2 (or later). It can send and receive MDS message units (MDS-MUs). IBM NetView Version 2 Release 2 provides MDS-level support as a subarea LU, not a control point (CP), and uses SNASVCMG-mode sessions for transporting MDS-MUs. As a focal point, it supports explicit, implicit (primary), and implicit (backup) FP-EP relationships. NetView Version 2 Release 2 also continues to support the host FP-EP relationship to EP products that do not have MDS-level support.

Migration level
An SNA product that implements the previous level of the management services architecture, such as IBM OS/400 Version 1 Release 3 Modification Level 0 (or earlier). A migration-level product can support explicit, default, and domain FP-EP relationships. The domain FP-EP relationship is inferred when the CP-CP sessions are activated to a migration-level node. A migration-level serving network node (NN) does not send MS Capabilities for FP Notification to its served end nodes (ENs), and a migration-level served EN does not accept MS Capabilities for FP Notification from its serving NN. It can send and receive CP-MSUs but not MDS-MUs. As a focal point, it supports only the alert MS category.

Network Management Vector Transport (NMVT) level
An SNA product that implements the NMVT level of SNA management services architecture, such as IBM NetView Version 2 Release 1 (or later). NMVT is a management services request unit (RU) that flows over an active session between PU management services and control point management services. If an NMVT is routed from a workstation through a gateway, then the gateway adds its control point name to the NMVT.

Flow Control

To manage the flow of data over a network, Communications Server uses adaptive session-level pacing. The pacing occurs between each pair of adjacent nodes participating in the session route. The pacing between two adjacent nodes is independent of the pacing used between other adjacent nodes in the route.

Session-Level Pacing

Adaptive session-level pacing uses a window-based scheme, where a sender can send only a limited number, or window, of request units per explicit grant of permission to proceed. The window size can be changed based on conditions at the receiver. This function permits a node to control the amount of data that is sent and received during normal session operation. The window control enables the receiving node to manage its rate for receiving data into its session buffers. Adaptive session-level pacing provides a node supporting many sessions a dynamic means to allocate resources to a session that has a burst of activity and to reclaim unused resources from sessions that have no activity. Adaptive session-level pacing enables the receiving node to use its available buffer resources efficiently.

Because each session stage between the endpoints is independently paced, both endpoint nodes and intermediate nodes can adapt the pacing for the sessions they handle in accordance with their own local congestion conditions. This action is the basis for global flow control and congestion management in APPN networks.

If, however, an interactive session and a session transferring a large file share a link, the interactive session data should be transmitted as quickly as possible. There are two ways to do this:

  1. Assign a lower priority to the file transfer session (the #BATCH mode uses low transmission priority).
  2. Use fixed pacing with a small window size for the file transfer session to enable interactive session data to use the link when the file transfer session is waiting for the pacing response. If the node is connected directly to an NCP host, two-way fixed window pacing can be used to set pacing in both directions to the receive window on the defined mode.

Adaptive BIND Pacing

BIND traffic can occur in bursts, particularly at node or network startup. Therefore, adaptive BIND pacing exists to control the flow of BINDs between two adjacent nodes. The same window algorithm used for session-level pacing is employed.

Segmenting and Reassembly

To transmit RUs longer than the maximum-size basic transmission unit allowed by a particular link, Communications Server supports data segmentation and reassembly. These segments are reassembled into whole RUs at the partner node. This action enables the RU size defined for a session to be independent of the link that is used for the route.

High Performance Routing Pacing

High performance routing (HPR) provides a new method of flow control called adaptive rate-based congestion control (ARB). ARB regulates traffic flow by predicting congestion in the network and reducing a node's sending rate into the network, preventing congestion rather than reacting to it.

Fixed Pacing

Fixed pacing enables you to share a physical connection between two sessions. Without fixed pacing, the data to be transmitted is placed on a common data link control (DLC) queue and interactive data follows previously queued data. Fixed pacing also reduces the amount of storage that can be used to place data on the DLC queue. Two-way fixed pacing can be used with an NCP to avoid defining host fixed pacing. However, in general, adaptive pacing is the most efficient method of data transfer between nodes.

Transmission priority, like fixed pacing, enables sharing of a physical link between sessions. It locks storage as the data is placed on the DLC queues, but it does not require the additional pacing responses that fixed pacing requires.

Partitioning LUs among Hosts

When you define multiple subarea host connections, traffic from the domain of a given host must enter on only one logical link. Note that manual dial connections appear to be a single link. You must define a different PU to support each different host. Only the host links defined on the control point can have CP-CP sessions and participate in the APPN network; links that have USE_PU_NAME_IN_XID=1 cannot have CP-CP sessions. Otherwise, the host links can have CP-CP sessions and can also participate in APPN communication. The LINK_STATION keyword specifies the PU name and logical link to be used for the PU. If parallel links are required (in situations where there are more than 254 dependent LUs), one of the links must have CP-CP session support set to No.

Each dependent LU can be defined as associated with only one PU. The LOCAL_LU keyword specifies the host link name used for a dependent LU type 6.2. The LU_0_TO_3 keyword specifies the host link name for LUA, and the 3270 profile specifies the host link for each 3270 emulation session.
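
As a rough sketch of this partitioning in the .ACG file, two link stations could be defined to two hosts, with each dependent LU associated with the PU (and therefore the link) for its owning host. The field names within each keyword (LS_NAME, PU_NAME, NAU_ADDRESS, and so on) and all resource names are assumptions for illustration; verify them against the Configuration File Reference.

       LINK_STATION=(
            LS_NAME=HOSTA
            PU_NAME=PUHOSTA
       )
       LINK_STATION=(
            LS_NAME=HOSTB
            PU_NAME=PUHOSTB
       )
       LU_0_TO_3=(
            LU_NAME=LUA01
            PU_NAME=PUHOSTA
            NAU_ADDRESS=2
       )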

The control point automatically defines a PU with the same name as the control point. Dependent LUs that are defined at one subarea SSCP can be associated with the control point PU. (In fact, the automatically defined LU for the control point can also be specified as dependent.) Each additional PU 2.0 for a different subarea host requires a separate link and PU definition. A PU is defined by specifying PU on a LINK_STATION keyword. This is unnecessary if all LUs are independent.

If the SSCP-PU session is to send alerts to the host, define the focal point link with the NODE keyword in the .ACG file. If a host is to receive the alerts, specify a logical link for the control point to the host. If the link to that host is not available, Communications Server logs the alerts. The only network management requests that are accepted from a PU 2.0 host are the ones for the control point PU. Replies are sent to the same host using the control point PU.

Each dependent logical unit must have a configured local address that is the same as the one configured at the host. However, use of the SNA gateway allows for a gateway address translation. A dependent LU 6.2 can act as an independent LU to a peer node; that is, it is dependent only to the subarea. Such an LU should not be a part of an APPN network with any other connection to the same subarea, that is, the subarea containing the controlling SSCP. Only one subarea connection is permitted for an APPN network, unless the subareas are independent. Although a gateway can have parallel links to the same subarea, only one can have CP-CP sessions and APPN traffic.

An SSCP in the subarea network activates the dependent LUs it controls after the link to it is established. Until an LU is activated, the LU cannot start a session. When APPC is stopped, a disconnect is requested from each active host that is in session with a PU in the node. Each host frees the link after deactivating first the LUs and then the PU on the SSCP-PU session.


SNA Gateway Support

Communications Server provides a full-function Systems Network Architecture (SNA) gateway. The gateway allows multiple LAN-attached workstations to access System/370 or System/390 hosts through one or more physical connections to one or more hosts. This helps reduce the cost per workstation of host connections.

Figure 19. Example of SNA Gateway Configuration



The Communications Server gateway supports the SNA protocols LU 0, 1, 2, 3, and dependent LU 6.2 (APPC). With the AnyNet SNA over TCP/IP function, downstream workstations can now communicate with the SNA gateway over an IP network. The gateway also supports LU 0, 1, 2, or 3 to an AS/400 host using SNA pass-through. The AS/400 host passes the data through to a System/390 host.

A gateway can also act as a protocol converter between workstations attached to a LAN and a WAN host line.

The LUs defined in the gateway can be dedicated to a particular workstation or pooled among multiple workstations. Pooling allows workstations to share common LUs, which increases the efficiency of the LUs and reduces the configuration and startup requirements at the host. You can also define multiple LU pools, each pool associated with a specific application. And you can define common pools that are associated with multiple hosts. When a client connects to the gateway, the gateway retrieves an LU from the pool to establish a session. The LU is returned to the pool for access by other workstations when the session is ended.

In addition, an SNA gateway can support the forwarding of network management vector transports (NMVTs) between the workstations and the host.

Each host views the SNA gateway as an SNA PU 2.0 node, supporting one or more LUs per workstation. As far as the host is concerned, all LUs belong to the SNA gateway PU. The SNA gateway can have multiple host connections simultaneously and can direct different workstation sessions to specific hosts.

To the supported workstations, the SNA gateway looks like an SNA PU 4 communications controller and forwards such host requests as BIND and UNBIND. The workstation LUs are not aware of the SNA gateway. The SNA gateway, however, is aware of all LUs at the workstations.

Communications Server supports downstream applications that use standard SNA connectivity protocols for LU 0, 1, 2, 3, and dependent 6.2 and that communicate with a host through an SNA gateway. Table 4 summarizes the SNA gateway features.

Table 4. SNA Gateway Summary

Active workstations
254 (LAN) per adapter
128 (X.25)

DLCs
AnyNet (SNA over TCP/IP)
Twinaxial (upstream only)
LAN (any NDIS-compliant network adapter)
X.25
SDLC (synchronous, asynchronous, and AutoSync)
OEM Channel (upstream only)
MPC Channel (upstream only; requires DLUR)
Enterprise Extender

Downstream workstations
Any product that supports standard SNA connectivity protocols for LU 0, 1, 2, 3, and 6.2

Dynamic additions and changes
Yes

Implicit workstation support
Yes

LU pooling
Yes

Maximum number of LUs
254 per PU; no limit on the number of PUs

Mode of operation
Multiple downstream PUs; the downstream PUs are not visible to the host (except when connected through DLUR)

Multiple PU support
Yes

Segmenting support
Yes

Supported LU types
LU 0, 1, 2, 3, and dependent 6.2

