In the Description section, the Availability of OSA-Express Enhancements
section was updated.
zSeries FICON Support of Cascaded Directors
Today, IBM is announcing an enhancement to Native FICON (FC Channel Path
Identifier [CHPID] type) channels, adding zSeries FICON support of
cascaded directors, planned to be available on
January 31, 2003. This support is for a two-switch
configuration only. With cascading, a Native FICON (FC CHPID type)
channel or a FICON Channel to Channel (CTC) can connect a server to a
device or other server with two native FICON directors in between. This
cascaded director support is planned to be delivered in conjunction with
IBM's remarketed INRANGE FC/9000 and McDATA Intrepid FICON directors.
Cascaded director support (sometimes referred to as cascaded switching)
is for single-vendor fabrics only.
This announcement pertains to all Native FICON (FC CHPID type) channels
implemented either on existing FICON features on z900 or on new or
existing FICON Express features on z900 and z800. It applies to Native
FICON (FC CHPID type) channels only, including FICON CTC.
IBM plans to support a limited number of early installations for
qualified customers through IBM's Early Support Program. Contact your
IBM representative if you are interested in participating in this
program.
Cascaded support is important for disaster recovery and business
continuity solutions. It can provide high-availability connectivity as
well as the potential for fiber infrastructure cost savings for extended
storage networks. FICON two-director cascaded technology can allow for
shared links and therefore improved utilization of inter-site resources
and infrastructure. Solutions such as Geographically Dispersed
Parallel Sysplex (GDPS) can benefit from the reduced inter-site
configuration complexity that Native FICON cascaded directors provide.
While specific cost savings vary depending upon infrastructure,
workloads, and the size of data transfers, customers with data centers
at two separate sites can generally reduce the number of cross-site
connections by using cascaded directors. Further savings may be realized
by reducing the number of channels and switch ports.
Another important value of FICON cascaded directors is their ability to
provide high-integrity data paths. The high integrity function is an
integral component of the FICON architecture when configuring FICON
channel paths through a cascaded fabric. To support the introduction of
FICON cascaded switching, IBM has worked with the FICON director vendors
to help ensure that robustness in the channel to control unit path is
maintained to the same high standard of error detection, recovery, and
data integrity that has existed for many years with both ESCON® and
the initial implementation of FICON.
End-to-end data integrity is designed to be maintained through the
cascaded director fabric. Data integrity helps ensure that any changes
to the customer's data streams are always detected, and the data frames
(data streams) are delivered to the correct end point (an end point being
a FICON channel port or a FICON Control Unit [CU] port). For FICON
channels, Cyclic Redundancy Checking (CRC) and Longitudinal Redundancy
Checking (LRC) are bit patterns added to the customer data streams to
allow for detection of any bit changes in the data stream. With FICON
cascaded switching, new integrity features are introduced within the
FICON channel and the FICON cascaded switch fabric to help ensure the
detection and reporting of any miscabling actions occurring within the
fabric during operational use that may cause a frame to be delivered to
the wrong end point.
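As an illustration of the redundancy-check idea, here is a minimal Python sketch. It is not the channel implementation, and it uses a generic CRC-32 rather than the exact FICON check polynomials; it only shows how an appended check value lets a receiver detect any bit change:

    import zlib

    def append_crc(payload: bytes) -> bytes:
        # Sender side: append a 32-bit CRC to the data stream.
        crc = zlib.crc32(payload)
        return payload + crc.to_bytes(4, "big")

    def verify_crc(frame: bytes) -> bool:
        # Receiver side: recompute the CRC; any bit change is detected.
        payload, received = frame[:-4], int.from_bytes(frame[-4:], "big")
        return zlib.crc32(payload) == received

    frame = append_crc(b"customer data stream")
    assert verify_crc(frame)                            # intact frame passes

    corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]    # flip one bit
    assert not verify_crc(corrupted)                    # corruption is caught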
A FICON channel, when configured to operate with a cascaded switch
fabric, requires that the switch fabric support high integrity. During
initialization, the FICON channel queries the switch fabric to determine
whether it supports high integrity; if it does, the channel completes
the initialization process, allowing it to operate with the fabric.
Both McDATA and INRANGE may offer features to
support high integrity cascading. For more information on availability,
contact McDATA and INRANGE directly.
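In pseudocode terms, the initialization check described above amounts to the following. This is a hedged Python sketch; the class and method names are hypothetical illustrations, not actual channel or director interfaces:

    from dataclasses import dataclass

    @dataclass
    class SwitchFabric:
        high_integrity: bool  # whether the fabric reports high-integrity support

    def initialize_channel(fabric: SwitchFabric) -> bool:
        # During initialization the channel queries the fabric; if high
        # integrity is not supported, the channel refuses to operate with it.
        if not fabric.high_integrity:
            return False
        # ... the remainder of channel initialization would proceed here ...
        return True

    assert initialize_channel(SwitchFabric(high_integrity=True))
    assert not initialize_channel(SwitchFabric(high_integrity=False))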
Once a FICON switch fabric has been customized to support FICON cascaded
switching and the required switches have been defined in the fabric
switch list, the fabric checks that its inter-switch links (ISLs) are
installed to the correct switches before they are made operational. Once
the ISLs are operational, any changes to the ISL connections will be
checked by switches within the fabric before they can be used (the
connected switches must be in the switch fabric list). With this
checking, if an ISL is incorrectly installed, the fabric is designed to
stop using the links for customer data streams, thereby preventing
frames from being delivered to the wrong end points.
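The fabric-side check can be pictured as a simple membership test. The following Python sketch is illustrative only; the switch names and the function are hypothetical, not director firmware:

    # The customized fabric switch list names the switches allowed in the fabric.
    FABRIC_SWITCH_LIST = {"switch-A", "switch-B"}

    def isl_operational(local_switch: str, remote_switch: str) -> bool:
        # An ISL carries customer data only if both of its endpoints appear
        # in the fabric switch list; a miscabled link is fenced off.
        return {local_switch, remote_switch} <= FABRIC_SWITCH_LIST

    assert isl_operational("switch-A", "switch-B")       # correctly cabled
    assert not isl_operational("switch-A", "switch-X")   # miscabled: not used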
FICON Express Features Support 2 Gbps Link Speeds
IBM announces the availability of 2 Gbps links for the zSeries FICON
Express features configured as native FICON (FC CHPID type) or Fibre
Channel Protocol (FCP CHPID type). This 2 Gbps support is available for
the FICON Express features on zSeries 900 and zSeries 800.
This announcement is consistent with the industry's move to support
2 Gbps link data rates as defined by the industry-standard Fibre
Channel architecture. The FICON Express features are capable of
auto-negotiation with the attached device to operate at 1 or 2 Gbps.
Applications with highly sequential large data transfers of all reads or
all writes (previously limited to 100 Megabytes per second [MB/s]
with the 1 Gbps link) can now achieve up to 150 to 170 MB/s with the 2
Gbps link, which means up to a 50 to 70% improvement in the effective
data transfer rate.
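The improvement percentages follow directly from the quoted throughput figures:

    one_gbps_mb_s = 100          # effective limit with a 1 Gbps link
    for two_gbps_mb_s in (150, 170):
        gain = (two_gbps_mb_s - one_gbps_mb_s) / one_gbps_mb_s
        print(f"{two_gbps_mb_s} MB/s is a {gain:.0%} improvement")
    # prints: 150 MB/s is a 50% improvement
    #         170 MB/s is a 70% improvement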
To benefit from 2 Gbps link data rates, the infrastructure needs to be
positioned to take advantage of this higher data rate capability. For 2
Gbps link capability, each FICON/Fibre Channel port and each FICON/Fibre
Channel device must be capable of 2 Gbps in order to achieve a 2 Gbps
end-to-end data rate.
For FICON/Fibre Channel directors, 2 Gbps support is, or is planned to
be, available for IBM remarketed McDATA and INRANGE directors. The
McDATA Intrepid 6064 director introduced a 2 Gbps Fibre Channel
capability earlier this year (IBM 2032-064). Refer to
ibm.com/storage/mcdata for additional details. The INRANGE FC/9000 Fibre
Channel director is expected to support a 2 Gbps link data rate (64-port
IBM 2042-001, 128-port IBM 2042-128). Refer to ibm.com/storage/inrange
for additional details.
For IBM Disk Drives, 2 Gbps support is available with the IBM
TotalStorage Enterprise Storage Server (ESS) Machine Type 2105,
Model 800. As announced on July 15, 2002, ESS general availability is
planned for August 16, 2002. Refer to the announcement dated
July 15, 2002.
The major benefit for disk transfers will be increased data throughput.
Theoretically, up to twice the data may be transferred over the same
number of connections when the link speed is doubled.
In most applications, the sustained throughput is 70 to 80% of the
theoretical maximum.
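For example, taking a nominal 200 MB/s as the 2 Gbps theoretical rate (an assumption for illustration, double the 100 MB/s cited earlier for 1 Gbps links):

    theoretical_mb_s = 200       # assumed 2 Gbps payload rate, double 100 MB/s
    for efficiency in (0.70, 0.80):
        print(f"{efficiency:.0%} sustained: {theoretical_mb_s * efficiency:.0f} MB/s")
    # prints: 70% sustained: 140 MB/s
    #         80% sustained: 160 MB/s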
The following matrix contrasts the unrepeated distances achievable when
using single mode or multimode fiber at 1 Gbps and 2 Gbps.
Fiber Type         Light     1 Gbps link               2 Gbps link
in Microns (u)     Source    Unrepeated    Link        Unrepeated    Link
                             Distance      Budget      Distance      Budget
9 u single mode    LX laser  10 km         7.8 dB      10 km         7.8 dB
                             (6.2 miles)               (6.2 miles)
50 u multimode     SX laser  500 meters    3.9 dB      300 meters    2.8 dB
                             (1640 feet)               (984 feet)
62.5 u multimode   SX laser  250 meters    2.8 dB      120 meters    2.2 dB
                             (820 feet)                (394 feet)
These numbers reflect the Fibre Channel Physical Interface
specification. The link budget above is derived from combining the
channel insertion loss budget with the unallocated link margin budget.
The light budget numbers have been rounded to the nearest tenth.
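For instance, the 3.9 dB budget in the 50 u multimode row at 1 Gbps would decompose along the following lines. The component values below are hypothetical, chosen only to illustrate how the two parts combine; they are not taken from the specification:

    channel_insertion_loss = 3.0    # dB -- hypothetical component value
    unallocated_link_margin = 0.9   # dB -- hypothetical component value
    link_budget = channel_insertion_loss + unallocated_link_margin
    print(f"link budget: {link_budget:.1f} dB")   # 3.9 dB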
Refer also to Planning for Fiber Optic Links (ESCON, FICON, Coupling
Links, and Open Systems Adapters), GA23-0367-07b or higher, which can be
found on Resource Link.
Care should be taken to ensure that the tactical as well as the strategic
requirements for your data center, Storage Area Network (SAN), and
Network Attached Storage (NAS) infrastructures are taken into
consideration as you employ link data rates of 2 Gbps and beyond.
Mode Conditioning Patch (MCP) cables are not supported at the 2 Gbps
link data rate.
Additionally, as distance solutions are planned, attention must be paid
to the types of fiber optic cabling employed and the associated distance
limitations. Dense Wavelength Division Multiplexing (DWDM) and optical
amplifier technologies may soon be, or may already be, available at
2 Gbps link speeds from Cisco Systems and Nortel Networks. For more
information on
availability, contact these vendors directly.
Note that in the FICON/FCP environment, a single-vendor fabric is
supported; there cannot be a mix of Nortel Networks and Cisco Systems
equipment in the same SAN. Neither can there be a mix of McDATA and
INRANGE directors in the same SAN.
Network Connectivity Update
Support for Multiple Secondary Router Settings:
Presently, an OSA-Express feature allows the setting of two routing
stacks/interfaces (one primary and one secondary) that serve as receptors
for all Internet Protocol (IP) packets with IP destination addresses not
matching a registered IP address on the OSA-Express feature. For QDIO,
these designations are presently set through a statement in the TCP/IP
profile on z/OS and z/VM and in the chandev.conf file on Linux.
You will now be able to define multiple secondary routers (but still
only one primary) for QDIO Gigabit Ethernet and Fast Ethernet only. IP
packets received from the Local Area Network (LAN) with an unknown
destination address will be routed in the following manner:
If the primary is defined, the IP packets will be routed to the primary.
If one or more secondary stacks/interfaces are defined and the primary
stack/interface is not active, the IP packets will be forwarded to one of
the active stacks/interfaces that has the secondary routing indicator
set.
There is no way to explicitly set the order of the secondary routers.
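Putting the two rules together, the forwarding decision can be sketched as follows. This is a hedged Python illustration of the behavior as described above; the types and names are illustrative, not OSA-Express internals:

    from dataclasses import dataclass

    @dataclass
    class Stack:
        name: str
        active: bool

    def select_router(primary, secondaries):
        # Packets with unregistered destination IPs go to the primary if it
        # is defined and active; otherwise to any active secondary. The
        # order among secondaries cannot be set explicitly.
        if primary is not None and primary.active:
            return primary
        for stack in secondaries:
            if stack.active:
                return stack
        return None   # no eligible router: the packet is not forwarded

    routers = [Stack("sec1", active=False), Stack("sec2", active=True)]
    assert select_router(None, routers).name == "sec2"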
The latest level of zSeries Licensed Internal Code (LIC) is required for
this support. There is no prerequisite software level required.
Availability of OSA-Express Enhancements
IBM announced OSA-Express enhancements for networking on
April 30, 2002. The hardware support became available
May 15, 2002. Now, IBM is announcing the availability of the
following functions in the z/OS Version 1 Release 4 environment:
Internet Protocol Version 6 (IPv6) support for the OSA-Express
Gigabit Ethernet and Fast Ethernet features when configured in Queued
Direct Input/Output (QDIO) mode.
Direct SNMP Query support for all of the OSA-Express features when
configured in QDIO mode.
TCP/IP Broadcast support for all of the OSA-Express features when
configured in QDIO mode and supporting the Routing Information Protocol
(RIP) Version 1.
ARP Cache Management support: purge ARP entries in the cache for
Internet Protocol Version 4 (IPv4). This Address Resolution Protocol
(ARP) enhancement is being offered for all of the OSA-Express features
when configured in QDIO mode using IPv4.
Refer to the Software Announcement dated August 13, 2002.
Geographically Dispersed Parallel Sysplex (GDPS) Enhancements:
GDPS, an industry-leading e-business availability solution available
through IBM Global Services, is a multisite solution that is
designed to provide the capability to manage the remote copy
configuration and storage subsystems, automate Parallel Sysplex
operational tasks, and perform failure recovery from a single point of
control, thereby helping to improve application availability. GDPS
supports both the synchronous Peer-to-Peer Remote Copy (PPRC), as well as
the asynchronous Extended Remote Copy (XRC) forms of remote copy.
Depending on the form of remote copy, the solution is referred to as
GDPS/PPRC or GDPS/XRC. GDPS/PPRC and GDPS/XRC have been enhanced to
include the following new functions:
GDPS/PPRC HyperSwap function is designed to broaden the continuous
availability attributes of GDPS/PPRC by extending Parallel Sysplex
redundancy to disk subsystems. The GDPS/PPRC HyperSwap function provides the
ability to transparently switch all primary PPRC disk subsystems with the
secondary PPRC disk subsystems for a planned switch reconfiguration.
Planned to become available in the second half of 2002, it is designed to
provide the ability to perform disk configuration maintenance and planned
site maintenance without requiring any applications to be quiesced.
Peer-to-Peer Virtual Tape Server (VTS) support for a GDPS/XRC
configuration. Peer-to-Peer (PtP) VTS support was initially announced
for a GDPS/PPRC configuration in November 2001. PtP VTS support has
now been extended to a GDPS/XRC configuration. The PtP VTS provides a
hardware-based duplex tape solution and GDPS is designed to automatically
manage the duplexed tapes in the event of a site failure. By extending
GDPS support to data resident on tape, the GDPS solution is designed to
provide continuous availability and near-transparent business continuity
benefit for both disk and tape resident data. Enterprises should no
longer be forced to develop and utilize processes that create duplex
tapes and maintain the tape copies in alternate sites.
Enhanced HMC support for GDPS/PPRC and GDPS/XRC configurations.
GDPS/PPRC and GDPS/XRC configurations are significantly enhanced in
terms of availability and simplicity. These enhancements are made
available through new operating system support that is designed to
eliminate the need for a previously required workstation in the GDPS
configuration, thus simplifying it. This support is available with
zSeries driver 3G and
G5/G6 driver 26, with current maintenance levels, and OS/390 2.10 and
later with the service defined in the PSP Bucket for MSYSOPS.
For a more detailed description of GDPS and these functions, refer to the
white paper available at: