Family 2097+01 IBM System z10 Enterprise Class

IBM United States Sales Manual
Revised:  July 11, 2017.

Table of contents
Product life cycle dates          Technical description
Abstract                          Publications
Highlights                        Features
Description                       Accessories
Product positioning               Machine elements
Models                            Supplies

 
Product life cycle dates
Type-Model   Announced    Available    Marketing Withdrawn   Service Discontinued
2097-E12     2008/02/26   2008/02/26   2012/06/30            -
2097-E26     2008/02/26   2008/02/26   2012/06/30            -
2097-E40     2008/02/26   2008/02/26   2012/06/30            -
2097-E56     2008/02/26   2008/02/26   2012/06/30            -
2097-E64     2008/02/26   2008/02/26   2012/06/30            -

 
Abstract

The IBM 2097 System z10 EC(TM) is a world-class enterprise server designed to meet your business needs. The System z10 EC is built on the inherent strengths of the IBM System z(TM) platform and is designed to deliver technologies and virtualization that provide improvements in price/performance for key new workloads. The System z10 EC further extends System z leadership in key capabilities with the delivery of expanded scalability for growth and large-scale consolidation, improved security and availability to reduce risk, and just-in-time capacity deployment, helping you respond to changing business requirements.

Model abstract 2097-E12

The IBM 2097 System z10 Enterprise Class Model E12 has 1 to 12 Processor Units (PUs), 16 to 352 GB of memory, 16 IB, 1 to 3 I/O cages, and 960 CHPIDs.

Model abstract 2097-E26

The IBM 2097 System z10 Enterprise Class Model E26 has 1 to 26 Processor Units (PUs), 16 to 752 GB of memory, 32 IB, 1 to 3 I/O cages, and 1024 CHPIDs.

Model abstract 2097-E40

The IBM 2097 System z10 Enterprise Class Model E40 has 1 to 40 Processor Units (PUs), 16 to 1136 GB of memory, 40 IB, 1 to 3 I/O cages, and 1024 CHPIDs.

Model abstract 2097-E56

The IBM 2097 System z10 Enterprise Class Model E56 has 1 to 56 Processor Units (PUs), 16 to 1520 GB of memory, 48 IB, 1 to 3 I/O cages, and 1024 CHPIDs.

Model abstract 2097-E64

The IBM 2097 System z10 Enterprise Class Model E64 has 1 to 64 Processor Units (PUs), 16 to 1520 GB of memory, 48 IB, 1 to 3 I/O cages, and 1024 CHPIDs.
 

Highlights

The System z10 EC delivers:

  • Improved total system capacity in a 64-way server, offering increased levels of performance and scalability to help enable new business growth.
  • z10 Quad-core 4.4 GHz processor chips that can help improve the execution of CPU-intensive workloads.
  • Up to 1.5 terabytes of available real memory per server for growing application needs (with up to 1 TB real memory per LPAR).
  • Increased scalability with 36 available subcapacity settings.
  • Just-in-time deployment of capacity resources which can improve flexibility when making temporary or permanent changes. Activation can be further simplified and automated using z/OS Capacity Provisioning (available on z/OS V1.9 with PTF and on z/OS V1.10, when available).
  • Capacity for Planned Event (CPE), a temporary capacity offering and a variation of Capacity Back Up (CBU). CPE can be used when capacity is unallocated, but available, and is needed for a short-term event.
  • A 16 GB fixed Hardware System Area (HSA) which is managed separately from customer memory. This fixed HSA is designed to improve availability by avoiding outages.
  • Memory and books that are interconnected with a point-to-point symmetric multiprocessor (SMP) network running with an InfiniBand(R) host bus bandwidth of 6 GBps, designed to deliver improved performance.
  • The InfiniBand Coupling Links with a link data rate of 6 GBps, designed to provide a high speed solution and increased distance (150 meters) compared to ICB-4 (10 meters).
  • The OSA-Express3 10 GbE LR with double the port density, increased throughput, and reduced latency.
  • HiperSockets improvements with Multiple Write Facility for increased performance and Layer 2 support to host IP and non-IP workloads.
  • Encryption accelerator provided on quad-core chip, which is designed to provide high-speed cryptography for protecting data in storage. CP Assist for Cryptographic Function (CPACF) offers more protection and security options with Advanced Encryption Standard (AES) 192 and 256 and stronger hash algorithm with Secure Hash Algorithm (SHA-512 and SHA-384).
  • HiperDispatch for improved efficiencies between hardware and the z/OS operating system (z/OS 1.7 and above).
  • Hardware Decimal Floating Point unit on each core on the Processor Unit (PU), which can aid in decimal floating point calculations and is designed to deliver performance improvements and precision in execution.
  • Large page support (1 megabyte pages).
  • Up to 336 FICON Express4 channels.
  • Fiber Quick Connect (FQC), a fiber harness integrated in the System z10 EC frame for a 'quick' connect to ESCON and FICON LX channels.
  • Support for IBM Systems Director Active Energy Manager (AEM) for Linux on System z for a single view of actual energy usage across multiple heterogeneous IBM platforms within the infrastructure. AEM V3.1 is a key component of IBM's Cool Blue(TM) portfolio within Project Big Green.
  • The IBM System z9 Enterprise Class (z9 EC) and System z9 Business Class (z9 BC) servers are the last servers to support participation in the same Parallel Sysplex with IBM eServer zSeries 900 (z900), IBM eServer zSeries 800 (z800), and older System/390 Parallel Enterprise Server systems.

 
Description

The IBM 2097 System z10 EC(TM) is a marriage of evolution and revolution, building on the inherent strengths of the System z(TM) platform and delivering new technologies and virtualization that are designed to offer improvements in price/performance for key workloads as well as enabling a new range of solutions. The z10 EC(TM) further extends the leadership of System z in key capabilities with the delivery of expanded scalability for growth and large-scale consolidation, availability to help reduce risk, improved flexibility to respond to changing business requirements, and improved security. The z10 EC is at the core of the enhanced System z platform that is designed to deliver technologies that business needs today along with a foundation to drive future business growth.

With a modular book design, the z10 EC E64 is designed to provide up to 1.7 times the total system capacity of the z9 EC Model S54 and up to three times the available memory of the z9 EC. Significant steps have been taken in the area of server availability in the z10 EC design. Preplanning requirements are minimized by delivering a fixed, reserved Hardware System Area (HSA) and new capabilities intended to allow you to seamlessly create logical partitions (LPARs), include logical subsystems, change logical processor definitions in an LPAR, and add cryptographic capabilities for an LPAR without a power-on reset.

z10 EC introduces just-in-time deployment of capacity resources designed to provide more flexibility to dynamically change capacity when business requirements change. You are no longer limited by one offering configuration; instead you can define one or more flexible configurations that can be used to solve multiple temporary situations. You can now have multiple configurations active at once and the configurations themselves are flexible so you can activate only what is needed from your defined configuration. As long as your total z10 EC infrastructure can support the maximums that are defined, they can be delivered. A significant change is the ability to add permanent capacity to the server when you are in a temporary state. The combination of these updates can change the way you think about on demand capacity.

Integrated clear-key encryption security features on z10 EC include support for a higher advanced encryption standard and more secure hashing algorithms. Performing these functions in hardware is designed to contribute to improved performance.

Integrated on the z10 EC processor unit is a Hardware Decimal Floating Point unit to accelerate decimal floating point transactions. This function is designed to markedly improve performance for decimal floating point operations which offer increased precision compared to binary floating point operations. This is expected to be particularly useful for the calculations involved in many financial transactions.

Innovations on the z10 EC are designed to give needed capacity and memory along with the just-in-time management of resources. Advanced virtualization technologies aid in server consolidation, satisfying high I/O requests and dynamic provisioning of new servers.

IBM Global Financing (IGF) can provide attractive low-rate financing for all new and upgraded z10 EC products, storage, software, and services. For more information, contact your local IGF sales representative or visit the Web site:

http://www.ibm.com/financing

Available worldwide for eligible customers acquiring products and services from IBM and IBM Business Partners.

The IBM System z10 Enterprise Class - A total systems approach to deliver leadership in enterprise computing

With a total systems approach designed to deploy innovative technologies, IBM System z introduces the z10 EC, supporting z/Architecture(R), and offering the highest levels of reliability, availability, scalability, clustering, and virtualization. The z10 EC just-in-time deployment of capacity allows improved flexibility, administration, and the ability to enable changes as they happen. The expanded scalability on the z10 EC facilitates growth and large-scale consolidation. The z10 EC is designed to provide:

  • Uniprocessor performance improvement up to 62% (based on LSPR mixed workload average).
  • Non-uniprocessor performance improvement up to 50% (based on LSPR mixed workload average) for configurations with the same number of processors.
  • Up to 1.7 times the total system capacity of the z9 EC
  • Up to 64 Processor Units (PUs) compared to a maximum of 54 on the z9 EC
  • Up to 3 times as much total server available memory - up to 1.5 terabytes of total memory
  • Up to 50% more subcapacity choices as compared to z9 EC
  • Increased host bus bandwidth using InfiniBand(R) at 6 GBps
  • Coupling with InfiniBand for improved distance and potential cost saving
  • Performance improvements with HiperSockets Multiple Write Facility
  • Improved Advanced Encryption Standard (AES) 192 and 256 and stronger hash algorithms with Secure Hash Algorithm (SHA) 384 and 512
  • HiperDispatch for improved efficiencies between hardware and the z/OS(R) operating system (z/OS 1.7 and above)
  • Hardware Decimal Floating Point unit for improved numeric processing performance
  • Reduction in the availability impact of preplanning requirements
    • Fixed Hardware System Area (HSA) designed so the maximum configuration capabilities can be exploited
    • Designed to reduce the number of planned Power-on-Resets
    • Designed to allow dynamic add/remove of a new logical partition (LPAR) to new or existing logical channel subsystem (LCSS)
  • Open Systems Adapter-Express3 (OSA-Express3) 10 Gigabit Ethernet with double the port density and improved performance
  • Up to 336 FICON channels
  • Large page support (1 megabyte pages)
  • Energy efficiency displays on System Activity Display (SAD) screens
  • Just-in-time deployment of capacity for faster activation without dependency or referral to IBM
  • Store System Information (STSI) change to support billing methodologies
  • Temporary offering Capacity for Planned Event (CPE) available to manage system migrations, data center moves, maintenance activities, and similar situations
  • Support for the IBM Systems Director Active Energy Manager (AEM) for Linux on System z

Model Information

   Model     PUs        Memory       IB     I/O Cages   CHPIDs
   -----   -------   -------------   ---    ---------   ------
    E12    1 to 12   16 to  352 GB    16     1 to 3       960
    E26    1 to 26   16 to  752 GB    32     1 to 3      1024
    E40    1 to 40   16 to 1136 GB    40     1 to 3      1024
    E56    1 to 56   16 to 1520 GB    48     1 to 3      1024
    E64    1 to 64   16 to 1520 GB    48     1 to 3      1024
 

Note: Memory reserved for the fixed HSA is in addition to the purchased entitlement.

Note: The addition of the third and fourth books requires a reduction in the number of fanout cards plugged, to increase cooling around the MCM.

Note: Each LCSS supports up to 256 CHPIDs.

The Performance Advantage

IBM's Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For z10 EC, the z/Architecture processor capacity indicator is defined with a (7XX) notation, where XX is the number of installed CPs.

In addition to the general information provided for z/OS 1.8, the LSPR also contains performance relationships for z/VM and Linux operating environments.

Based on using an LSPR mixed workload, the performance of the z10 EC (2097) 701 is expected to be:

  • up to 1.62 times that of the z9 EC (2094) 701, and
  • up to 1.50 times the z9 EC for non-uniprocessor environments, assuming equal numbers of processors
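
The quoted ratios translate into simple capacity arithmetic. The sketch below is purely illustrative: the baseline ITR figure is invented for the example and the helper function is not part of any IBM tooling.

```python
# Illustrative arithmetic for the LSPR capacity ratios quoted above.
# Only the ratios (1.62x uniprocessor, 1.50x equal-n-way) come from
# the text; the baseline ITR of 100 units is a made-up example value.

UNI_RATIO = 1.62   # z10 EC 701 vs. z9 EC 701, LSPR mixed workload
MP_RATIO = 1.50    # non-uniprocessor, equal processor counts

def projected_capacity(z9_itr: float, n_cps: int) -> float:
    """Project a z10 EC capacity figure from a z9 EC ITR figure."""
    ratio = UNI_RATIO if n_cps == 1 else MP_RATIO
    return z9_itr * ratio

uni = projected_capacity(100.0, 1)   # uniprocessor projection
mp = projected_capacity(100.0, 8)    # 8-way projection
```

As the LSPR disclaimer below notes, such projections are upper-bound expectations, not guarantees for any particular workload.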

Moving from a System z9 partition to an equivalently sized System z10 partition, a z/VM workload will experience an ITR ratio that is somewhat related to the workload's instruction mix, MP factor, and level of storage overcommitment. Workloads with higher levels of storage overcommitment or higher MP factors are likely to experience lower than average z10 EC to z9 ITR scaling ratios. The range of likely ITR ratios is wider than it has been for previous processor migrations.

The LSPR contains the Internal Throughput Rate Ratios (ITRRs) for the z10 EC and the previous-generation zSeries processor families based upon measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, and the workload processed. Therefore no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated. For more detailed performance information, consult the Large Systems Performance Reference (LSPR) available at:

http://www.ibm.com/servers/eserver/zseries/lspr/

HiperDispatch

A z10 EC exclusive, HiperDispatch represents a cooperative effort between the z/OS operating system and the z10 EC hardware and is intended to provide improved efficiencies in both the hardware and the software in the following ways:

  • Work may be dispatched across fewer logical processors therefore reducing the multi-processor (MP) effects and lowering the interference among multiple partitions
  • Specific z/OS tasks may be dispatched to a small subset of logical processors which Processor Resource/Systems Manager (PR/SM)(TM) will tie to the same physical processors, thus improving the hardware cache re-use and locality of reference characteristics such as reducing the rate of cross-book communication.

Refer to the Software requirements section.

Networking

Response time improvements with OSA-Express3 optimized latency mode

Optimized Latency Mode (OLM) can help improve performance for z/OS workloads with demanding low latency requirements. This includes interactive workloads such as SAP using DB2 Connect. OLM can help improve performance for applications that have a critical requirement to minimize response times for inbound and outbound data when servicing remote clients.

This enhancement applies exclusively to OSA-Express3 QDIO mode (CHPID type OSD).

For prerequisites, refer to the Software requirements section.

HiperSockets network traffic analyzer (HS NTA):

Problem isolation and resolution can now be made simpler by an enhancement to the HiperSockets architecture. This function is designed to allow tracing of Layer 2 and Layer 3 HiperSockets network traffic.

HS NTA allows Linux on System z to control the trace for the internal virtual LAN to capture the records into host memory and storage (file systems) using Linux on System z tools to format, edit, and process the trace records for analysis by system programmers and network administrators.

Configuration flexibility with four-port exploitation for OSA-ICC

Integrated Console Controllers (ICC) allow the System z10 to help reduce cost and complexity by eliminating the requirement for external console controllers.

You can now exploit the four ports on an OSA-Express3 1000BASE-T Ethernet feature (#3367) on the z10 EC and z10 BC, or the two ports on an OSA-Express3-2P 1000BASE-T on a z10 BC (#3369), when defining the feature as an Integrated Console Controller (OSA-ICC) for TN3270E, local non-SNA DFT, 3270 emulation, and 328x printer emulation. There are two PCI-E adapters per feature and two channel path identifiers (CHPIDs) to be assigned. Each PCI-E adapter has two ports, but prior to this enhancement only one of the two PCI-E adapter ports was available for use when defined as CHPID type OSC. Removal of this restriction can improve configuration flexibility by allowing two local LAN segments to be connected to each CHPID.

OSA-ICC continues to support 120 sessions per CHPID.

Four port exploitation for OSA-Express3 1000BASE-T (feature number 3367) and two port exploitation for OSA-Express3-2P 1000BASE-T (feature number 3369) for OSA-ICC will be available in the first quarter of 2010.

For prerequisites, refer to the Software requirements section.

Hardware decimal floating point

Focused performance boost - hardware decimal floating point

Recognizing that speed and precision in numerical computing are essential, with the introduction of z10 EC each core on the PU has its own hardware decimal floating point unit, which is designed to improve performance of decimal floating point over that provided by System z9.

Decimal calculations are often used in financial applications and those done using other floating point facilities have typically been performed by software through the use of libraries. With a hardware decimal floating point unit, some of these calculations may be done directly and accelerated.

Software support for hardware decimal floating point on z10 EC is provided in several programming languages. Support is provided in Assembler Language in Release 4 or 5 of High Level Assembler. Decimal floating point data and instructions are also supported in Enterprise PL/1 V3.7 and resulting programs can be debugged by Debug Tool V8.1. Java applications, which make use of the BigDecimal Class Library, will automatically begin using the hardware decimal floating point instructions when running on a z10 EC. Support for decimal floating point data types is also provided in SQL as provided in DB2 Version 9. Refer to the Software requirements section.
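
As an informal illustration of why dedicated decimal arithmetic matters for financial work, the sketch below uses Python's software decimal module as a stand-in (Python is not among the languages named above): binary floating point cannot represent common currency values exactly, while decimal arithmetic can.

```python
# Why decimal arithmetic matters for money: binary floating point
# cannot represent 0.10 or 0.20 exactly, so their sum drifts.
from decimal import Decimal

binary_sum = 0.10 + 0.20                           # not exactly 0.30
decimal_sum = Decimal("0.10") + Decimal("0.20")    # exactly 0.30
```

On z10 EC, per the text, such decimal operations can run directly on the hardware decimal floating point unit instead of in software libraries.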

Large page support for 1 megabyte pages

This change to the z/Architecture on z10 EC is designed to allow memory to be extended to support large (1 megabyte (MB)) pages. When large pages are used, in addition to the existing 4 KB page size, they are expected to reduce memory management overhead for exploiting applications.

Large page support is primarily of benefit for long running applications that are memory-access-intensive. Large page is not recommended for general use. Short lived processes with small working sets are normally not good candidates for large pages.

Large page support is exclusive to z10 EC and to z/OS. Refer to the Software requirements section.
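
The benefit of 1 MB pages can be sketched with back-of-envelope arithmetic: fewer, larger pages mean fewer address-translation entries to cover the same region. The figures below are generic illustration, not z/OS-specific measurements.

```python
# Pages needed to map a 1 GB region with 4 KB versus 1 MB pages.
# Each page requires its own translation entry, so a 256x reduction
# in page count is what drives the reduced memory management overhead.

KB = 1024
MB = 1024 * KB
GB = 1024 * MB

def pages_needed(region_bytes: int, page_bytes: int) -> int:
    return region_bytes // page_bytes

small = pages_needed(1 * GB, 4 * KB)   # 4 KB pages for 1 GB
large = pages_needed(1 * GB, 1 * MB)   # 1 MB pages for 1 GB
```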

Flexible memory

Flexible memory was first introduced on the z9 EC as part of the design changes and offerings to support enhanced book availability. Flexible memory provides the additional resources to maintain a constant level of memory when replacing a book. On z10 EC, the additional resources required for the flexible memory configurations are provided through the purchase of preplanned memory features (#1996) along with the purchase of your memory entitlement. In most cases, this implementation provides a lower-cost solution compared to z9 EC. Flexible memory configurations are available on Models E26, E40, E56, and E64 only and range from 32 GB to 1136 GB, model dependent. Contact your IBM representative to help you determine the appropriate configuration.

Cryptographic support for security-rich transactions

CP Assist for Cryptographic Function (CPACF): CPACF supports clear-key encryption. All CPACF functions can be invoked by problem state instructions defined by an extension of System z architecture. The function is activated using a no-charge enablement feature (#3863) and offers the following on every CPACF that is shared between two Processor Units (PUs) and designated as CPs and/or Integrated Facility for Linux (IFL):

  • Data Encryption Standard (DES)
  • Triple Data Encryption Standard (TDES)
  • Advanced Encryption Standard (AES) for 128-bit keys
  • Secure Hash Algorithm, SHA-1, SHA-224 and SHA-256
  • Pseudo Random Number Generation (PRNG)

Enhancements to CP Assist for Cryptographic Function (CPACF):

CPACF has been enhanced to include support of the following on CPs and IFLs:

  • Advanced Encryption Standard (AES) for 192-bit keys and 256-bit keys
  • SHA-384 and SHA-512 bit for message digest

SHA-1, SHA-256, and SHA-512 are shipped enabled and do not require the enablement feature.
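
The stronger hash algorithms can be exercised from any software stack; the sketch below uses Python's standard hashlib purely to show the digest sizes involved. On System z these operations would typically be driven through ICSF and accelerated by CPACF rather than computed in application code.

```python
# SHA-384 and SHA-512, the stronger message digests CPACF adds.
# hashlib here is only a software illustration of the algorithms.
import hashlib

message = b"sample transaction record"
digest_384 = hashlib.sha384(message).hexdigest()   # 48-byte digest
digest_512 = hashlib.sha512(message).hexdigest()   # 64-byte digest
```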

Support for CPACF is also available using the Integrated Cryptographic Service Facility (ICSF). ICSF is a component of z/OS, and is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express2, to balance the workload and help address the bandwidth requirements of your applications.

The enhancements to CPACF are exclusive to the System z10 EC and supported by z/OS, z/VM, z/VSE, and Linux on System z. Refer to the Software requirements section.

Configurable Crypto Express2: The Crypto Express2 feature has two PCI-X adapters. Each of the PCI-X adapters can be defined as either a Coprocessor or an Accelerator.

Crypto Express2 Coprocessor - for secure-key encrypted transactions (default) is:

  • Designed to support security-rich cryptographic functions, use of secure-encrypted-key values, and User Defined Extensions (UDX)
  • Designed for Federal Information Processing Standard (FIPS) 140-2 Level 4 certification

Crypto Express2 Accelerator - for Secure Sockets Layer (SSL) acceleration:

  • Is designed to support clear-key RSA operations
  • Offloads compute-intensive RSA public-key and private-key cryptographic operations employed in the SSL protocol

Crypto Express2 features can be carried forward on an upgrade to the new System z10 EC, so users may continue to take advantage of the SSL performance and the configuration capability.

The configurable Crypto Express2 feature is supported by z/OS, z/VM, z/VSE, and Linux on System z. z/VSE offers support for clear-key SSL transactions only. Current versions of z/OS, z/VM, and Linux on System z offer support for both clear-key and secure-key operations.

Refer to the Software requirements section and also the Special features section of the Sales manual on the Web for further information.

http://w3-3.ibm.com/sales/ssi/OIAccess.wss

Additional cryptographic functions and features

Key management for remote loading of ATM and Point of Sale (POS) keys

The elimination of manual key entry is designed to reduce downtime due to key entry errors, service calls, and key management costs.

Improved key exchange with non-CCA cryptographic systems

Features added to IBM Common Cryptographic Architecture (CCA) are designed to enhance the ability to exchange keys between CCA systems, and systems that do not use control vectors by allowing the CCA system owner to define permitted types of key import and export while preventing uncontrolled key exchange that can open the system to an increased threat of attack.

These are supported by z/OS and by z/VM for guest exploitation. Refer to the Software requirements section.

Support for ISO 16609 CBC Mode T-DES Message Authentication (MAC) requirements

ISO 16609 CBC Mode T-DES MAC is accessible through ICSF function calls made in the PCI-X Cryptographic Adapter segment 3 Common Cryptographic Architecture (CCA) code.

This is supported by z/OS and by z/VM for guest exploitation. Refer to the Software requirements section.

Introducing support for RSA keys up to 4096 bits

The RSA services in the CCA API are extended to support RSA keys with modulus lengths up to 4096 bits. The services affected include key generation, RSA-based key management, digital signatures, and other functions related to these.

Refer to the ICSF Application Programmers Guide, SA22-7522, for additional details.
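
For orientation, the RSA services being extended to 4096-bit moduli rest on the same modular arithmetic as this deliberately tiny textbook example. The primes and exponent below are illustrative only and far too small for any real use.

```python
# Toy RSA with tiny primes -- real CCA keys use moduli up to 4096 bits.
# Never use small keys like this in practice.
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)     # Euler's totient of n
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: modular inverse of e

plaintext = 65
ciphertext = pow(plaintext, e, n)   # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)   # decrypt: c^d mod n
```

The hardware's job is to make these modular exponentiations fast at real key sizes, where n is thousands of bits long.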

New crypto availability enhancement

Dynamically add crypto to a logical partition

Users can preplan adding Crypto Express2 features to a logical partition (LP) by using the Crypto page in the image profile to define the Cryptographic Candidate List, Cryptographic Online List, and Usage and Control Domain Indexes in advance of crypto hardware installation.

With the change to dynamically add crypto to a logical partition, changes to image profiles, to support Crypto Express2 features, are available without outage to the logical partition. Users can also dynamically delete or move Crypto Express2 features. Preplanning is no longer required.

This enhancement is supported by z/OS, z/VM for guest exploitation, and Linux on System z. Refer to the Software requirements section.

Continued support for TKE workstation and Smart Card Reader

TKE 5.2 workstation to enhance security and convenience

The Trusted Key Entry (TKE) workstation (#0839) and the TKE 5.2 level of Licensed Internal Code (#0857) are optional features on the System z10 EC. The TKE 5.2 Licensed Internal Code (LIC) is loaded on the TKE workstation prior to shipment. The TKE workstation offers security-rich local and remote key management, providing authorized persons a method of operational and master key entry, identification, exchange, separation, and update. The TKE workstation supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 or 100 Mbps. Up to three TKE workstations can be ordered.

The TKE workstation, feature #0839, is available on the System z10 EC, z9 EC, z9 BC, z990, and z890.

Refer also to the Special features section of the Sales manual on the Web for further information.

http://w3-3.ibm.com/sales/ssi/OIAccess.wss

Smart Card Reader: Support for an optional Smart Card Reader attached to the TKE 5.2 workstation allows for the use of smart cards that contain an embedded microprocessor and associated memory for data storage. Access to and the use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).

TKE 5.2 Licensed Internal Code (LIC) has added the capability to store key parts on DVD-RAMs and continues to support the ability to store key parts on paper, or optionally on a smart card. TKE 5.2 LIC has limited the use of floppy diskettes to read-only. The TKE 5.2 LIC can remotely control host cryptographic coprocessors using a password-protected authority signature key pair either in a binary file or on a smart card.

The optional TKE features are:

  • TKE 5.2 LIC (#0857) and TKE workstation (#0839)
  • TKE Smart Card Reader (#0887)
  • TKE additional smart cards (#0888)

The Smart Card Reader, which can be attached to a TKE workstation with the 5.2 level of LIC, is available on the System z10 EC, z9 EC, z9 BC, z990, and z890.

System z10 EC cryptographic migration: Clients using a User Defined Extension (UDX) of the Common Cryptographic Architecture should contact their UDX provider for an application update before ordering a new System z10 EC machine, or planning to migrate or activate a UDX application to firmware driver 73 level.

  • The Crypto Express2 feature is supported on the System z10 EC and can be carried forward on an upgrade to the System z10 EC.
  • You must use TKE 5.2 workstations to control the System z10 EC.
  • TKE 5.0 and 5.1 workstations (#0839 and #0859) may be used to control z9 EC, z9 BC, z890, and z990 servers.

FICON and FCP for connectivity to disk, tape, and printers

Extended distance FICON - improved performance at extended distance: An enhancement to the industry standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for "persistent" Information Unit (IU) pacing. Control units that exploit the enhancement to the architecture can increase the pacing count (the number of IUs allowed to be in flight from channel to control unit). Extended distance FICON also allows the channel to "remember" the last pacing update for use on subsequent operations to help avoid degradation of performance at the start of each new operation.

Improved IU pacing can help to optimize the utilization of the link, for example helping keep a 4 Gbps link fully utilized at 50 km, and allows channel extenders to work at any distance, with performance results similar to those experienced when using emulation.

The requirements for channel extension equipment are simplified with the increased number of commands in flight. This may benefit z/OS Global Mirror (Extended Remote Copy - XRC) applications as the channel extension kit is no longer required to simulate specific channel commands. Simplifying the channel extension requirements may help reduce the total cost of ownership of end-to-end solutions.

Extended distance FICON is transparent to operating systems and applies to all the FICON Express2 and FICON Express4 features carrying native FICON traffic (CHPID type FC). For exploitation, the control unit must support the new IU pacing protocol. The channel will default to current pacing values when operating with control units that cannot exploit extended distance FICON.

Exploitation of extended distance FICON is supported by IBM System Storage DS8000 series Licensed Machine Code (LMC) level 5.3.1xx.xx (bundle version 63.1.xx.xx), or later.

Note: To support extended distance without performance degradation, the buffer credits in the FICON director must be set appropriately. The number of buffer credits required is dependent upon the link data rate (1 Gbps, 2 Gbps, or 4 Gbps), the maximum number of buffer credits supported by the FICON director or control unit, as well as application and workload characteristics. High bandwidth at extended distances is achievable only if enough buffer credits exist to support the link data rate.
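
One commonly cited rule of thumb (an assumption here, not taken from this document) is that a full-size 2 KB Fibre Channel frame spans roughly 2 km of fiber at 1 Gbps, and proportionally less at higher link rates. That gives a rough estimate of the buffer credits needed to keep a link full:

```python
# Rough buffer-credit estimate for extended-distance FICON links.
# Assumption (rule of thumb, not from the document): one full 2 KB
# frame occupies ~2 km of fiber per Gbps, so to keep the link busy
# you need about distance_km * rate_gbps / 2 frames in flight.
import math

def estimated_buffer_credits(distance_km: float, rate_gbps: float) -> int:
    return math.ceil(distance_km * rate_gbps / 2)

# The 4 Gbps at 50 km case mentioned in the text:
credits = estimated_buffer_credits(50, 4)
```

Actual requirements depend on frame sizes and workload characteristics, as the note above states; treat this only as a sizing starting point.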

FCP - increased performance for small block sizes: The Fibre Channel Protocol (FCP) Licensed Internal Code has been modified to help provide increased I/O operations per second for small block sizes. With FICON Express4, there may be up to 57,000 I/O operations per second (all reads, all writes, or a mix of reads and writes), an 80% increase compared to System z9. These results are achieved in a laboratory environment using one channel configured as CHPID type FCP with no other processing occurring and do not represent actual field measurements. A significant increase in I/O operations per second for small block sizes can also be expected with FICON Express2.
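
The 80% figure can be sanity-checked with simple arithmetic; the implied System z9 baseline is not stated in the text and is derived here only for illustration.

```python
# 57,000 IOPS described as an 80% increase over System z9 implies a
# z9 baseline of roughly 31,700 IOPS (derived, not a quoted figure).
z10_iops = 57_000
increase = 0.80
z9_baseline = z10_iops / (1 + increase)
```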

This FCP performance improvement is transparent to operating systems and applies to all the FICON Express4 and FICON Express2 features when configured as CHPID type FCP, communicating with SCSI devices.

SCSI IPL now a base function: The SCSI Initial Program Load (IPL) enablement feature (#9904), first introduced on the z990 in October 2003, is no longer required. The function is now delivered as a part of the server Licensed Internal Code. SCSI IPL allows an IPL of an operating system from an FCP-attached SCSI disk.

Getting ready for an 8 Gbps SAN infrastructure with FICON Express8

With the introduction of FICON Express8 on the System z10 EC and System z10 BC family of servers, you now have additional growth opportunities for your storage area network (SAN). FICON Express8 supports a link data rate of 8 gigabits per second (Gbps) and auto-negotiation to 2 or 4 Gbps for synergy with existing switches, directors, and storage devices. With support for native FICON, High Performance FICON for System z (zHPF), and Fibre Channel Protocol (FCP), the System z10 servers enable you to position your SAN for even higher performance - helping you to prepare for an end-to-end 8 Gbps infrastructure to meet the increased bandwidth demands of your applications.

High performance FICON for System z - improving upon the native FICON protocol: The FICON Express8 features support High Performance FICON for System z (zHPF) which was introduced in October 2008 on the System z10 servers. zHPF provides optimizations for online transaction processing (OLTP) workloads. zHPF is an extension to the FICON architecture and is designed to improve the execution of small block I/O requests. zHPF streamlines the FICON architecture and reduces the overhead on the channel processors, control unit ports, switch ports, and links by improving the way channel programs are written and processed. zHPF-capable channels and devices support both native FICON and zHPF protocols simultaneously (CHPID type FC).

High Performance FICON for System z now supports multitrack operations:

zHPF support of multitrack operations can help increase system performance and improve FICON channel efficiency when attached to the IBM System Storage DS8000 series. zFS, HFS, PDSE, and other applications that use large data transfers with Media Manager are expected to benefit.

In laboratory measurements, multitrack operations (e.g. reading 16x4k bytes/IO) converted to the zHPF protocol on a FICON Express8 channel, achieved a maximum of up to 40% more MB/sec than multitrack operations using the native FICON protocol.

zHPF and support for multitrack operations are exclusive to the System z10 servers and apply to all FICON Express8, FICON Express4, and FICON Express2 features (CHPID type FC). Exploitation requires support in both z/OS and the control unit. Refer to the Software requirements section.

zHPF with multitrack operations is available in the DS8000 series Licensed Machine Code (LMC) level 5.4.3.xx (bundle version 64.3.xx.xx) or later with the purchase of DS8000 series feature (#7092).

Previously, zHPF was limited to read or write sequential I/Os transferring less than a single track size (for example, 12 4k byte records, or 12x4k bytes/IO).

FICON Express8 performance improvements for zHPF and native FICON on the System z10 servers: A FICON Express8 channel exploiting the High Performance FICON for System z (zHPF) protocol, when operating at 8 Gbps, is designed to achieve a maximum throughput of up to 800 MBps when processing large sequential read I/O operations and up to 730 MBps when processing large sequential write I/O operations. This represents an 80 to 100% increase in performance compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server. For those large sequential read or write I/O operations that use the native FICON protocol, the FICON Express8 channel, when operating at 8 Gbps, is designed to achieve up to 510 MBps. This represents a 45 to 55% increase compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server.

The FICON Express8 channel, when operating at 8 Gbps, is also designed to achieve a maximum of 52,000 IO/sec for small data transfer I/O operations that can exploit the zHPF protocol. This represents approximately a 70% increase compared to a FICON Express4 channel operating at 4 Gbps and executing zHPF I/O operations on a System z10 server. For those small data transfer I/O operations that use the native FICON protocol, the FICON Express8 channel, when operating at 8 Gbps, is designed to achieve a maximum of 20,000 IO/sec, which represents approximately a 40% increase compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server. The FICON Express8 features support both the native FICON protocol and the zHPF protocol concurrently in the server Licensed Internal Code.

These measurements for FICON (CHPID type FC) using both the native FICON and zHPF protocols are examples of the maximum MB/sec and IO/sec that can be achieved in a laboratory environment using one FICON Express8 channel on a System z10 server with z/OS V1.10 and no other processing occurring and do not represent actual field measurements. Details are available upon request.

FICON Express8 performance at 2 or 4 Gbps link data rate - it may be time to migrate to a FICON Express8 channel

Performance benefits may be realized by migrating to a FICON Express8 channel and operating at a link data rate of 2 or 4 Gbps. If you migrate now, you may be able to realize performance benefits when your SAN is not yet 8 Gbps-ready.

In laboratory measurements using the zHPF protocol with small data transfer I/O operations, FICON Express8 operating at 2 Gbps achieved a maximum of 47,000 IO/sec, compared to the maximum of 52,000 IO/sec achieved when operating at 4 Gbps or 8 Gbps. This represents approximately a 50% increase compared to a FICON Express4 channel operating at 2 Gbps on a System z10 server.

In laboratory measurements using the native FICON protocol with small data transfer I/O operations, FICON Express8 operating at 2 Gbps or 4 Gbps achieved a maximum of 20,000 IO/sec, which represents approximately a 40% increase compared to a FICON Express4 channel operating at 2 Gbps or 4 Gbps on a System z10 server.

In laboratory measurements using FCP with small data transfer I/O operations, FICON Express8 operating at 4 Gbps achieved a maximum of 84,000 IO/sec, which represents approximately a 40% increase compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server.

FICON Express8 performance improvements for FCP on the System z10 servers:

The FICON Express8 FCP channel, when operating at 8 Gbps, is designed to achieve a maximum throughput of up to 800 MBps when processing large sequential read I/O operations and up to 730 MBps when processing large sequential write I/O operations. This represents an 80 to 100% increase compared to a FICON Express4 FCP channel operating at 4 Gbps on System z10.

The FICON Express8 FCP channel is designed to achieve a maximum of 84,000 IO/sec when processing read or write small data transfer I/O operations. This represents approximately a 40% increase compared to a FICON Express4 FCP channel when operating at 4 Gbps on a System z10 server.

These measurements for FCP (CHPID type FCP supporting attachment to SCSI devices) are examples of the maximum MB/sec and IO/sec that can be achieved in a laboratory environment, using one FICON Express8 channel on a System z10 server with z/VM V5.4 or Linux on System z distribution Novell SUSE SLES 10 with no other processing occurring, and do not represent actual field measurements. Details are available upon request.

FICON Express8 for channel consolidation:

FICON Express8 may also allow for the consolidation of existing FICON Express, FICON Express2, or FICON Express4 channels onto fewer FICON Express8 channels while maintaining and enhancing performance.

To request assistance for ESCON or FICON channel consolidation analysis using the zCP3000 tool, contact your IBM representative. They will assist you with a capacity planning study to estimate the number of FICON channels that can be consolidated onto FICON Express8. They can also assist you with ESCON to FICON channel migration.

Resource Measurement Facility (RMF): RMF has been enhanced to support FICON Express8. RMF is an IBM product designed to simplify management of single and multiple system workloads. RMF gathers data and creates reports that help your system programmers and administrators optimally tune your systems, react quickly to system delays, and diagnose performance problems. RMF may assist you in understanding your capacity requirements. RMF output is used by the zCP3000 tool to assist with your channel consolidation potential.

FICON end-to-end data integrity checking: FICON Express8 continues the unparalleled heritage of data protection with its native FICON, zHPF, and channel-to-channel (CTC) intermediate data checking and end-to-end data integrity checking for all devices (such as disk and tape), which is transparent to operating systems, middleware, and applications. With end-to-end data integrity checking, Cyclical Redundancy Check (CRC) is generated at the end points for quality of service. This applies to CHPID type FC.

Fibre Channel Protocol (FCP) transmission data checking: FICON Express8 continues the transmission data checking for an FCP channel (communicating with SCSI devices) with its full-fabric capability. FCP performs intermediate data checking for each leg of the transmission. This applies to CHPID type FCP.

FICON Express8 10KM LX and SX: The System z10 servers continue to support your current fiber optic cabling environments with the introduction of FICON Express8.

  1. FICON Express8 10KM LX (#3325), with four channels per feature, is designed to support unrepeated distances up to 10 kilometers (6.2 miles) over 9 micron single mode fiber optic cabling without performance degradation. To avoid performance degradation at extended distance, FICON switches or directors (for buffer credit provision) or Dense Wavelength Division Multiplexers (for buffer credit simulation) may be required.
  2. FICON Express8 SX (#3326), with four channels per feature, is designed to support 50 or 62.5 micron multimode fiber optic cabling.

For details regarding the unrepeated distances for FICON Express8 10KM LX and FICON Express8 SX, refer to System z Planning for Fiber Optic Links (GA23-0367), available at System z10 planned availability in the Library section of Resource Link.

www.ibm.com/servers/resourcelink

All channels on a single FICON Express8 feature are of the same type - 10KM LX or SX.

Both features support small form factor pluggable optics (SFPs) with LC Duplex connectors. The optics continue to permit each channel to be individually serviced in the event of a fiber optic module failure.

The FICON Express8 features, designed for connectivity to servers, switches, directors, disks, tapes, and printers, can be defined as:

  • Native FICON, zHPF, and FICON channel-to-channel (CTC) (CHPID type FC)
  • Fibre Channel Protocol (CHPID type FCP for communication with SCSI devices).

The FICON Express8 features are exclusive to z10 EC and z10 BC servers. Refer to the Software requirements section for operating system support for CHPID types FC and FCP.

Cleaning discipline for FICON Express8 fiber optic cabling

With the introduction of 8 Gbps link data rates, it is even more critical to ensure your fiber optic cabling infrastructure performs as expected. With proper fiber optic cleaning and maintenance, you can be assured that the "data gets through".

With 8 Gbps link data rates over multimode fiber optic cabling, link loss budgets and distances are reduced. Single mode fiber optic cabling is more "reflection sensitive". With high link data rates and single mode fiber optic cabling there is also less margin for error. The cabling is no longer scratch-tolerant and contaminants such as dust and oil can present a problem.

To keep the data flowing, proper handling of fiber trunks and jumper cables is critical as well as thorough cleaning of fiber optic connectors. Work with your data center personnel or IBM personnel to ensure you have fiber optic cleaning procedures in place.

Information regarding related Global Technology Services offerings is available at the following Web site:

http://www-935.ibm.com/services/us/index.wss/offering/its/a1027996

The Optimized Airflow Assessment for Cabling reviews existing data center cabling and prioritizes tactical plans across the data center to help increase system availability, adapt to changing technologies and transmission protocols and reduce energy-related cooling costs through optimized airflow.

http://www-935.ibm.com/services/us/index.wss/offering/its/a1028860

The Facilities Cabling Services - fiber transport system helps lower the operating cost of the data center, supports the highest level of availability for an IT infrastructure and allows the latest technologies and transmission protocols to be transported, while reducing clogs of unstructured cabling under floor tiles.

If you need further support or assistance on this matter please send an e-mail to cabling@us.ibm.com with your request.

FICON Express4 - 1, 2, or 4 Gbps

  • Offers two unrepeated distance options (4 kilometer and 10 kilometer) when using single-mode fiber optic cabling
  • Supports a 4 Gbps link data rate with auto-negotiation to 1 or 2 Gbps for synergy with existing switches, directors, and storage devices

The FICON Express4 features have two modes of operation designed for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) traffic (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments

  2. Fibre Channel Protocol traffic (CHPID type FCP) in the z/VM, z/VSE, and Linux on System z environments

There are three FICON Express4 features from which to choose:

  1. FICON Express4 10KM LX (#3321), with four channels per feature, is designed to support unrepeated distances up to 10 kilometers (6.2 miles) over single-mode fiber optic cabling. Use this feature when the unrepeated distance between devices is greater than 4 kilometers (km) or your link loss budget requirements exceed 2 dB end-to-end between devices.

  2. FICON Express4 SX (#3322), with four channels per feature, is designed to carry traffic over multimode fiber optic cabling. Refer to the Standards section for the supported unrepeated distances.

  3. FICON Express4 4KM LX (#3324), with four channels per feature, is designed to support unrepeated distances up to 4 kilometers (2.5 miles) over single-mode fiber optic cabling. This feature is designed to offer a cost-effective solution to satisfy the majority of your FICON/FCP single-mode fiber optic cabling distance requirements.

Note: The ANSI Fibre Channel Physical Interface (FC-PI-2) standard defines 10 kilometer (km) transceivers and 4 km transceivers when using 9 micron single-mode fiber optic cabling. IBM supports these FC-PI-2 variants.

IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).
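The interoperability rule above reduces to taking the shorter reach of the two transceiver variants on a link. A trivial sketch, using the FC-PI-2 variant reaches quoted in the text:

```python
# Sketch of the interoperability rule: when transceiver variants are mixed
# on a link, the unrepeated distance is limited by the shorter-reach
# variant. Reaches are the FC-PI-2 values quoted in the text.
REACH_KM = {"10KM LX": 10, "4KM LX": 4}

def max_unrepeated_km(end_a: str, end_b: str) -> int:
    """Maximum unrepeated distance for a link mixing two LX variants."""
    return min(REACH_KM[end_a], REACH_KM[end_b])

print(max_unrepeated_km("10KM LX", "4KM LX"))  # limited to 4 km
```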

The FICON Express4 features have Small Form Factor Pluggable (SFP) optics to permit each channel to be individually serviced in the event of a fiber optic module failure. The traffic on the other channels on the same feature can continue to flow if a channel requires servicing.

All channels on a single FICON Express4 feature are of the same type - 4KM LX, 10KM LX, or SX. You may carry your current FICON Express2 and FICON Express features (#3319, #3320, #2319, #2320) forward from z990 or z9 EC.

Refer to the Software requirements section for operating system support for CHPID types FC and FCP.

FICON Express2 and FICON Express: Your 4-port FICON Express2 features (1 or 2 Gbps link data rate) can be carried forward to z10 EC. If you have 2-port FICON Express features (1 Gbps link data rate) you can also carry them forward to z10 EC. FICON Express LX (#2319) can be defined as CHPID type FCV (FICON bridge) to allow communication with ESCON control units using the ESCON Director Model 5 with the bridge feature. Migration to native FICON is encouraged. The ESCON Director Model 5 was withdrawn from marketing December 31, 2004.

Introducing Fiber Quick Connect for FICON LX environments: Fiber Quick Connect (FQC), an optional feature on z10 EC, is now being offered for all FICON LX (single-mode fiber) channels, in addition to the current support for ESCON (62.5 micron multimode fiber) channels. FQC is designed to significantly reduce the amount of time required for on-site installation and setup of fiber optic cabling. FQC facilitates adds, moves, and changes of ESCON and FICON LX fiber optic cables in the data center, and may reduce fiber connection time by up to 80%.

FQC is for factory installation of Fiber Transport System (FTS) fiber harnesses for connection to channels in the I/O cage. FTS fiber harnesses enable connection to FTS direct-attach fiber trunk cables from IBM Global Technology Services.

FQC, coupled with FTS, is a solution designed to help minimize disruptions and to isolate fiber cabling activities away from the active system as much as possible.

IBM provides the direct-attach trunk cables, patch panels, and Central Patching Location (CPL) hardware, as well as the planning and installation required to complete the total structured connectivity solution. An ESCON example - four trunks, each with 72 fiber pairs, can displace up to 240 fiber optic jumper cables, the maximum quantity of ESCON channels in one I/O cage. This significantly reduces fiber optic jumper cable bulk.

At CPL panels you can select the connector to best meet your data center requirements. Small form factor connectors are available to help reduce the floor space required for patch panels.

CPL planning and layout is done prior to arrival of the server on-site using the default CHannel Path IDdentifier (CHPID) placement report, and documentation is provided showing the CHPID layout and how the direct-attach harnesses are plugged.

Note: FQC supports all of the ESCON channels and all of the FICON LX channels in all of the I/O cages of the server.

IBM Site and Facilities Services: IBM Site and Facilities Services has a comprehensive set of scalable solutions to address IBM cabling requirements, from product-level to enterprise-level for small, medium, and large enterprises.

  • IBM Facilities Cabling Services - fiber transport system
  • IBM IT Facilities Assessment, Design, and Construction Services - optimized airflow assessment for cabling

Planning and installation services for individual fiber optic cable connections are available. An assessment and planning for IBM Fiber Transport System (FTS) trunking components can also be performed.

These services are designed to be right-sized for your products or the end-to-end enterprise, and to take into consideration the requirements for all of the protocols and media types supported on the System z10 EC, System z9, and zSeries (e.g. ESCON, FICON, Coupling Links, OSA-Express) whether the focus is the data center, the Storage Area Network (SAN), the Local Area Network (LAN), or the end-to-end enterprise.

IBM Site and Facilities Services are designed to deliver convenient, packaged services to help reduce the complexity of planning, ordering, and installing fiber optic cables. The appropriate fiber cabling is selected based upon the product requirements and the installed fiber plant.

The services are packaged as follows:

  • Under IBM Facilities Cabling Services there is the option to provide IBM Fiber Transport System (FTS) trunking commodities (fiber optic trunk cables, fiber harnesses, panel-mount boxes) for connecting to the z10 EC, z9 EC, z9 BC, z990, and z890. IBM can reduce the cable clutter and cable bulk under the floor. An analysis of the channel configuration and any existing fiber optic cabling is performed to determine the required FTS trunking commodities. IBM can also help organize the entire enterprise. This option includes enterprise planning, new cables, fiber optic trunking commodities, installation, and documentation.
  • Under IBM IT Facilities Assessment, Design, and Construction Services there is the Optimized Airflow Assessment for Cabling option to provide you with a comprehensive review of your existing data center cabling infrastructure. This service provides an expert analysis of the overall cabling design required to help improve data center airflow for optimized cooling, and to facilitate operational efficiency through simplified change management.

Refer to the services section of Resource Link for further details. Access Resource Link at:

www.ibm.com/servers/resourcelink

HiperSockets - "Network in a box" HiperSockets Layer 2 support - for flexible and efficient data transfer for IP and non-IP workloads: Now, the HiperSockets internal networks on System z10 EC can support two transport modes: Layer 2 (Link Layer) as well as the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) Version 4 or Version 6 (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA). HiperSockets devices are now protocol-independent and Layer 3 independent. Each HiperSockets device has its own Layer 2 Media Access Control (MAC) address, which is designed to allow the use of applications that depend on the existence of Layer 2 addresses such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.

Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same as they do a non-mainframe environment.

With support of the new Layer 2 interface by HiperSockets, packet forwarding decisions are now based upon Layer 2 information, instead of Layer 3 information. The HiperSockets device can perform automatic MAC address generation to allow uniqueness within and across logical partitions (LPARs) and servers. MAC addresses can also be locally administered. The use of Group MAC addresses for multicast is supported as well as broadcasts to all other Layer 2 devices on the same HiperSockets network. Datagrams are delivered only between HiperSockets devices that are using the same transport mode (Layer 2 with Layer 2 and Layer 3 with Layer 3). A Layer 2 device cannot communicate directly with a Layer 3 device in another LPAR.

A HiperSockets device can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), the Ethernet destination MAC address, or both. Filtering can help reduce the amount of inbound traffic being processed by the operating system, helping to reduce CPU utilization.
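The filtering behavior described above can be pictured with a toy model: a device registers its MAC address and, optionally, a set of VLAN IDs, and an inbound datagram is delivered only when both match. The names and structure here are illustrative, not the actual HiperSockets programming interface.

```python
# Toy model of inbound Layer 2 filtering: deliver a datagram only if the
# destination MAC matches and, when VLAN filtering is configured, the
# frame's VLAN ID is one the device registered. Illustrative only; this is
# not the actual HiperSockets interface.
from dataclasses import dataclass, field

@dataclass
class Layer2Device:
    mac: str                                     # auto-generated or locally administered
    vlan_ids: set = field(default_factory=set)   # empty set = no VLAN filtering

    def accepts(self, dest_mac, vlan_id=None):
        if dest_mac != self.mac:
            return False         # not addressed to this device
        if self.vlan_ids and vlan_id not in self.vlan_ids:
            return False         # tagged for a VLAN this device doesn't belong to
        return True

dev = Layer2Device(mac="02:00:00:00:00:01", vlan_ids={100, 200})
print(dev.accepts("02:00:00:00:00:01", 100))   # matching MAC and VLAN
print(dev.accepts("02:00:00:00:00:01", 300))   # wrong VLAN: filtered
```

Filtering in the device, before the operating system sees the frame, is what reduces the inbound traffic the stack must process.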

Analogous to the respective Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors or multicast routers. This is designed to enable the creation of high-performance and high-availability Link Layer switches between the internal HiperSockets network and an external Ethernet or to connect the HiperSockets Layer 2 networks of different servers.

HiperSockets Layer 2 support is exclusive to System z10 EC, supported by Linux on System z, and by z/VM for guest exploitation. Refer to the Software requirements section.

HiperSockets Multiple Write Facility for increased performance: HiperSockets performance has been enhanced to allow for the streaming of bulk data over a HiperSockets link between logical partitions (LPARs). The receiving LPAR can now process a much larger amount of data per I/O interrupt. This enhancement is transparent to the operating system in the receiving LPAR. HiperSockets Multiple Write Facility, with fewer I/O interrupts, is designed to reduce CPU utilization of the sending and receiving LPAR. HiperSockets Multiple Write Facility is supported in the z/OS environment. Refer to the Software requirements section.

Local Area Network (LAN) connectivity

Introducing OSA-Express3 10 GbE LR - Designed to deliver increased throughput and reduced latency: OSA-Express3 10 Gigabit Ethernet (GbE) has been designed to increase throughput for standard frames (1492 byte) and jumbo frames (8992 byte) to help satisfy the bandwidth requirements of your applications. This increase in performance (compared to OSA-Express2 10 GbE) has been achieved through an enhancement to the architecture that supports direct host memory access by using a data router, eliminating "store and forward" delays.

Double the port density: The OSA-Express3 10 GbE has been designed with two PCI adapters, each with one port. Doubling the port density on a single feature helps to reduce the number of I/O slots required for high-speed connectivity to the Local Area Network (LAN). Each port continues to be defined as CHPID type OSD, supporting the Queued Direct Input/Output (QDIO) architecture for high-speed TCP/IP communication.

10 GbE cabling and connector: The OSA-Express3 10 GbE feature continues to be Long Reach (LR) supporting the 9 micron single mode fiber optic cabling environment. The connector is new; it is now the small form factor, LC Duplex connector. Previously the SC Duplex connector was supported. The LC Duplex connector is common with FICON, ISC-3, and OSA-Express2 Gigabit Ethernet LX and SX.

OSA-Express3 10 GbE LR (feature #3370) is exclusive to z10 EC and supports CHPID type OSD. It is supported by z/OS, z/VM, z/VSE, z/TPF, and Linux on System z. Refer to the Software requirements section.

OSA-Express2: The OSA-Express2 family of LAN adapters includes:

  • OSA-Express2 Gigabit Ethernet Long wavelength (GbE LX) and Short wavelength (GbE SX) for fiber optic connectivity to the LAN using
    • QDIO (CHPID type OSD) for TCP/IP traffic when using Layer 3 and protocol-independent traffic when using Layer 2
    • OSA for NCP (CHPID type OSN) supporting LPAR-to-LPAR connectivity from operating systems that support Channel Data Link Control (CDLC) to IBM Communication Controller for Linux (CCL); supports Network Control Program (NCP) functions
  • OSA-Express2 1000BASE-T Ethernet for Category 5 (copper) connectivity to the LAN using
    • QDIO (CHPID type OSD)
    • Non-QDIO (CHPID type OSE) for SNA/APPN/HPR and/or TCP/IP traffic
    • OSA-Integrated Console Controller (OSA-ICC) (CHPID type OSC) for emulation support for console session connections; TN3270E and local non-SNA DFT 3270 emulation
    • OSA for NCP (CHPID type OSN)
OSA-Express2 GbE (LX - #3364, SX - #3365) supports a link data rate of 1 Gigabit per second (Gbps) in each direction over 9 micron single-mode fiber (LX) or 50 or 62.5 micron multimode fiber (SX) with an LC Duplex connector.

OSA-Express2 1000BASE-T Ethernet (#3366) supports a link data rate of 10, 100, or 1000 Megabits per second (Mbps) auto-negotiated (target device must be set to auto-negotiate) and Category 5 Unshielded Twisted Pair (UTP) cabling with an RJ-45 connector.

Refer to the Software requirements section for operating system support for CHPID types OSC, OSD, OSE, and OSN. Refer also to the Standards section.

Functions supported by OSA-Express3 and OSA-Express2

  • Queued Direct Input/Output (QDIO) - uses memory queues and a signaling protocol to directly exchange data between the OSA microprocessor and the network software for high-speed communication.
    • QDIO Layer 2 (Link layer) - for IP (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA) workloads. Using this mode the Open Systems Adapter (OSA) is protocol-independent and Layer-3 independent. Packet forwarding decisions are based upon the Medium Access Control (MAC) address.
    • QDIO Layer 3 (Network or IP layer) - for IP workloads. Packet forwarding decisions are based upon the IP address. All guests share OSA's MAC address.
  • Jumbo frames in QDIO mode (8992 byte frame size) when operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber).
  • 640 TCP/IP stacks per CHPID - for hosting more images.
  • Large send for IPv4 packets - for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack.
  • Concurrent LIC update - to help minimize the disruption of network traffic during an update; when properly configured, designed to avoid a configuration off or on (applies to CHPID types OSD and OSN).
  • Checksum offload for IPv4 inbound and outbound packets - for calculating and validating the TCP/UDP and IP header checksums, reducing CPU cycles consumption (OSA performs the checksum calculations). Checksums are used to verify the contents of files when transmitted over a network.
  • Multiple Image Facility (MIF) and spanned channels - for sharing OSA among logical channel subsystems.
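The checksum that OSA offloads for IPv4, TCP, and UDP is the standard Internet checksum (RFC 1071): a one's-complement sum of 16-bit words with the carry folded back in. A minimal sketch, using the commonly cited worked example of an IPv4 header:

```python
# The checksum OSA offloads is the standard Internet checksum (RFC 1071):
# one's-complement sum of 16-bit words, with carries folded back in, then
# complemented.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold carry back in
    return ~total & 0xFFFF

# Well-known worked example: an IPv4 header with its checksum field zeroed.
header = bytes.fromhex("4500 0073 0000 4000 4011 0000 c0a8 0001 c0a8 00c7")
print(hex(internet_checksum(header)))  # -> 0xb861
```

Validation is the same computation over the received header: a header carrying a correct checksum sums to zero, which is the check the adapter performs on inbound packets.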

InfiniBand coupling links for Parallel Sysplex: The IBM System z10 EC introduces InfiniBand coupling link technology, designed to provide increased bandwidth at greater distances. At introduction, InfiniBand coupling links complement and do not replace Integrated Cluster Bus-4 (ICB-4) and InterSystem Channel-3 (ISC-3) available on z10 EC, z9 EC, z9 BC, z990, and z890 servers.

The IBM System z10 EC will support a 12x (12 lanes of fiber in each direction) InfiniBand-Double Data Rate (IB-DDR) coupling link which is designed to support a total interface link data rate of 6 gigabytes per second (GBps) in each direction. The maximum distance for this point-to-point link over fiber optic cabling is 150 meters (492 feet). This new InfiniBand coupling link provides improved performance over the current ISC-3 coupling link in data centers where servers are less than 150 meters apart.

A 12x InfiniBand-Single Data Rate (IB-SDR) coupling link is available on System z9 EC and System z9 BC servers configured as Internal Coupling Facilities (ICFs) only. This coupling link is designed to support a total interface link data rate of 3 gigabytes per second (GBps) in each direction. This new InfiniBand coupling link provides improved performance over the current ISC-3 coupling link in data centers where systems are less than 150 meters apart. When a System z10 EC server is connected to a System z9 server using point-to-point InfiniBand cabling, the link auto-negotiates to the highest common data rate of 3 GBps.

Other advantages of Parallel Sysplex using InfiniBand (PSIFB):

  • InfiniBand coupling links also provide a new ability to define up to 16 CHPIDs on an HCA2-O fanout, allowing physical coupling links to be shared by multiple sysplexes. This also provides additional subchannels for Coupling Facility communication, improving scalability, and reducing contention in heavily utilized system configurations. It also allows for one CHPID to be directed to one CF, and another CHPID directed to another CF on the same target server, using the same port.
  • Like other coupling links, external InfiniBand coupling links are also valid to pass time synchronization signals for Server Time Protocol (STP). Therefore the same coupling links can be used to exchange timekeeping information and Coupling Facility messages in a Parallel Sysplex.
  • The IBM System z10 EC also takes advantage of InfiniBand as a higher bandwidth replacement for the Self-Timed Interconnect (STI) I/O interface features found in prior System z servers.

The IBM System z10 EC will support up to 32 PSIFB links, as compared to 16 PSIFB links on System z9 servers. For either System z10 EC or System z9 EC, the combined total of PSIFB and ICB-4 links must not exceed 32.

InfiniBand coupling links are CHPID type CIB. InfiniBand uses OM3 cables (50 micron multimode fiber rated at 2000 MHz-km).

The IBM System z9 Enterprise Class (z9 EC) and System z9 Business Class (z9 BC) servers are the last servers to support participation in the same Parallel Sysplex with IBM eServer zSeries 900 (z900), IBM eServer zSeries 800 (z800), and older System/390 Parallel Enterprise Server systems.

NTP Client Support: The STP design has been enhanced to include support for a Simple Network Time Protocol (SNTP) client on the Support Element on z10 EC. With the STP feature enabled, you can initialize the time of an STP-only Coordinated Timing Network to the time provided by a Network Time Protocol (NTP) server, and maintain time accuracy. This allows an enterprise composed of heterogeneous platforms to track to the same time source.

NTP Client support is also available on System z9 EC or System z9 BC servers that are at Driver 67L with the latest MCLs and have the STP Feature 1021 installed. Additional information is available on the STP Web page:

http://www.ibm.com/systems/z/pso/stp.html

The Redpaper Server Time Protocol NTP Client Support (REDP-4329) is available on the Redbooks Web site: http://www.redbooks.ibm.com/

Implementation Services for Parallel Sysplex DB2 data sharing: To assist with the assessment, planning, implementation, testing, and backup and recovery of a System z DB2 data sharing environment, IBM Global Technology Services has available the IBM Implementation Services for Parallel Sysplex Middleware - DB2 data sharing.

This DB2 data sharing service is designed for clients who want to:

  1. Enhance the availability of data
  2. Enable applications to take full advantage of all servers' resources
  3. Share application system resources to meet business goals
  4. Manage multiple systems as a single system from a single point of control
  5. Respond to unpredicted growth by quickly adding computing power to match business requirements without disruption
  6. Build on the current investments in hardware, software, applications, and skills while potentially reducing computing costs

The offering consists of six selectable modules; each is a stand-alone module that can be individually acquired. The first module is an infrastructure assessment module, followed by five modules which address the following DB2 data sharing disciplines:

  1. DB2 data sharing planning
  2. DB2 data sharing implementation
  3. Adding data sharing members
  4. DB2 data sharing testing
  5. DB2 data sharing backup and recovery

For more information on these services, contact your IBM representative or refer to:

http://www.ibm.com/services/server

Capacity on Demand

Capacity on Demand - Temporary Capacity

Just-in-time deployment of System z10 EC Capacity on Demand (CoD) is a radical departure from previous System z and zSeries servers. This new architecture allows:

  • Up to four temporary records to be installed on the CEC and active at any given time
  • Up to 200 temporary records to be staged on the SE
  • Variability in the amount of resources that can be activated per record
  • The ability to control and update records independent of each other
  • Improved query functions to monitor the state of each record
  • The ability to add capabilities to individual records concurrently, eliminating the need for constant ordering of new temporary records for different user scenarios
  • Permanent LIC-CC upgrades to be performed while temporary resources are active

These capabilities allow you to access and manage processing capacity on a temporary basis, providing increased flexibility for on demand environments. The CoD offerings are built from a common Licensed Internal Code - Configuration Code (LIC-CC) record structure. These Temporary Entitlement Records (TERs) contain the information necessary to control which type of resource can be accessed and to what extent, how many times and for how long, and under what condition - test or real workload. Use of this information gives the different offerings their personality. Three temporary-capacity offerings will be available on February 26, 2008:

Capacity Back Up (CBU) - Temporary access to dormant processing units (PUs), intended to replace capacity lost within the enterprise due to a disaster. CP capacity or any and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF) can be added up to what the physical hardware model can contain for up to 10 days for a test activation or 90 days for a true disaster recovery. Each CBU record comes with a default of five test activations. Additional test activations may be ordered in groups of five, but a record cannot contain more than 15 test activations. Each CBU record provides the entitlement to these resources for a fixed period of time, after which the record is rendered useless. This time period can span from 1 to 5 years and is specified through ordering quantities of CBU years.
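The test-activation arithmetic above (a default of five, ordered in groups of five, capped at 15 per record) can be sketched as follows. This is an illustrative check, not IBM ordering code.

```python
# Illustrative sketch of the CBU test-activation rules stated above.
CBU_DEFAULT_TESTS = 5   # each record starts with five test activations
CBU_TEST_GROUP = 5      # additional tests are ordered in groups of five
CBU_MAX_TESTS = 15      # a record cannot hold more than 15

def cbu_test_activations(extra_groups: int) -> int:
    """Total test activations after ordering `extra_groups` groups of five."""
    total = CBU_DEFAULT_TESTS + extra_groups * CBU_TEST_GROUP
    if total > CBU_MAX_TESTS:
        raise ValueError("a CBU record cannot contain more than 15 test activations")
    return total

print(cbu_test_activations(0))  # 5  (the default)
print(cbu_test_activations(2))  # 15 (the maximum)
```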

Capacity for Planned Events (CPE) - Temporary access to dormant PUs, intended to replace capacity lost within the enterprise due to a planned event such as a facility upgrade or system relocation. This is a new offering and is available only on the System z10 EC. CPE is similar to CBU in that it is intended to replace lost capacity; however, it differs in its scope and intent. Where CBU addresses disaster recovery scenarios that can take up to three months to remedy, CPE is intended for short-duration events lasting up to 3 days, maximum. Each CPE record, once activated, gives you access to all dormant PUs on the machine that can be configured in any combination of CP capacity or specialty engine types (zIIP, zAAP, SAP, IFL, ICF).

On/Off Capacity on Demand (On/Off CoD) - Temporary access to dormant PUs, intended to augment the existing capacity of a given system. On/Off CoD helps you contain workload spikes that may exceed permanent capacity such that Service Level Agreements cannot be met and business conditions do not justify a permanent upgrade. An On/Off CoD record allows you to temporarily add CP capacity or any and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF) up to the following limits:

  • The quantity of temporary CP capacity ordered is limited by the quantity of purchased CP capacity (permanently active plus unassigned).
  • The quantity of temporary IFLs ordered is limited by quantity of purchased IFLs (permanently active plus unassigned).
  • Temporary use of unassigned CP capacity or unassigned IFLs will not incur a hardware charge.
  • The quantity of permanent zIIPs plus temporary zIIPs cannot exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zIIPs cannot exceed the quantity of permanent zIIPs.
  • The quantity of permanent zAAPs plus temporary zAAPs cannot exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zAAPs cannot exceed the quantity of permanent zAAPs.
  • The quantity of temporary ICFs ordered is limited by the quantity of permanent ICFs as long as the sum of permanent and temporary ICFs is less than or equal to 16.
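The ordering limits in the list above can be expressed as a single validity check. The sketch below is illustrative Python, not IBM code; "purchased" here means permanently active plus unassigned engines, as in the text.

```python
# Illustrative check of the On/Off CoD ordering limits listed above.
def onoff_cod_order_valid(purchased_cp, purchased_ifl,
                          perm_ziip, perm_zaap, perm_icf,
                          tmp_cp, tmp_ifl, tmp_ziip, tmp_zaap, tmp_icf):
    return all([
        tmp_cp <= purchased_cp,                         # temp CP capacity limit
        tmp_ifl <= purchased_ifl,                       # temp IFL limit
        perm_ziip + tmp_ziip <= purchased_cp + tmp_cp,  # zIIP-to-CP ratio
        tmp_ziip <= perm_ziip,                          # temp zIIPs vs permanent
        perm_zaap + tmp_zaap <= purchased_cp + tmp_cp,  # zAAP-to-CP ratio
        tmp_zaap <= perm_zaap,                          # temp zAAPs vs permanent
        tmp_icf <= perm_icf,                            # temp ICF limit
        perm_icf + tmp_icf <= 16,                       # ICF ceiling
    ])

# A valid order: doubling 4 CPs, 2 IFLs, 2 zIIPs, 1 ICF.
print(onoff_cod_order_valid(4, 2, 2, 0, 1, 4, 2, 2, 0, 1))  # True
# Invalid: 3 temporary zIIPs exceed the 2 permanent zIIPs.
print(onoff_cod_order_valid(4, 2, 2, 0, 1, 4, 2, 3, 0, 1))  # False
```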

Although the System z10 EC will allow up to four temporary records of any type to be installed, only one temporary On/Off CoD record may be active at any given time. An On/Off CoD record may be active while other temporary records are active.

Capacity provisioning - An installed On/Off CoD record is a necessary prerequisite for automated control of temporary capacity through z/OS MVS Capacity Provisioning. z/OS MVS Capacity provisioning allows you to set up rules defining the circumstances under which additional capacity should be provisioned in order to fulfill a specific business need. The rules are based on criteria, such as: a specific application, the maximum additional capacity that should be activated, time and workload conditions. This support provides a fast response to capacity changes and ensures sufficient processing power will be available with the least possible delay even if workloads fluctuate. See z/OS MVS Capacity Provisioning User's Guide (SA33-8299) for more information.

On/Off CoD Test - On/Off CoD allows for a no-charge test. No IBM charges are assessed for the test, including IBM charges associated with temporary hardware capacity, IBM software, or IBM maintenance. This test can be used to validate the processes to download, stage, install, activate, and deactivate On/Off CoD capacity nondisruptively. Each On/Off CoD-enabled server is entitled to only one no-charge test. This test may last up to a maximum duration of 24 hours, commencing upon the activation of any capacity resources contained in the On/Off CoD record. Activation levels of capacity may change during the 24-hour test period. The On/Off CoD test automatically terminates at the end of the 24-hour period. In addition to validating the On/Off CoD function within your environment, you may choose to use this test as a training session for your personnel who are authorized to activate On/Off CoD.

Capacity on Demand - Permanent Capacity

Customer Initiated Upgrade (CIU) facility: When your business needs additional capacity quickly, Customer Initiated Upgrade (CIU) is designed to deliver it. CIU is designed to allow you to respond to sudden increased capacity requirements by requesting a System z10 EC PU and/or memory upgrade via the Web, using IBM Resource Link, and downloading and applying it to your System z10 EC server using your system's Remote Support connection. Further, with the Express option on CIU, an upgrade may be made available for installation as fast as within a few hours after order submission.

Permanent upgrades: Orders (MESs) of all PU types and memory for System z10 EC servers that can be delivered by Licensed Internal Code, Control Code (LIC-CC) are eligible for CIU delivery. CIU upgrades may be performed up to the maximum available processor and memory resources on the installed server, as configured. While capacity upgrades to the server itself are concurrent, your software may not be able to take advantage of the increased capacity without performing an Initial Programming Load (IPL).

Increased flexibility with z/VM-mode partitions: System z10 EC provides for the definition of a z/VM-mode partition (LPAR) containing a mix of processor types including CPs and specialty processors (IFLs, zIIPs, zAAPs, and ICFs). With the planned z/VM support, this new capability increases flexibility and simplifies systems management by allowing z/VM to manage guests that run Linux on System z on IFLs, run z/VSE and z/OS on CPs, offload z/OS system software overhead, such as DB2 workloads, to zIIPs, and provide an economical Java execution environment under z/OS on zAAPs, all in the same z/VM LPAR.

HMC system support: The functions available on the Hardware Management Console (HMC) version 2.10.0 as described apply exclusively to z10 EC. However, the HMC version 2.10.0 will continue to support the systems as shown.

The 2.10.0 HMC will continue to support up to two 10/100 Mbps Ethernet LANs. Token Ring LAN is not supported. The 2.10.0 HMC applications have been updated to support HMC hardware without a diskette drive. DVD-RAM, CD-ROM, and/or USB flash memory drive media will be used.

 Family           Machine Type    Firmware Driver       SE Version
 ------           ------------    ---------------       ----------
 z10 EC             2097             73                 2.10.0
 z9 BC              2096             67                 2.9.2
 z9 EC              2094             67                 2.9.2
 z890               2086             55                 1.8.2
 z990               2084             55                 1.8.2
 z800               2066             3G                 1.7.3
 z900               2064             3G                 1.7.3
 9672 G6            9672/9674        26                 1.6.2
 9672 G5            9672/9674        26                 1.6.2
 

TCP/IP Version 6 (IPv6)

HMC version 2.10.0 and Support Element (SE) version 2.10.0 can now communicate using TCP/IP Version 4 (IPv4), TCP/IP Version 6 (IPv6), or both. It is no longer necessary to assign a static IP address to an SE if it only needs to communicate with HMCs on the same subnet. An HMC and SE can use IPv6 link-local addresses to communicate with each other. HMC/SE support is addressing the following requirements:

  • The availability of addresses in the IPv4 address space is becoming increasingly scarce.
  • The demand for IPv6 support is high in Asia/Pacific countries since many companies are deploying IPv6.
  • The U.S. Department of Defense and other U.S. Government agencies are requiring IPv6 support for any products purchased after June 2008.
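The IPv6 link-local addresses that the HMC and SE can use to communicate on the same subnet come from the fe80::/10 range. The sketch below uses Python's standard ipaddress module to classify addresses; it is generic IPv6 logic for illustration, not HMC code.

```python
# Classify addresses as link-local or not (generic IPv6/IPv4 logic,
# illustrating the kind of addresses the HMC/SE can use on one subnet).
import ipaddress

for addr in ("fe80::1", "2001:db8::1", "192.168.1.10"):
    ip = ipaddress.ip_address(addr)
    print(addr, "link-local" if ip.is_link_local else "not link-local")
```

Only the fe80::/10 address prints as link-local; a routable IPv6 address and a private IPv4 address do not.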

More information on the U.S. government requirements can be found at the following URLs:

http://www.whitehouse.gov/omb/memoranda/fy2005/m05-22.pdf

http://www.whitehouse.gov/omb/egov/documents/IPv6_FAQs.pdf

Capacity On Demand

The HMC version 2.10.0 and SE version 2.10.0 will support new, more flexible controls for temporary processor upgrades. You can temporarily add processors using the On/Off CoD (Capacity on Demand) feature (FC 9896), the CBU (Capacity BackUp) feature (FC 9910), or the CPE (Capacity for Planned Events) feature (FC 9912). Highlights of these new controls include the following.

  • Partial activation - You can choose partial activation of resources up to the maximum you ordered.
  • On/Off CoD record reuse - Each On/Off CoD record is initially active for 180 days. This record can be activated, deactivated, and reactivated many times while the record remains active. If increased capacity is needed for a longer period of time, or if you want to increase processor maximum capacities in the record, you can 'replenish' the record.
  • Permanent upgrade while temporary capacity is active - You can add permanent processor or memory capacity while temporary On/Off CoD, CBU, or CPE records are active.
  • Multiple records can be simultaneously active - Up to 4 records (On/Off CoD, CBU, and CPE) can be active at any given time; however, only one of them may be an On/Off CoD record.
  • Automatic Deactivation - When a record expires, the resource is automatically deactivated.
    • The record will not be deactivated if it means removing a dedicated processor or the last of a specific processor type.
    • Expiration warning messages will continue to be provided on the HMC prior to the date of expiration.
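The activation rules above (up to four active temporary records, at most one of them On/Off CoD) can be sketched as a simple admission check. This is illustrative Python, not the HMC/SE implementation.

```python
# Illustrative sketch of the temporary-record activation rules stated above.
MAX_ACTIVE_RECORDS = 4  # On/Off CoD, CBU, and CPE combined

def can_activate(active_types, new_type):
    """active_types: list of currently active record types, e.g. ["CBU"]."""
    if len(active_types) >= MAX_ACTIVE_RECORDS:
        return False  # already at the four-record ceiling
    if new_type == "On/Off CoD" and "On/Off CoD" in active_types:
        return False  # only one On/Off CoD record may be active
    return True

print(can_activate(["CBU", "CPE"], "On/Off CoD"))               # True
print(can_activate(["On/Off CoD"], "On/Off CoD"))               # False
print(can_activate(["CBU", "CBU", "CPE", "On/Off CoD"], "CBU")) # False
```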

SNMP API (Simple Network Management Protocol Application Programming Interface) enhancements have also been made for the new Capacity On Demand features. More information can be found in the System z10 Enterprise Class Capacity On Demand User's Guide, SC28-6871.

Enhanced installation support for z/VM using the HMC

HMC version 2.10.0 along with SE version 2.10.0 on System z10 EC will now provide the ability to install Linux on System z in a z/VM virtual machine using the Hardware Management Console (HMC) DVD drive. This new function does not require a network connection between z/VM and the HMC, but instead, uses the existing communication path between the Support Element (SE) and the HMC.

Using the legacy support and the planned z/VM future support, z/VM can be installed in an LPAR and both z/VM and Linux on System z can be installed in a virtual machine from the HMC DVD drive without requiring any network setup or a connection between an LPAR and the HMC.

This addresses customer concerns about security and the additional configuration effort required by the only previous solution, an external network connection from the HMC to the z/VM image.

Dynamic Enhancements

There are two new features which provide more dynamic capabilities without having to do preplanning.

Dynamic Add/Remove Cryptos

  • The "Change LPAR Cryptographic Controls" task allows dynamic changes without requiring a partition reactivation. Primary scenarios are as follows:
    • Dynamically add a Crypto to a partition for the first time
    • Dynamically add a Crypto to a partition already using Crypto
    • Dynamically remove Crypto from a partition

Note: Moving a Crypto from one partition to another is done via Remove/Add.

Note: The above tasks don't require the Crypto hardware to be installed in the system.

Note: This task can Change Running System and/or Save to Profiles.

Note: A Usage Domain Zeroize task will be provided to clear the appropriate partition Crypto keys for a given Usage Domain when removing a Crypto from a partition.

Dynamic Add Logical CPs without Preplanning

  • Previously, the Image Profile defined the initial and reserved values for the different processor types for that partition. If those values were not defined prior to partition activation/IPL, they could only be updated by reactivating that partition (including re-IPL).
  • The HMC/SE will now provide a task called Logical Processor Add which can:
    • Increase the "reserved" value for a given processor type (for example, GP, zAAP, zIIP, IFL)
    • Add a new processor type which is not in use yet for that partition.
    • Increase the "initial" value for a given processor type
    • "Change Running System" and/or "Save to Profiles"
  • Currently, exploitation of this support is limited to z/VM 5.3 with PTFs.

Enhanced Driver Maintenance (EDM)

There are several enhancements that have been made to the HMC/SE based on the feedback from the System z9 Enhanced Driver Maintenance field experience. Reliability, Availability, and Serviceability (RAS) enhancements were made. One example is a change to better handle intermittent customer network issues. EDM performance improvements will also be provided. Finally, new EDM user interface features were added to allow for customers and service personnel to better plan for the EDM. An example is a new option to check all licensed internal code change update EDM requirements. This option can be executed in advance of the EDM preload or activate.

Change Management

There were several enhancements made on the HMC/SE which provide more information for customers and service personnel as well as provide more flexibility.

The Query Channel/Crypto Configure Off/On Pending task will provide specific details on currently active Licensed Internal Code (LIC) change level and the levels which will be active after the Configure Off/On. In addition, the user will have the ability to determine which, if any, channels or Crypto Express2 features will require a configure off/on for a future LIC update process.

Customers and service personnel will be given the ability to redefine OSA-Express or Crypto Express2 LIC updates to be Configured Off/On if they desire the update to be applied to one port or Crypto at a time rather than all at once for the same port/Crypto type.

The System Information task has been updated to explicitly show any conditions where a LIC change update may not be fully active until an additional exception action is taken. Such conditions are generally exception cases, but the information is now readily available in this one task.

Power/Thermal Monitoring

On System z9, IBM introduced power/thermal monitoring support with the HMC System Activity Display (SAD) task providing power consumption and air input temperature. On System z10, the HMC will now provide support for the Active Energy Manager (AEM) which will display power consumption/air input temperature as well as exhaust temperature. AEM will also provide some limited status/configuration information which might assist in explaining changes to the power consumption. AEM is exclusive to System z10.

Panel Wizards

Panel wizards were added to the HMC and SE in order to improve the user interface. The purpose of the wizards is to guide users through the panel options, provide recommended defaults where possible, and make input and change of options easier to understand. The following wizards were added. (Note that the existing tasks that the wizards enhance are still available.)

  • Manage User Wizard - provides a wizard for the following tasks:
    • User Profiles
    • Customize User Controls
    • Password Profiles
  • Image Profile Wizard
    • Initial stage of a wizard for Customizing Image Activation Profiles. Further enhancements are being investigated for the future.

z/VM Image Mode

On System z9, the supported Activation Image Profile Modes included the following: (Note that all of these modes have varying rules on what combination of processors and shared versus dedicated processors are allowed).

  • ESA/390 - Supports CPs, zAAPs, & zIIPs
  • ESA/390 TPF - Supports CPs
  • Coupling Facility - Supports CPs & ICFs
  • Linux only - Supports CPs & IFLs

The HMC version 2.10.0 and SE version 2.10.0 will support an additional Activation Image Profile mode called z/VM. This image mode will support CPs, zAAPs, zIIPs, ICFs, and IFLs. It will allow all the varying rules and processor combinations in the above modes. The only requirement is that z/VM is the base operating system in that Image. This allows for easier Image Profile planning for whatever guest operating systems may run in that z/VM image. This also allows running different operating systems within that z/VM image for different purposes/processor requirements.

The key advantage of this support is for environments where customers need to use z/VM to host Linux and z/OS or z/VSE guests in the same "box": they no longer have to artificially separate the management of those two environments if they do not want to. They can manage one z/VM image to host the entire collection of guests they want to deploy.

SNMP API Enhancements

In addition to the Capacity On Demand Simple Network Management Protocol Application Programming Interface (SNMP API) new features, the following SNMP API enhancements are also available:

  • Query Active Licensed Internal Code Change Levels API
    • Returns Active Licensed Internal Code Change Levels
    • Also returns whether any exception conditions exist for Channel/Crypto Configure Off/On, Coupling Facility Control Code (CFCC) Reactivation, or Activation on next Power On Reset/System Activate.
  • Disabled Wait API Event
    • Previously, SNMP Hardware Message Events had to be parsed for text of Hard Event, and there was no automation interface to obtain the Program Status Word (PSW).
    • This new SNMP Disabled Wait Event contains the PSW, Image Name, Partition ID, CPC Serial Number, and CPC Name, and eliminates any need to parse the text of Hardware Message Events.
  • Query PSW API
    • New API support for obtaining PSW
    • Only valid if the Image is in a not-operating state.
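The Disabled Wait event fields listed above can be pictured as a simple record. The sketch below is an illustrative data shape only, not the actual SNMP MIB or API, and all sample values are hypothetical.

```python
# Illustrative structure for the Disabled Wait event fields named above
# (PSW, Image Name, Partition ID, CPC Serial Number, CPC Name).
from dataclasses import dataclass

@dataclass
class DisabledWaitEvent:
    psw: str            # Program Status Word at the disabled wait
    image_name: str     # name of the affected Image (LPAR)
    partition_id: int
    cpc_serial: str
    cpc_name: str

# Hypothetical sample values for illustration only.
event = DisabledWaitEvent(
    psw="000A000000000000 0000000000000444",
    image_name="PRODLPAR",
    partition_id=1,
    cpc_serial="0000002097XX",
    cpc_name="CPC01",
)
print(event.image_name, event.partition_id)
```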

CIM Automation APIs

The HMC will support Common Information Model (CIM) as an additional systems management API. The focus is on attribute query and operational management functions for System z: CPCs, Images, Activation Profiles. The goal is to provide similar functionality as the SNMP API. Some features (e.g. indications (SNMP Trap equivalent), Capacity On Demand, processors) are not implemented in the CIM support yet.

CIM is defined by the Distributed Management Task Force: www.dmtf.org.

The HMC object model extends the DMTF schema version 2.15. The Object Manager is OpenPegasus (V2.5.2): www.openpegasus.org.

Many toolkits exist to support client scripting. OpenPegasus comes with a C/C++ client toolkit. The Standards Based Linux Instrumentation for Manageability (SBLIM) Java Client (www.sblim.org) includes other useful tools, including a web-based class browser.

The IBM publication Common Information Model (CIM) Management Interface SB10-7154 provides more information on System z10 CIM support.

Universal Lift Tool / Ladders

The Universal Lift Tool / Ladders feature (#3759) is designed to provide customers with enhanced system availability benefits by improving the service and upgrade times for larger, heavier devices. This feature includes a custom lift / lower mechanism that is specifically designed for use with System z10 frames, allowing these procedures to be accomplished more quickly and with fewer people. It is recommended that one of these features be obtained for each customer account / datacenter.

Weight distribution plate

The weight distribution plate is designed to distribute the weight of a frame onto two floor panels in a raised-floor installation.

Certain configurations can weigh up to 2450 pounds per frame. The concentrated load on a caster or a leveling foot can be half of the total frame weight. For a multiple-system installation, one floor panel could carry two casters from two adjacent systems, inducing a highly concentrated load onto a single floor panel. The weight distribution plate is designed to distribute the load over two floor panels and to eliminate a highly concentrated load on a single floor panel.

You are responsible for consulting with the floor tile manufacturer to determine the load rating of the floor tile and the pedestal structure supporting the floor tiles. Depending on the type of raised floors and the floor panels, additional panel supports (pedestals) may be required to restore or improve the structural integrity of the panel.

Note: Cable cutouts on a floor panel will significantly reduce the floor tile load rating (up to 50%).
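The floor-loading figures above can be worked through numerically: a frame can weigh up to 2450 lb, one caster can carry half of that, and a cable cutout can reduce a tile's rating by up to 50%. The sketch below is illustrative arithmetic only; actual tile ratings must come from the floor tile manufacturer, as the text states.

```python
# Worked arithmetic for the floor-loading notes above (illustrative only).
frame_weight_lb = 2450
caster_load_lb = frame_weight_lb / 2  # one caster can carry half the frame
cutout_derating = 0.5                 # worst-case rating loss from a cutout

def tile_ok(tile_rating_lb, has_cutout):
    """Check a tile against the worst case in the text: two casters
    from two adjacent frames resting on one floor panel."""
    effective = tile_rating_lb * (cutout_derating if has_cutout else 1.0)
    return effective >= 2 * caster_load_lb

print(caster_load_lb)        # 1225.0 lb on a single caster
print(tile_ok(3000, False))  # True: 3000 lb rating covers 2450 lb
print(tile_ok(3000, True))   # False: derated to 1500 lb, below 2450 lb
```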

Processor Unit Summary

Listed below are the minimums and maximums of processor units that customers may permanently purchase. The feature codes affected are identified in parentheses.

      Total   CP7s*    IFLs       ICFs    zAAPs   zIIPs   SAPs
       PUs  (#6810) (#6811/6816) (#6812) (#6814) (#6815) (#6813)
Model Avail Min/Max   Min/Max    Min/Max Min/Max Min/Max Min/Max
----- ----- ------- ------------ ------- ------- ------- -------
E12    12    0 - 12   0 - 12     0 - 12  0 - 06  0 - 06   0 - 03
E26    26    0 - 26   0 - 26     0 - 16  0 - 13  0 - 13   0 - 07
E40    40    0 - 40   0 - 40     0 - 16  0 - 20  0 - 20   0 - 11
E56    56    0 - 56   0 - 56     0 - 16  0 - 28  0 - 28   0 - 18
E64    64    0 - 64   0 - 64     0 - 16  0 - 32  0 - 32   0 - 21
 

Note: Subcapacity models, CP4 (#6807), CP5 (#6808), and CP6 (#6809), can have a maximum of 12 PUs.

Note: One CP (#6807, #6808, #6809, or #6810), IFL (#6811) or ICF (#6812) is required for any model.

Note: The total number of PUs purchased cannot exceed the total number available for that model.

Note: One CP (#6807, #6808, #6809, or #6810) must be installed prior to, or concurrently with, the installation of any zAAPs.

Note: The total number of zAAPs installed must be less than or equal to the sum of the active CPs (#6807, #6808, #6809, or #6810) installed on any machine.

Note: There are two spares per system.

Note: The number of SAPs provided to the customer as standard PUs is as follows:

  • Model E12 = Three SAPs
  • Model E26 = Six SAPs
  • Model E40 = Nine SAPs
  • Model E56 = Ten SAPs
  • Model E64 = Eleven SAPs
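The ordering rules in the notes above (at least one CP, IFL, or ICF; total PUs not exceeding the model's available PUs; zAAPs not exceeding active CPs) can be sketched as a single check. This is illustrative Python, not IBM configuration code.

```python
# Illustrative check of the PU ordering rules in the notes above.
MODEL_PUS = {"E12": 12, "E26": 26, "E40": 40, "E56": 56, "E64": 64}
STANDARD_SAPS = {"E12": 3, "E26": 6, "E40": 9, "E56": 10, "E64": 11}

def pu_order_valid(model, cps, ifls, icfs, zaaps, ziips, saps):
    total = cps + ifls + icfs + zaaps + ziips + saps
    return (cps + ifls + icfs >= 1            # at least one CP, IFL, or ICF
            and total <= MODEL_PUS[model]     # within the model's PU count
            and zaaps <= cps)                 # zAAPs cannot exceed active CPs

print(pu_order_valid("E12", 4, 2, 0, 2, 2, 0))   # True: 10 PUs, zAAPs <= CPs
print(pu_order_valid("E12", 0, 0, 0, 1, 0, 0))   # False: no CP, IFL, or ICF
print(pu_order_valid("E26", 20, 8, 0, 0, 0, 0))  # False: 28 PUs exceed 26
```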

Accessibility by people with disabilities

A U.S. Section 508 Voluntary Product Accessibility Template (VPAT) containing details on accessibility compliance can be requested at:

http://www.ibm.com/able/product_accessibility/index.html

Section 508 of the US Rehabilitation Act

System z10 Enterprise Class servers are capable on delivery, when used in accordance with IBM's associated documentation, of satisfying the applicable requirements of Section 508 of the Rehabilitation Act of 1973, 29 U.S.C. Section 794d, as implemented by 36 C.F.R. Part 1194, provided that any Assistive Technology used with the Product properly interoperates with it.
Back to topBack to top
 

Product positioning
The IBM System z10 EC represents both an evolution and a revolution of the IBM mainframe. With a modular design for affordable scalability and availability, the z10 EC 701 offers performance improvements of up to 1.62 times that of the z9 EC 701, and up to 1.50 times other z9 EC configurations for equivalent numbers of processors. It provides up to 1.5 TB of total memory, up to 12 PUs that can be defined for subcapacity use, and a new host bus interface using InfiniBand with a link data rate of 6 GBps. With a design for affordable scalability, the z10 EC will continue to offer investment protection and improved price/performance with upgrades. Built on a foundation that improves recovery from unplanned outages and reduces planned outages, the z10 EC goes further to reduce preplanning requirements by delivering and reserving a fixed Hardware System Area (HSA), and by offering just-in-time deployment of resources that allows greater flexibility in defining and executing temporary capacity needs. The performance of the z10 EC is designed to improve application performance, support more transactions, increase scalability, and assist in consolidation of workloads.
Back to topBack to top
 
Models

Model summary matrix

Model PUs Memory IB I/O Cages CHPIDS
E12 1 to 12 16 to 352 GB 16 1 to 3 960
E26 1 to 26 16 to 752 GB 32 1 to 3 1024
E40 1 to 40 16 to 1136 GB 40 1 to 3 1024
E56 1 to 56 16 to 1520 GB 48 1 to 3 1024
E64 1 to 64 16 to 1520 GB 48 1 to 3 1024

Note: A portion of the total memory is delivered and reserved for fixed HSA

Note: The maximum amount of memory that can be configured for a single LPAR is 1 TB

Note: The addition of the third and fourth books requires a reduction in the number of fanout cards plugged, to increase cooling around the MCM.

Note: Each LCSS supports up to 256 CHPIDS.

Customer setup (CSU)

Customer setup is not available on this machine.

Devices supported

Peripheral hardware and device attachments

IBM devices previously attached to IBM System z9 and zSeries servers are supported for attachment to System z10 EC channels, unless otherwise noted. The subject I/O devices must meet ESCON or FICON architecture requirements to be supported. I/O devices that meet OEMI architecture requirements are supported only using an external converter. Prerequisite Engineering Change Levels may be required. For further detail, contact IBM service personnel.

While the z10 EC supports devices as described above, IBM does not commit to provide support or service for an IBM device that has reached its End of Service effective date as announced by IBM.

Note: IBM cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions regarding the capabilities of non-IBM products should be addressed to the suppliers of those products.

For a list of the current supported FICON devices, refer to the following Web site:

http://www.ibm.com/systems/z/connectivity/

Model conversions

Model Conversions - Hardware upgrades:

   From             To
-------------   -------------
M/T     Model   M/T     Model      Description
----    -----   ----    -----      ------------
2084    A08     2097    E12   (*)  A08  to  E12
2084    A08     2097    E26   (*)  A08  to  E26
2084    A08     2097    E40   (*)  A08  to  E40
2084    A08     2097    E56   (*)  A08  to  E56
2084    A08     2097    E64   (*)  A08  to  E64
2084    B16     2097    E12   (*)  B16  to  E12
2084    B16     2097    E26   (*)  B16  to  E26
2084    B16     2097    E40   (*)  B16  to  E40
2084    B16     2097    E56   (*)  B16  to  E56
2084    B16     2097    E64   (*)  B16  to  E64
2084    C24     2097    E12   (*)  C24  to  E12
2084    C24     2097    E26   (*)  C24  to  E26
2084    C24     2097    E40   (*)  C24  to  E40
2084    C24     2097    E56   (*)  C24  to  E56
2084    C24     2097    E64   (*)  C24  to  E64
2084    D32     2097    E12   (*)  D32  to  E12
2084    D32     2097    E26   (*)  D32  to  E26
2084    D32     2097    E40   (*)  D32  to  E40
2084    D32     2097    E56   (*)  D32  to  E56
2084    D32     2097    E64   (*)  D32  to  E64
 
2094    S08     2097    E12   (*)  S08  to  E12
2094    S08     2097    E26   (*)  S08  to  E26
2094    S08     2097    E40   (*)  S08  to  E40
2094    S08     2097    E56   (*)  S08  to  E56
2094    S08     2097    E64   (*)  S08  to  E64
2094    S18     2097    E12   (*)  S18  to  E12
2094    S18     2097    E26   (*)  S18  to  E26
2094    S18     2097    E40   (*)  S18  to  E40
2094    S18     2097    E56   (*)  S18  to  E56
2094    S18     2097    E64   (*)  S18  to  E64
2094    S28     2097    E12   (*)  S28  to  E12
2094    S28     2097    E26   (*)  S28  to  E26
2094    S28     2097    E40   (*)  S28  to  E40
2094    S28     2097    E56   (*)  S28  to  E56
2094    S28     2097    E64   (*)  S28  to  E64
2094    S38     2097    E12   (*)  S38  to  E12
2094    S38     2097    E26   (*)  S38  to  E26
2094    S38     2097    E40   (*)  S38  to  E40
2094    S38     2097    E56   (*)  S38  to  E56
2094    S38     2097    E64   (*)  S38  to  E64
2094    S54     2097    E12   (*)  S54  to  E12
2094    S54     2097    E26   (*)  S54  to  E26
2094    S54     2097    E40   (*)  S54  to  E40
2094    S54     2097    E56   (*)  S54  to  E56
2094    S54     2097    E64   (*)  S54  to  E64
 
2097    E12     2097    E26   (*)  E12  to  E26
2097    E12     2097    E40   (*)  E12  to  E40
2097    E12     2097    E56   (*)  E12  to  E56
2097    E12     2097    E64   (*)  E12  to  E64
2097    E26     2097    E40   (*)  E26  to  E40
2097    E26     2097    E56   (*)  E26  to  E56
2097    E26     2097    E64   (*)  E26  to  E64
2097    E40     2097    E56   (*)  E40  to  E56
2097    E40     2097    E64   (*)  E40  to  E64
2097    E56     2097    E64   (*)  E56  to  E64
 

Feature conversions

See Worldwide Customer Letter for feature conversion list.
Back to top
 

Technical description
TOC Link Physical specifications TOC Link Operating environment TOC Link Limitations
TOC Link Hardware requirements TOC Link Software requirements


Physical specifications

Dimensions:

                         Depth     Width      Height
                         -----     -----      ------
System with All Covers
  - Inches                71.0      61.6       79.26
  - Centimeter           185.4     156.5      201.32
 
System with Covers and Reduction
  - Inches                71.0      61.6       70.3
  - Centimeter           185.4     156.5      178.5
 
Each Frame With One Side Cover and Without Packaging
   - Inches               50.0      30.7       79.26
   - Centimeter          127.0      78.0      201.32
 
Each Frame on Casters with One Side Cover and With Packaging
(Domestic)
   - Inches               51.4      32.4       79.76
   - Centimeter          130.6      82.2      202.58
 
Each Frame With One Side Cover and With Packaging (ARBO Crate)
   - Inches               51.5      36.5       87.6
   - Centimeter          130.8      92.7      222.5
 

Approximate weight:

                         New Build                New Build
                          Minimum                  Maximum
                          System                   System
                          Model E12               Model E64
                        One I/O Cage           Three I/O Cages
                        ------------           ---------------
System with IBF Feature
  -  kg                   1448                      2271
  -  lb                   3258                      5110
System without IBF Feature
  -  kg                   1248                      1968
  -  lb                   2807                      4430
 

To ensure installability and serviceability in non-IBM industry-standard racks, review the installation planning information for any product-specific installation requirements.

Operating environment

  • Temperature:
    • 10 to 32 degrees C (50 to 89.6 degrees F) for all models up to 900 meters; maximum ambient reduces 1 degree C per 300 meters above 900 meters
  • Relative Humidity: 8 to 80% (percent)
  • Wet Bulb (Caloric Value): 23 degrees C (73 degrees F) Operating Mode
  • Max Dew Point: 17 degrees C (62.6 degrees F) - Operating Mode
  • Electrical Power:
    • 27.8 kVA (typically 0.99 PF at 200V)
    • 28.4 kVA (typically 0.97 PF at 380V)
    • 28.95 kVA (typically 0.95 PF at 480V)
  • Capacity of Exhaust: 5950 cubic meters / hour (3500 CFM)
  • Noise Level:
    • Declared A-Weighted Sound Power Level, LWAd(B) = 7.7
    • Declared A-Weighted Sound Pressure Level, LpAm(dB) = 55
  • Leakage and Starting Current: 70 mA / 270 A (~10ms)
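The altitude derating and power-factor figures above can be expressed as simple arithmetic. The sketch below (illustrative only, not part of the sales manual) applies the stated rule of a 32 degree C maximum ambient up to 900 meters, reduced 1 degree C per 300 meters above that, and converts apparent power (kVA) to real power (kW) using the listed power factors:

```python
# Illustrative sketch of the operating-environment arithmetic stated above.
# The function names are our own; only the constants come from the manual.

def max_ambient_c(altitude_m: float) -> float:
    """Maximum allowed ambient temperature (degrees C) at a given altitude.

    32 degrees C up to 900 m, derated 1 degree C per 300 m above 900 m.
    """
    if altitude_m <= 900:
        return 32.0
    return 32.0 - (altitude_m - 900) / 300.0

def real_power_kw(kva: float, power_factor: float) -> float:
    """Real power (kW) drawn, given apparent power (kVA) and power factor."""
    return kva * power_factor

print(max_ambient_c(900))                    # 32.0
print(max_ambient_c(1500))                   # 30.0 (600 m above 900 m -> -2 C)
print(round(real_power_kw(27.8, 0.99), 2))   # 27.52 kW at 200V
```

For example, at 1,500 meters the 200V figure of 27.8 kVA at 0.99 PF still applies, but the permitted ambient drops to 30 degrees C.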

Limitations

Not applicable.

Hardware requirements

The hardware requirements for the System z10 EC and its features and functions are identified below.

Machine Change Levels (MCLs) are required.

Descriptions of the MCLs are available through Resource Link at:

http://www.ibm.com/servers/resourcelink

Peripheral hardware and device attachments

IBM devices previously attached to IBM System z9 and zSeries servers are supported for attachment to System z10 EC channels, unless otherwise noted. The subject I/O devices must meet ESCON or FICON architecture requirements to be supported. I/O devices that meet OEMI architecture requirements are supported only using an external converter. Prerequisite Engineering Change Levels may be required. For further detail, contact IBM service personnel.

While the z10 EC supports devices as described above, IBM does not commit to provide support or service for an IBM device that has reached its End of Service effective date as announced by IBM.

Note: IBM cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions regarding the capabilities of non-IBM products should be addressed to the suppliers of those products.

For a list of the current supported FICON devices, refer to the following Web site:

http://www.ibm.com/systems/z/connectivity/

Software requirements

Operating System Support

Listed are the operating systems and the minimum versions and releases supported by System z10 EC, its functions, and its features. Select the releases appropriate to your operating system environments.

Note: Refer to the z/OS, z/VM, z/VSE subsets of the 2097DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 EC.

System z10 EC requires at a minimum:

  • z/OS V1.7, V1.8, or V1.9, with PTFs.
    • z/OS V1.7 requires the IBM zIIP Support for z/OS V1.7 Web deliverable to be installed to enable HiperDispatch.
  • z/VM V5.2 and V5.3 with the PTFs to allow guests to exploit the System z10 EC at the System z9 functionality level.
  • z/VSE V3.1, V4.1 Compatibility Support with PTFs.
  • z/TPF V1.1 is required to support 64 engines per z/TPF LPAR.
  • TPF V4.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

InfiniBand coupling links on System z10 EC, z9 EC, and z9 BC require at a minimum:

  • z/OS V1.7 with PTFs.
  • z/VM 5.3 to define, modify, and delete a Coupling using InfiniBand link, CHPID type CIB, when z/VM is the controlling LPAR for dynamic I/O.

Hardware Decimal Floating Point on System z10 EC requires at a minimum:

  • z/OS V1.7 with PTFs (for High Level Assembler support).
  • z/OS V1.9 with PTFs for full support, including C/C++.
  • z/VM 5.3.

Capacity provisioning on System z10 EC requires at a minimum:

  • z/OS V1.9 with PTFs (see z/OS MVS Capacity Provisioning User's Guide (SA33-8299) for z/OS functions that must be enabled).
  • Linux on System z - IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

Large Page support (1 megabyte pages) on System z10 EC requires at a minimum:

  • z/OS V1.9 with PTFs.
  • Linux on System z - IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

CP Assist for Cryptographic Function (CPACF) (#3863) on the System z10 EC requires at a minimum:

  • z/OS V1.7 with either the Cryptographic Support for z/OS V1R6/R7 and z/OS.e V1R6/R7 Web deliverable (no longer available), the Enhancements to Cryptographic Support for z/OS and z/OS V1R6/R7 Web deliverable (no longer available), or the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/OS V1.8.
  • z/VM V5.2 for guest exploitation.
  • z/VSE V3.1 and IBM TCP/IP for VSE/ESA V1.5e with PTFs.
  • z/TPF V1.1.
  • TPF V4.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4.3 and RHEL 5.

Enhancements to CP Assist for Cryptographic Function (CPACF) on the System z10 EC require at a minimum:

  • z/OS V1.7, V1.8 or V1.9, with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/VM V5.2 for guest exploitation.
  • z/VSE V4.1 and IBM TCP/IP for VSE/ESA V1.5e with PTFs.
  • Linux on System z - IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

Configurable Crypto Express2 on the System z10 EC requires at a minimum:

  • z/OS V1.7 with the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/V1R7 Web deliverable (no longer available), or with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/OS V1.8.
  • z/VM V5.2 for guest exploitation.
  • z/VSE V3.1 and IBM TCP/IP for VSE/ESA V1.5e.
  • z/TPF V1.1 (acceleration mode only).
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP3 and SLES 10.
    • Red Hat RHEL 4.4 and RHEL 5.1.

Note: z/VSE supports clear-key RSA operations only. Linux on System z and z/VM V5.2, and later, support clear- and secure-key operations.

Note: The Cryptographic Support for z/OS V1.7, V1.8, V1.9 and z/OS.e V1.7 and V1.8 Web deliverables may be obtained at:

http://www.ibm.com/eserver/zseries/zos/downloads

Key management for remote loading of ATM and Point of Sale (POS) keys on System z10 EC requires at a minimum:

  • z/OS V1.7 with the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/V1R7 Web deliverable (no longer available), or with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/OS V1.8.
  • z/VM V5.2 for guest exploitation.

Improved Key Exchange with Non-CCA Cryptographic systems on System z10 EC requires at a minimum:

  • z/OS V1.7.
  • z/VM 5.2 for guest exploitation.

Support for ISO 16609 CBC Mode T-DES Message Authentication (MAC) requirements on System z10 EC requires at a minimum:

  • z/OS V1.7 with the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/V1R7 Web deliverable (no longer available), or with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/OS V1.8.
  • z/VM V5.2 for guest exploitation.

Support for RSA keys up to 4096 bits in Length on System z10 EC requires at a minimum:

  • z/OS V1.7, z/OS V1.8 or z/OS V1.9 with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/VM V5.2 for guest exploitation.

Dynamically Add Crypto to Logical Partition on System z10 EC requires at a minimum:

  • z/OS V1.7, z/OS V1.8 or z/OS V1.9 with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable.
  • z/VM V5.2 for guest exploitation.
  • Linux on System z distributions:
    • Novell SUSE SLES 10 SP1.
    • Red Hat RHEL 5.1.

FICON Express8 (CHPID type FC) when utilizing native FICON or Channel-To-Channel (CTC), on the z10 EC and z10 BC servers requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01).
  • z/VM V5.3.
  • z/VSE V4.1.
  • z/TPF V1.1.
  • TPF V4.1 at PUT 16.
  • Linux on System z distributions:
    • Novell SUSE SLES 9, SLES 10, and SLES 11.
    • Red Hat RHEL 4 and RHEL 5.

FICON Express8 (CHPID type FC) for support of zHPF single-track operations on the z10 EC and z10 BC servers requires at a minimum:

  • z/OS V1.8, V1.9, or V1.10 with PTFs.
  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with PTFs.
  • Linux on System z distributions:
    • IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

FICON Express8 (CHPID type FC) for support of zHPF multitrack operations on the z10 EC and z10 BC servers requires at a minimum:

  • z/OS V1.9 and V1.10 with PTFs.

FICON Express8 (CHPID type FCP) for support of SCSI devices on the z10 EC and z10 BC servers requires at a minimum:

  • z/VM V5.3.
  • z/VSE V4.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9, SLES 10, and SLES 11.
    • Red Hat RHEL 4 and RHEL 5.

FICON Express4 (CHPID type FC), including Channel-To-Channel (CTC), on z10 EC requires at a minimum:

  • z/OS V1.7.
  • z/VM V5.2.
  • z/VSE V3.1.
  • TPF V4.1 at PUT 16.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

FICON Express4 (CHPID type FCP) for support of SCSI disks on z10 EC requires at a minimum:

  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

HiperSockets Layer 2 support on the z10 EC requires at a minimum:

  • z/VM 5.2 for guest exploitation.
  • Linux on System z - IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

HiperSockets Multiple Write Facility on the z10 EC requires at a minimum:

  • z/OS V1.9 with PTFs (Second quarter, 2008).

OSA-Express2 Gigabit Ethernet LX (#3364) and SX (#3365) on the z10 EC requires at a minimum:

  • For CHPID type OSD:
    • z/OS V1.7.
    • z/VM V5.2.
    • z/VSE V3.1.
    • TPF V4.1 at PUT 13 with PTF.
    • z/TPF 1.1.
    • Linux on System z distributions:
      • Novell SUSE SLES 9 and SLES 10.
      • Red Hat RHEL 4 and RHEL 5.
  • For CHPID type OSN in support of OSA-Express2 for NCP:
    • z/OS V1.7 with PTFs.
    • z/VM V5.2.
    • z/VSE 3.1 with PTFs.
    • TPF 4.1.
    • z/TPF 1.1.
    • Linux on System z distributions:
      • Novell SUSE SLES 9 SP2 and SLES 10.
      • Red Hat RHEL 4.3 and RHEL 5.

OSA-Express2 1000BASE-T Ethernet (#3366) requires at a minimum:

  • For CHPID type OSC:
    • z/OS V1.7.
    • z/VM V5.2.
    • z/VSE V3.1.
  • For CHPID type OSD:
    • z/OS V1.7.
    • z/VM V5.2.
    • z/VSE V3.1.
    • TPF V4.1 at PUT 13 with PTF.
    • z/TPF 1.1.
    • Linux on System z distributions:
      • Novell SUSE SLES 9 and SLES 10.
      • Red Hat RHEL 4 and RHEL 5.
  • For CHPID type OSE:
    • z/OS V1.7.
    • z/VM V5.2.
    • z/VSE V3.1.
  • For CHPID type OSN in support of OSA-Express2 for NCP:
    • z/OS V1.7 with PTFs.
    • z/VM V5.2.
    • z/VSE 3.1 with PTFs.
    • TPF 4.1.
    • z/TPF 1.1.
    • Linux on System z distributions:
      • Novell SUSE SLES 9 SP2 and SLES 10.
      • Red Hat RHEL 4.3 and RHEL 5.

OSA-Express2 10 Gigabit Ethernet LR (#3368) on the z10 EC requires at a minimum:

  • Supporting CHPID type OSD only:
    • z/OS V1.7.
    • z/VM V5.2.
    • z/VSE V3.1.
    • TPF 4.1 at PUT 13 with PTFs.
    • z/TPF 1.1.
    • Linux on System z distributions:
      • Novell SUSE SLES 9 and SLES 10.
      • Red Hat RHEL 4 and RHEL 5.

OSA-Express3 10 Gigabit Ethernet LR (#3370) on the z10 EC requires at a minimum:

  • Supporting CHPID type OSD only:
    • z/OS V1.7.
    • z/VM V5.2.
    • z/VSE V3.1.
    • TPF 4.1 at PUT 13 with PTFs.
    • z/TPF 1.1.
    • Linux on System z distributions:
      • Novell SUSE SLES 9 and SLES 10.
      • Red Hat RHEL 4 and RHEL 5.

Back to top
 
Publications

The following publications are available in the Library section of Resource Link(TM):

           Title                                        Order Number
-----------------------------------------------------   ------------
z10 EC System Overview                                  SA22-1084
z10 EC Installation Manual - Physical Planning (IMPP)   GC28-6865
z10 EC PR/SM (TM)  Planning Guide                       SB10-7153
 

The following publications are shipped with the product and available in the Library section of Resource Link:

      Title                                          Order Number
------------------------------                       ------------
z10 EC Installation Manual                           GC28-6864
z10 EC Service Guide                                 GC28-6866
z10 EC Safety Inspection Guide                       GC28-6870
System Safety Notices                                G229-9054
 

The following publications are available in the Library section of Resource Link:

          Title                                        Order Number
------------------------------------------------------ ------------
Application Programming Interfaces for Java            API-JAVA
Application Programming Interfaces                     SB10-7030
Capacity on Demand User's Guide                        SC28-6871
CHPID Mapping Tool User's Guide                        GC28-6825
Common Information Model (CIM) Management Interface    SB10-7154
Coupling Facility Channel I/O Interface Physical Layer SA23-0395
ESCON(R) and FICON CTC Reference                       SB10-7034
ESCON I/O Interface Physical Layer                     SA23-0394
FICON(R) I/O Interface Physical Layer                  SA24-7172
Hardware Management Console Operations Guide (V2.10.0) SC28-6867
IOCP User's Guide                                      SB10-7037
Maintenance Information for Fiber Optic Links          SY27-2597
z10 EC Parts Catalog                                   GC28-6869
Planning for Fiber Optic Links                         GA23-0367
SCSI IPL - Machine Loader Messages                     SC28-6839
Service Guide for HMCs and SEs                         GC28-6861
Service Guide for Trusted Key Entry Workstations       GC28-6862
Standalone IOCP User's Guide                           SB10-7152
Support Element Operations Guide (Version 2.10.0)      SC28-6868
System z Functional Matrix                             ZSW0-1335
OSA-Express Customer's Guide                           SA22-7935
OSA-ICC User's Guide                                   SA22-7990
 

Publications for System z10 Enterprise Class(TM) can be obtained at Resource Link by accessing the following Web site:

www.ibm.com/servers/resourcelink

Using the instructions on the Resource Link panels, obtain a user ID and password. Resource Link has been designed for easy access and navigation.
Back to top
 

Features
TOC Link Features - No charge TOC Link Features - Chargeable TOC Link Feature descriptions
TOC Link Feature exchanges


Features - No charge

COLOR: A specify code is not required.

VOLTAGE: A specify code for line cords feature codes is required and will be provided by the configurator based on the country code of the order. The specify codes listed below must be used when an alternative to the ELINK configurator default is required.

(#8992) 6ft 250V 60A Line Cord (Chicago)

(No Longer Available as of November 9, 2010)

(#8994) 6ft 480V 30A Line Cord (Chicago)

(No Longer Available as of November 9, 2010)

(#8982) 14ft 200-240V LV Line Cord (WT)

(No Longer Available as of June 30, 2012)

(#8985) 14ft 380-415V HV Line Cord (WT)

(No Longer Available as of September 10, 2013)

(#8993) 14ft 250V Cord (US,Can,Jap)

(No Longer Available as of June 30, 2012)

(#8995) 14ft 480V Line Cord (US,Can,Jap)

(No Longer Available as of June 30, 2012)

(#8997) 14 ft 380-415V Zero Halogen

(No Longer Available as of September 10, 2013)

External Cables           Description
---------------    -------------------------------
 8982              14ft 200-240V LV Line Cord (WT)
 8985              14ft 380-415V HV Line Cord (WT)
 8992              6ft 250V 60A Line Cord (Chicago)
 8993              14ft 250V Cord (US,Can,Jap)
 8994              6ft 480V 30A Line Cord (Chicago)
 8995              14ft 480V Line Cord (US,Can,Jap)
 8996              14 ft 200-240V Zero Halogen
 8997              14 ft 380-415V Zero Halogen
 

LANGUAGE: A specify code for language is required and will be provided by the configurator based on the country code of the order. The specify codes listed below must be used when an alternative to the ELINK configurator default is required.

Note: All of the following No Longer Available as of June 30, 2012.

Specify Code       Description
------------       -----------------------------------
 2924              US English
 2928              France
 2929              German
 2930              Spanish Non-Spain
 2931              Spain
 2932              Italian
 2935              Canadian French
 2978              Portuguese
 2979              Brazilian Portuguese
 2980              UK English
 2983              Norwegian
 2987              Sweden Finland
 2988              Netherlands
 2989              Belgian French
 2993              Denmark
 2997              Swiss French, German
 5560              Luxembourg Orders Placed in Belgium
 5561              Iceland Orders Placed in Denmark
 5562              China Orders Placed in Hong Kong
 

Features - Chargeable

                                Machine
Description                     Type     Model  Feature
-----------------------------   -------  -----  -------
System z10 EC                      2097   E12
                                          E26
                                          E40
                                          E56
                                          E64
 
HMC w/Dual EN                                   0084
SE-EN Switch (former HUB)                       0089
HMC w/Dual EN                                   0091
I/O Cage ISC-D Airflow                          0113
I/O Cage Full Card Airflow                      0114
1 CPE Capacity Unit                             0116
100 CPE Capacity Unit                           0117
10000 CPE Capacity Unit                         0118
1 CPE Capacity Unit-IFL                         0119
100 CPE Capacity Unit-IFL                       0120
1 CPE Capacity Unit-ICF                         0121
100 CPE Capacity Unit-ICF                       0122
1 CPE Capacity Unit-zAAP                        0123
100 CPE Capacity Unit-zAAP                      0124
1 CPE Capacity Unit-zIIP                        0125
100 CPE Capacity Unit-zIIP                      0126
1 CPE Capacity Unit-SAP                         0127
100 CPE Capacity Unit-SAP                       0128
CEC                                             0156
HCA2-C Fanout                                   0162
HCA2-O Fanout                                   0163
MBA Fanout Airflow                              0165
ISC-Mother Card                                 0217
ISC-Daughter Card                               0218
ISC-3 link on F/C 0218                          0219
ISAOPT Enablement                               0251
IFB-MP Daughter Card                            0326
STI-A8 Mother Card                              0327
STI-A4 Mother Card                              0328
TKE workstation                                 0839
TKE workstation                                 0841
TKE 5.2 LIC                                     0857
TKE 7.0 LIC                                     0860
Crypto Express2                                 0863
TKE Smart Card Reader                           0887
TKE addl smart cards                            0888
UID Label for Dpt of Defense                    0998
STP Enablement                                  1021
EMEA Special Operations                         1022
4 GB Memory DIMM(8/feature)                     1604
8 GB Memory DIMM(8/feature)                     1608
LICC Ship Via Net Indicator                     1750
CUoD Ctl for Plan Ahead                         1995
CUoD-Concurrent Conditioning                    1999
Line Cord Plan Ahead                            2000
16-Port ESCON Card                              2323
ESCON Channel port                              2324
STI Rebalance                                   2400
Min Infrastructure Pricing                      0001
Memory DIMM Airflow                             2698
Preplanned memory                               1996
Preplanned memory activation                    1997
16 GB Memory                                    2201
32 GB Memory                                    2202
48 GB Memory                                    2203
64 GB Memory                                    2204
80 GB Memory                                    2205
96 GB Memory                                    2206
112 GB Memory                                   2207
128 GB Memory                                   2208
144 GB Memory                                   2209
160 GB Memory                                   2243
176 GB Memory                                   2211
192 GB Memory                                   2212
208 GB Memory                                   2213
224 GB Memory                                   2214
240 GB Memory                                   2215
256 GB Memory                                   2216
288 GB Memory                                   2217
320 GB Memory                                   2218
352 GB Memory                                   2219
US English                                      2924
France                                          2928
German                                          2929
Spanish - Non Spain                             2930
Spain                                           2931
Italian                                         2932
Canadian French                                 2935
Portuguese                                      2978
Brazilian Portuguese                            2979
UK English                                      2980
Norwegian                                       2983
Sweden Finland                                  2987
Netherlands                                     2988
Belgian French                                  2989
Denmark                                         2993
Swiss French, German                            2997
Balanced Power Plan Ahead                       3001
Internal Battery IBF-E                          3211
FICON Express4 10KM LX                          3321
FICON Express4 SX                               3322
FICON Express4 4KM LX                           3324
FICON Express8 10KM LX                          3325
FICON Express8 SX                               3326
OSA-Express3 GbE LX                             3362
OSA-Express3 GbE SX                             3363
OSA-Express2 GbE LX                             3364
OSA-Express2 GbE SX                             3365
OSA-Express2 10 GbE LR                          3368
OSA-Express2 1000BASE-T EN                      3366
OSA-Express3 10 GbE LR                          3370
ICB-4 link                                      3393
CPACF Enablement                                3863
Luxembourg-Belgium ordered                      5560
Iceland-Ordered in Denmark                      5561
China-Ordered in Hong Kong                      5562
17 inch flat panel                              6094
20 inch flat panel                              6095
Power Sequence Controller                       6501
Additional CBU Test                             6805
FQC 1st Bracket/Mounting HW                     7960
FQC Additional Brackets                         7961
MT-RJ 6 ft harness                              7962
MT-RJ 8 ft harness                              7963
MT-RJ 5 ft harness                              7964
LC Dup 6 ft harness                             7965
LC Dup 8.5 ft harness                           7966
LC Dup 5 ft harness                             7967
LC Dup 8.5 ft harness                           7968
Bolt Down Kit-High Raised Fl                    7994
Bolt Down Kit-Low Raised Fl                     7993
6Ft 250V Line Cord, Chi                         8992
14Ft 250V Line-US,Can,Japan                     8993
6Ft 480V 30A Line Cord, Chi                     8994
14Ft 480V Line-US,Can,Japan                     8995
Multi Order Ship Flag                           9000
Multi Order Rec Only-NB                         9001
Multi Order Rec Only-MES                        9002
RPO Action Flag                                 9003
Downgraded PUs Per Request                      9004
On/Off CoD Act IFL Day                          9888
On/Off CoD Act ICF Day                          9889
On/Off CoD Act zAAP Day                         9893
On/Off COD authorization                        9896
On/Off CoD Active CP Day                        9897
Perm upgr authorization                         9898
CIU Activation (Flag)                           9899
On Line CoD Buying (Flag)                       9900
On/Off CoD Act zIIP Day                         9908
On/Off CoD Act SAP Day                          9909
CBU authorization                               9910
CPE authorization                               9912
OPO sales authorization                         9913
Northern Hemisphere                             9930
Southern Hemisphere                             9931
Universal Lift Tool/Ladders                     3759
Weight Distribution Kit                         9970
Height Reduce Ship                              9975
Height Reduction for Return                     9976
z10 EC Site Tool Kit                            9968
CP4                                             6807
CP5                                             6808
CP6                                             6809
CP7                                             6810
IFL                                             6811
ICF                                             6812
SAP (optional)                                  6813
zAAP                                            6814
zIIP                                            6815
Unassigned IFL                                  6816
CBU                                             6818
5 Additional CBU Tests                          6819
1 CBU Year                                      6817
1 CBU CP                                        6820
25 CBU CP                                       6821
1 CBU IFL                                       6822
25 CBU IFL                                      6823
1 CBU ICF                                       6824
25 CBU ICF                                      6825
1 CBU zAAP                                      6826
25 CBU zAAP                                     6827
1 CBU zIIP                                      6828
25 CBU zIIP                                     6829
1 CBU SAP                                       6830
25 CBU SAP                                      6831
CBU Replenishment                               6832
Capacity for Planned Event                      6833
OPO Sales Flag                                  6835
OPO Sales Flag Alteration                       6836
Feature Converted CBU CP                        6837
Feature Converted CBU IFL                       6838
Feature Converted CBU ICF                       6839
Feature Converted CBU zAAP                      6840
Feature Converted CBU zIIP                      6841
Feature Converted CBU SAP                       6842
401 Capacity Marker                             7101
402 Capacity Marker                             7102
403 Capacity Marker                             7103
404 Capacity Marker                             7104
405 Capacity Marker                             7105
406 Capacity Marker                             7106
407 Capacity Marker                             7107
408 Capacity Marker                             7108
409 Capacity Marker                             7109
410 Capacity Marker                             7110
411 Capacity Marker                             7111
412 Capacity Marker                             7112
501 Capacity Marker                             7113
502 Capacity Marker                             7114
503 Capacity Marker                             7115
504 Capacity Marker                             7116
505 Capacity Marker                             7117
506 Capacity Marker                             7118
507 Capacity Marker                             7119
508 Capacity Marker                             7120
509 Capacity Marker                             7121
510 Capacity Marker                             7122
511 Capacity Marker                             7123
512 Capacity Marker                             7124
601 Capacity Marker                             7125
602 Capacity Marker                             7126
603 Capacity Marker                             7127
604 Capacity Marker                             7128
605 Capacity Marker                             7129
606 Capacity Marker                             7130
607 Capacity Marker                             7131
608 Capacity Marker                             7132
609 Capacity Marker                             7133
610 Capacity Marker                             7134
611 Capacity Marker                             7135
612 Capacity Marker                             7136
700 Capacity Marker                             7137
701 Capacity Marker                             7138
702 Capacity Marker                             7139
703 Capacity Marker                             7140
704 Capacity Marker                             7141
705 Capacity Marker                             7142
706 Capacity Marker                             7143
707 Capacity Marker                             7144
708 Capacity Marker                             7145
709 Capacity Marker                             7146
710 Capacity Marker                             7147
711 Capacity Marker                             7148
712 Capacity Marker                             7149
1-Way Processor CP4                             6539
2-Way Processor CP4                             6540
3-Way Processor CP4                             6541
4-Way Processor CP4                             6542
5-Way Processor CP4                             6543
6-Way Processor CP4                             6544
7-Way Processor CP4                             6545
8-Way Processor CP4                             6546
9-Way Processor CP4                             6547
10-Way Processor CP4                            6548
11-Way Processor CP4                            6549
12-Way Processor CP4                            6550
1-Way Processor CP5                             6551
2-Way Processor CP5                             6552
3-Way Processor CP5                             6553
4-Way Processor CP5                             6554
5-Way Processor CP5                             6555
6-Way Processor CP5                             6556
7-Way Processor CP5                             6557
8-Way Processor CP5                             6558
9-Way Processor CP5                             6559
10-Way Processor CP5                            6560
11-Way Processor CP5                            6561
12-Way Processor CP5                            6562
1-Way Processor CP6                             6601
2-Way Processor CP6                             6602
3-Way Processor CP6                             6603
4-Way Processor CP6                             6604
5-Way Processor CP6                             6605
6-Way Processor CP6                             6606
7-Way Processor CP6                             6607
8-Way Processor CP6                             6608
9-Way Processor CP6                             6609
10-Way Processor CP6                            6610
11-Way Processor CP6                            6611
12-Way Processor CP6                            6612
0-Way Processor CP7                             6700
1-Way Processor CP7                             6701
2-Way Processor CP7                             6702
3-Way Processor CP7                             6703
4-Way Processor CP7                             6704
5-Way Processor CP7                             6705
6-Way Processor CP7                             6706
7-Way Processor CP7                             6707
8-Way Processor CP7                             6708
9-Way Processor CP7                             6709
10-Way Processor CP7                            6710
11-Way Processor CP7                            6711
12-Way Processor CP7                            6712
 
System z10 EC                      2097   E12
                                          E26
                                          E40
                                          E56
MBA Fanout Card                                 0164
ICB Cable (new to current)                      0229
ICB Cable (new to new)                          0230
 
 
System z10 EC                      2097   E12
Model E12                                       1117
1 Book, 1 I/O                                   4311
1 Book, 2 I/O                                   4312
1 Book, 3 I/O                                   4313
 
System z10 EC                      2097   E26
Model E26                                       1118
2 Book, 1 I/O                                   4321
2 Book, 2 I/O                                   4322
2 Book, 3 I/O                                   4323
 
System z10 EC                      2097   E40
Model E40                                       1119
3 Book, 1 I/O                                   4331
3 Book, 2 I/O                                   4332
3 Book, 3 I/O                                   4333
 
System z10 EC                      2097   E56
Model E56                                       1120
4 Book, 1 I/O                                   4341
4 Book, 2 I/O                                   4342
4 Book, 3 I/O                                   4343
 
System z10 EC                      2097   E64
Model E64                                       1121
4 Book, 1 I/O                                   4341
4 Book, 2 I/O                                   4342
4 Book, 3 I/O                                   4343
 
 
System z10 EC                      2097   E26
                                          E40
                                          E56
                                          E64
416 GB Memory                                   2221
448 GB Memory                                   2222
480 GB Memory                                   2223
512 GB Memory                                   2224
560 GB Memory                                   2225
608 GB Memory                                   2226
656 GB Memory                                   2227
704 GB Memory                                   2228
752 GB Memory                                   2229
13-Way Processor CP7                            6713
14-Way Processor CP7                            6714
15-Way Processor CP7                            6715
16-Way Processor CP7                            6716
17-Way Processor CP7                            6717
18-Way Processor CP7                            6718
19-Way Processor CP7                            6719
20-Way Processor CP7                            6720
21-Way Processor CP7                            6721
22-Way Processor CP7                            6722
23-Way Processor CP7                            6723
24-Way Processor CP7                            6724
25-Way Processor CP7                            6725
26-Way Processor CP7                            6726
713 Capacity Marker                             7150
714 Capacity Marker                             7151
715 Capacity Marker                             7152
716 Capacity Marker                             7153
717 Capacity Marker                             7154
718 Capacity Marker                             7155
719 Capacity Marker                             7156
720 Capacity Marker                             7157
721 Capacity Marker                             7158
722 Capacity Marker                             7159
723 Capacity Marker                             7160
724 Capacity Marker                             7161
725 Capacity Marker                             7162
726 Capacity Marker                             7163
 
System z10 EC                      2097   E40
                                          E56
                                          E64
800 GB Memory                                   2230
848 GB Memory                                   2231
896 GB Memory                                   2232
944 GB Memory                                   2233
1008 GB Memory                                  2234
1072 GB Memory                                  2235
1136 GB Memory                                  2236
27-Way Processor CP7                            6727
28-Way Processor CP7                            6728
29-Way Processor CP7                            6729
30-Way Processor CP7                            6730
31-Way Processor CP7                            6731
32-Way Processor CP7                            6732
33-Way Processor CP7                            6733
34-Way Processor CP7                            6734
35-Way Processor CP7                            6735
36-Way Processor CP7                            6736
37-Way Processor CP7                            6737
38-Way Processor CP7                            6738
39-Way Processor CP7                            6739
40-Way Processor CP7                            6740
727 Capacity Marker                             7164
728 Capacity Marker                             7165
729 Capacity Marker                             7166
730 Capacity Marker                             7167
731 Capacity Marker                             7168
732 Capacity Marker                             7169
733 Capacity Marker                             7170
734 Capacity Marker                             7171
735 Capacity Marker                             7172
736 Capacity Marker                             7173
737 Capacity Marker                             7174
738 Capacity Marker                             7175
739 Capacity Marker                             7176
740 Capacity Marker                             7177
 
System z10 EC                      2097   E56
                                          E64
1200 GB Memory                                  2237
1264 GB Memory                                  2238
1328 GB Memory                                  2239
1392 GB Memory                                  2240
1456 GB Memory                                  2241
1520 GB Memory                                  2242
41-Way Processor CP7                            6741
42-Way Processor CP7                            6742
43-Way Processor CP7                            6743
44-Way Processor CP7                            6744
45-Way Processor CP7                            6745
46-Way Processor CP7                            6746
47-Way Processor CP7                            6747
48-Way Processor CP7                            6748
49-Way Processor CP7                            6749
50-Way Processor CP7                            6750
51-Way Processor CP7                            6751
52-Way Processor CP7                            6752
53-Way Processor CP7                            6753
54-Way Processor CP7                            6754
55-Way Processor CP7                            6755
56-Way Processor CP7                            6756
741 Capacity Marker                             7178
742 Capacity Marker                             7179
743 Capacity Marker                             7180
744 Capacity Marker                             7181
745 Capacity Marker                             7182
746 Capacity Marker                             7183
747 Capacity Marker                             7184
748 Capacity Marker                             7185
749 Capacity Marker                             7186
750 Capacity Marker                             7187
751 Capacity Marker                             7188
752 Capacity Marker                             7189
753 Capacity Marker                             7190
754 Capacity Marker                             7191
755 Capacity Marker                             7192
756 Capacity Marker                             7193
 
System z10 EC                      2097   E64
57-Way Processor CP7                            6757
58-Way Processor CP7                            6758
59-Way Processor CP7                            6759
60-Way Processor CP7                            6760
61-Way Processor CP7                            6761
62-Way Processor CP7                            6762
63-Way Processor CP7                            6763
64-Way Processor CP7                            6764
757 Capacity Marker                             7194
758 Capacity Marker                             7195
759 Capacity Marker                             7196
760 Capacity Marker                             7197
761 Capacity Marker                             7198
762 Capacity Marker                             7199
763 Capacity Marker                             7200
764 Capacity Marker                             7201
 
 
System z9 EC                    2094      S08
                                          S18
                                          S28
                                          S38
                                          S54
 
HCA1-O fanout                                   0167
 
 
 
System z9 BC                    2096      S07
 
HCA1-O fanout                                   0167
 
 
System z9 EC                    2094      S08
                                          S18
                                          S28
                                          S38
                                          S54
ICB Cable (new to current)                      0229
TKE 5.2 LIC                                     0857
 
 
System z9 BC                    2096      S07
                                          R07
 
ICB Cable (new to current)                      0229
TKE 5.2 LIC                                     0857
 
 
IBM eServer                     2084      A08
zSeries 990                               B16
                                          C24
                                          D32
 
ICB Cable (new to current)                      0229
TKE 5.2 LIC                                     0857
 
 
IBM eServer                     2086      A04
zSeries 890
 
ICB Cable (new to current)                      0229
TKE 5.2 LIC                                     0857
 
The following features are not orderable on the System z10 EC models.
If they are installed at the time of an upgrade to the System z10 EC,
they may be retained.
 
Description                             Feature Code
---------------------                   ------------
HMC                                        0079
HMC                                        0081
TKE workstation                            0859
FICON Express LX                           2319
FICON Express SX                           2320
FICON Express2 LX                          3319
FICON Express2 SX                          3320
17 inch panel display                      6092
21 inch panel display                      6093
 
 
NOTES:
1. Memory will NOT carry forward
2. Support elements will NOT carry forward
3. FICON Express is supported on System z10 EC as carry forward only
4. OSA-Express is NOT supported on System z10 EC
 
Link                                                      Maximum
Type   Name                        Communication Use      Links
-----  -------------------------   --------------------   -------
IC     Internal Coupling channel   Internal                32
 
ICB-4  Integrated Cluster Bus-4    z10 EC, z9 EC, z9 BC,   16
#3393                              z990, z890
 
ISC-3  InterSystem Channel-3       z10 EC, z9 EC, z9 BC,   48
#0217, #0218, #0219                z990, z890
 
IFB    12x IB-SDR or DDR           z10 EC to z10 EC (DDR)  32
                                   z10 EC to z9  (SDR)
 
  • The maximum number of Coupling Links combined cannot exceed 64 per server (ICs, ICB-4s, active ISC-3 links, and IFBs)
  • For each MBA fanout installed for ICB-4s, the number of possible HCA fanouts for coupling is reduced by one
  • An ISC-3 feature on a z10 EC can be connected to a System z9 or zSeries server in peer mode (CHPID type CFP) operating exclusively at 2 Gbps. Compatibility mode is not supported.

There are 28 I/O slots in one I/O cage, up to three I/O cages, for a server total of 84 I/O slots. Up to four Logical Channel SubSystems (LCSSs) are supported with a maximum of 256 Channel Path IDentifiers (CHPIDs) per LCSS and per operating system image, and up to 1024 CHPIDs per server.
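The capacity arithmetic in the paragraph above can be restated compactly; a minimal sketch (illustrative only, not configuration code):

```python
# Capacity arithmetic for the z10 EC, per the paragraph above.
SLOTS_PER_CAGE = 28     # I/O slots per I/O cage
MAX_CAGES = 3           # up to three I/O cages per server
MAX_LCSS = 4            # Logical Channel SubSystems per server
CHPIDS_PER_LCSS = 256   # CHPIDs per LCSS (and per operating system image)

total_io_slots = SLOTS_PER_CAGE * MAX_CAGES   # 84 I/O slots per server
total_chpids = MAX_LCSS * CHPIDS_PER_LCSS     # 1024 CHPIDs per server
print(total_io_slots, total_chpids)           # 84 1024
```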

                  - - - - - Per Server - - - - -
                  Minimum  Maximum    Maximum     Increments   Purchase
Feature Name      features features connections   per feature increments
----------------- -------- -------- -----------   ----------- ----------
16-port ESCON      0(1)     69       1024          16          4
#2323, #2324                         channels      channels    channels
                                                   1 reserved
                                                   as a spare
 
FICON Express4     0(1)     84       336           4           4
#3321, #3322, #3324                  channels      channels    channels
 
FICON Express2(5)  0(1)     84       336           4           4
#3319, #3320                         channels      channels    channels
 
FICON Express(5)   0(1)     60       120           2           2
#2319, #2320                         channels      channels    channels
 
ICB-4 link(3)      0(1)      8       16 links(2)      N/A      1 link
#3393
 
ISC-3              0(1)     12       48 links(2)   4 links     1 link
#0217, #0218, #0219
 
12x IB-DDR(3)      0(1)     16       32 ports(2)   2 ports     2 ports
IFB #0163
 
OSA-Express3       0        24       48 ports      2           2 ports
#3370
 
OSA-Express2       0        24       48 ports      2           2 ports
#3364, #3365, #3366
 
OSA-Express2       0        24       48 ports      1           1 port
#3368
 
Crypto Express2    0        8        16            2           2 PCI-X
#0863(4)                             PCI-X         PCI-X        adapters
                                     adapters      adapters
 

Note: (1) Minimum of one I/O feature (ESCON or FICON) or one Coupling Link (ICB-4, ISC-3, IFB) required.

Note: (2) Maximum number of Coupling Links combined cannot exceed 64 per server. (ICs, ICB-4s, active ISC-3 links, and IFBs).

Note: (3) ICB-4s and 12x IB-DDRs are not included in the maximum feature count for I/O slots but are included in the CHPID count.

Note: (4) An initial order of Crypto Express2 is 4 PCI-X adapters (two features). Each PCI-X adapter can be configured as either a coprocessor or an accelerator.

Note: (5) Can be carried forward on an upgrade; cannot be ordered.
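The per-type maxima in the table and the combined 64-link rule in note (2) can be checked together. A minimal sketch; the function name and argument order are illustrative, not from the source:

```python
def coupling_links_valid(ic: int, icb4: int, isc3: int, ifb: int) -> bool:
    """Check a proposed z10 EC coupling-link mix against the stated limits:
    per-type maxima from the tables above (IC <= 32, ICB-4 <= 16,
    ISC-3 <= 48, IFB <= 32 ports/links) plus the combined cap of 64
    per server (ICs, ICB-4s, active ISC-3 links, and IFBs) from note (2)."""
    per_type_ok = (0 <= ic <= 32 and 0 <= icb4 <= 16
                   and 0 <= isc3 <= 48 and 0 <= ifb <= 32)
    return per_type_ok and (ic + icb4 + isc3 + ifb) <= 64

print(coupling_links_valid(ic=16, icb4=8, isc3=24, ifb=16))  # True  (64 total)
print(coupling_links_valid(ic=32, icb4=16, isc3=16, ifb=8))  # False (72 total)
```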

Feature descriptions

(#6539) 1-Way Processor CP4

(No Longer Available as of September 10, 2013)

(#6540) 2-Way Processor CP4

(No Longer Available as of September 10, 2013)

(#6541) 3-Way Processor CP4

(No Longer Available as of September 10, 2013)

(#6542) 4-Way Processor CP4

(No Longer Available as of September 10, 2013)

(#6543) 5-Way Processor CP4

(No Longer Available as of September 10, 2013)

(#6601) 1-Way Processor CP6

(No Longer Available as of September 10, 2013)

(#6602) 2-Way Processor CP6

(No Longer Available as of September 10, 2013)

(#6705) 5-Way Processor CP7

(No Longer Available as of September 10, 2013)

(#6712) 12-Way Processor CP7

(No Longer Available as of September 10, 2013)

(#6713) 13-Way Processor CP7

(No Longer Available as of September 10, 2013)

(#0084) Hardware Management Console with dual Ethernet

(No Longer Available as of November 9, 2010)

The HMC is a workstation designed to provide a single point of control and a single system image for managing local or remote hardware elements. Connectivity is supplied using an Ethernet Local Area Network (LAN) devoted exclusively to accessing the supported local and remote servers. The HMC is designed to support, exclusively, the HMC application. The HMC is supplied with two Ethernet ports capable of operating at 10, 100, or 1000 Mbps. Included are one mouse, one keyboard, a selectable flat-panel display, and a DVD-RAM drive used to install Licensed Internal Code (LIC).

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: None.
  • Corequisites: None
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The HMC is for the exclusive use of the HMC application. Customer applications cannot reside on the HMC. The ESCON Director and Sysplex Timer(R) applications cannot reside on the HMC. TCP/IP is the only supported communication protocol. The HMC supports z10 ECs. It can also be used to support z9 EC, z9 BC, z990, z890, z900, and z800 servers.
  • Field Installable: Yes. Parts removed as a result of feature conversions become the property of IBM.
  • Cable Order: Cables are shipped with the HMC. The Ethernet cables are Category 5 Unshielded Twisted Pair (UTP) with an RJ-45 connector on each end.
(#0089) Ethernet switch

(No Longer Available as of June 30, 2012)

An Ethernet switch is used to manage the Ethernet connection between Support Elements (SEs) and Hardware Management Consoles (HMCs). With the Virtual Local Area Network (VLAN) capability offered on the z10 EC, an Ethernet switch is no longer required. This optional feature is available for use when you have more than one HMC in the same ring. The switch is a 16-port, standalone, unmanaged Ethernet switch capable of operating at 10 or 100 Mbps.

  • Minimum: None
  • Maximum: Ten (10).
  • Prerequisites: None
  • Corequisites: None
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None
  • Field Installable: Yes. Parts removed as a result of feature conversions become the property of IBM.
  • Cable Order: Cables are a customer responsibility.
(#0090) HMC with dual Ethernet

(No Longer Available as of June 30, 2012)

The Hardware Management Console (HMC) is a workstation designed to provide a single point of control and single system image for managing local or remote hardware elements. Connectivity is supplied using an Ethernet Local Area Network (LAN) devoted exclusively to accessing the supported local and remote servers. The HMC is designed to support, exclusively, the HMC application.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The HMC is for the exclusive use of the HMC application. Customer applications cannot reside on the HMC. The ESCON Director and Sysplex Timer applications cannot reside on the HMC. TCP/IP is the only supported communication protocol. The HMC supports z10 ECs. It can also be used to support z10 BC, z9 EC, z9 BC, z990, z890, z900, and z800 servers.
  • Field Installable: Yes. Parts removed as a result of feature conversions become the property of IBM.
  • Cable Order: Cables are shipped with the HMC. The Ethernet cables are Category 5 Unshielded Twisted Pair (UTP) with an RJ-45 connector on each end.
(#0116) 1 CPE Capacity Unit

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of CPE Capacity Units purchased in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0117) 100 CPE Capacity Unit

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of CPE Capacity Units purchased in a given pre-paid Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: # 0116
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0118) 10000 CPE Capacity Unit

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of CPE Capacity Units purchased in a given pre-paid Capacity for Planned Event record divided by 10,000.

  • Minimum: None.
  • Maximum: 250
  • Prerequisites: # 0117
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
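Reading the three descriptions together, features #0116, #0117, and #0118 appear to decompose the quantity 3 days x CPE Capacity Units into units, hundreds, and ten-thousands (remainder and quotient by 100 and by 10,000). A sketch under that interpretation; the breakdown itself is an assumption, not an algorithm stated in the source:

```python
def cpe_feature_quantities(capacity_units: int) -> dict:
    """Split 3 x capacity_units into the three pricing features, per the
    remainder/quotient wording above (an interpretive sketch).
    Note the mod operations keep #0116 and #0117 within their stated
    maximum of 99."""
    total = 3 * capacity_units
    return {
        "#0118 (10000s)": total // 10_000,
        "#0117 (100s)": (total % 10_000) // 100,
        "#0116 (1s)": total % 100,
    }

print(cpe_feature_quantities(12345))
# {'#0118 (10000s)': 3, '#0117 (100s)': 70, '#0116 (1s)': 35}
```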
(#0119) 1 CPE Capacity Unit-IFL

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary Integrated Facility for Linux (IFL) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0120) 100 CPE Capacity Unit-IFL

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary Integrated Facility for Linux (IFL) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0119
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0121) 1 CPE Capacity Unit-ICF

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary Internal Coupling Facility (ICF) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0122) 100 CPE Capacity Unit-ICF

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary Internal Coupling Facility (ICF) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0121
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0123) 1 CPE Capacity Unit-zAAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary System z Application Assist Processor (zAAP) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0124) 100 CPE Capacity Unit-zAAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary System z Application Assist Processor (zAAP) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0123
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0125) 1 CPE Capacity Unit-zIIP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary System z Integrated Information Processor (zIIP) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0126) 100 CPE Capacity Unit-zIIP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary System z Integrated Information Processor (zIIP) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0125
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0127) 1 CPE Capacity Unit-SAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary System Assist Processor (SAP) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0128) 100 CPE Capacity Unit-SAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary System Assist Processor (SAP) features in a given Capacity for Planned Event record, divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0127
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0168) HCA2-O LR fanout card for Long Reach 1x InfiniBand

(No Longer Available as of June 30, 2012)

Long Reach 1x InfiniBand coupling links utilize the Host Channel Adapter2 Optical Long Reach (HCA2-O LR) fanout card. This fanout is designed to support single data rate (SDR) at a 2.5 Gbps link data rate (1x IB-SDR) or double data rate (DDR) at 5 Gbps (1x IB-DDR). The speed is auto-negotiated and is determined by the capability of the Dense Wavelength Division Multiplexer (DWDM) to which the link is attached. The DWDM vendor must be qualified by System z. An unrepeated distance of 10 km (6.2 miles) is supported. Greater distances are supported when attached to a System z-qualified optical networking solution.

Note: A link data rate of 2.5 Gbps or 5 Gbps does not represent the actual performance of the link.

The HCA2-O LR fanout card has two ports and resides in the processor nest on the front of the book in the CPC cage. The two ports exit the fanout card using LC Duplex connectors (same connector used with ISC-3) and support 9 micron single mode fiber optic cables. These fiber optic cables and connectors are industry standard and are a customer responsibility.

Long Reach 1x InfiniBand coupling links are designed to satisfy extended distance requirements, and to facilitate a migration from ISC-3 coupling links to InfiniBand coupling links.

  • Minimum: None. Order increment is two ports/links (one HCA2-O LR fanout card).
  • Maximum: Sixteen (16) features and 32 ports/links.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The maximum number of IFB links (whether HCA2-O #0163 or HCA2-O LR #0168), alone or in combination with ICB-4, is 32 ports/links per system.
(#0217, #0218, #0219) InterSystem Channel-3 (ISC-3)

(No Longer Available as of June 30, 2012)

The InterSystem Channel-3 (ISC-3) feature is a member of the family of Coupling Link options. An ISC-3 feature can have up to four links per feature. The ISC-3 feature is used by coupled servers to pass information back and forth over 2 Gigabits per second (Gbps) links in a Parallel Sysplex environment. The z10 EC ISC-3 feature is compatible with ISC-3 features on System z9 and zSeries servers. While ICB-4 is used for short distances between servers (7 meters, or 23 feet), ISC-3 supports an unrepeated distance of up to 10 kilometers (6.2 miles) between servers when operating at 2 Gbps. Extended distance for ISC-3 is available through RPQ. ISC-3 (CHPID type CFP - peer) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The ISC-3 feature is composed of a Mother card (ISC-M #0217) and two Daughter cards (ISC-D #0218). Each daughter card has two ports, or links, for a total of four links per feature. Each link is activated using Licensed Internal Code, Configuration Control (LICCC) with ISC links #0219. The ISC-D cannot be ordered directly; when the quantity of ISC links (#0219) is selected, the appropriate number of ISC-Ms and ISC-Ds is selected by the configuration tool. Additional ISC-Ms may be ordered, up to the number of ISC-Ds required or twelve (12), whichever is smaller. The link is defined in peer mode (CHPID type CFP) only. Compatibility mode is not supported.

Each link utilizes a Long Wavelength (LX) laser as the optical transceiver, and supports use of a 9 micron single mode fiber optic cable terminated with an industry standard small form factor LC Duplex connector. The ISC-3 feature accommodates reuse (at reduced distances) of 50 micron multimode fiber optic cables when the link data rate does not exceed 1 Gbps. A pair of Mode Conditioning Patch cables are then required, one for each end of the link.

  • Minimum: None. Links are ordered in increments of one. It is recommended that initial orders include two links. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4 or ISC-3) must be present.
  • Maximum: 12 features, 48 links (4 links per feature).
  • Prerequisites: None.
  • Corequisites: ECF (standard) for Sysplex Timer attachment or STP enablement (#1021).
  • Compatibility Conflicts: None.
  • Customer Setup: No.
  • Limitations:
    • The maximum number of Coupling Links combined (ICs, ICB-4s, active ISC-3 links, and IFBs) cannot exceed 64 per server.
    • The unrepeated distance between 2 Gbps ISC-3 links is limited to 10 kilometers (6.2 miles). If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required.
(#0229) ICB-4 cable for z10 EC to System z

(No Longer Available as of June 30, 2012)

The Integrated Cluster Bus-4 (ICB-4) cable is a unique 10 meter (33 feet) copper cable to be used with ICB-4 links (#3393) only when the target servers are System z9.

ICB-4 cables will be automatically ordered to match the quantity of ICB-4 links (#3393) on order. The quantity of ICB-4 cables can be reduced, but cannot exceed the quantity of ICB-4 links on order.

Note: When ordering ICB cables, consider the total number of servers and ICB features to be ordered and enabled when calculating the number of cables required. For example, if two servers with four features are being ordered and enabled, only two cables are required. Proper planning prevents over-ordering cables.

  • Limitations: While the ICB-4 cable is 10 meters in length, 3 meters (10 feet) is used for internal routing and strain relief - 7 meters (23 feet) is available for server-to-server connection.
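
The planning note above reduces to simple arithmetic: each ICB-4 cable joins one port on each of two servers, so the cables needed equal the number of server-to-server connections, not the number of features ordered. A minimal sketch, with a hypothetical function name and input shape (not part of any IBM configurator):

```python
def icb4_cables_needed(ports_per_server: list[int]) -> int:
    """Illustrative cable-planning sketch: one ICB-4 cable per
    point-to-point connection, i.e. total enabled ports divided by two."""
    total_ports = sum(ports_per_server)
    if total_ports % 2:
        raise ValueError("ICB-4 ports must pair up across servers")
    return total_ports // 2
```

With the document's example of two servers and four features (two ports on each server), the result is two cables.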
(#0230) ICB-4 z10 EC cable

(No Longer Available as of June 30, 2012)

The Integrated Cluster Bus-4 (ICB-4) z10 EC cable is a unique 10 meter (33 feet) copper cable to be used with ICB-4 links (#3393) when the target servers are z10 EC.

ICB-4 z10 EC cables will be automatically ordered to match the quantity of ICB-4 links (#3393) on order. The quantity of ICB-4 cables can be reduced, but cannot exceed the quantity of ICB-4 links on order.

Note: When ordering ICB cables, consider the total number of z10 EC servers and ICB features to be ordered and enabled when calculating the number of cables required. For example, if two servers with four features are being ordered and enabled, only two cables are required. Proper planning prevents over-ordering cables.

  • Limitations: While the ICB-4 cable is 10 meters in length, 3 meters (10 feet) is used for internal routing and strain relief - 7 meters (23 feet) is available for server-to-server connection.
(#0251) ISAOPT enablement for machine types 2097 (z10 EC) and 2098 (z10 BC)

(No Longer Available as of October 12, 2010)

This feature cannot be ordered. When IBM zEnterprise BladeCenter Extension (zBX) model 001 is ordered or upgraded, the configurator tool selects a quantity of this feature. The quantity of the feature is equal to the quantity of blades selected for the attached zBX system at the time of the configuration.

  • Minimum: None.
  • Maximum: Fifty-six (56).
  • Prerequisites: None (see Limitations).
  • Corequisite: None (see Limitations).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: This feature is designed to work with FC #0610, IBM Smart Analytics Optimizer blade, on machine type 2458-001.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0839) TKE workstation

(No Longer Available as of December 31, 2009)

This is an optional feature. The Trusted Key Entry (TKE) workstation is a combination of hardware and software, network-connected to the server, and designed to provide a security-rich, flexible method for master and operational key entry as well as local and remote management of the cryptographic coprocessor features. Crypto Express2 default configuration on the z10 EC is a coprocessor. This optional feature provides basic key management -- key identification, exchange, separation, update, backup, as well as security administration. The TKE workstation has one Ethernet port and supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 and 100 Mbps.

The feature shipment includes a system unit, mouse, keyboard, 17-inch (431.8 mm) flat panel display, DVD-RAM drive to install Licensed Internal Code (LIC), and a PCI-X Cryptographic Coprocessor. The workstation has one Ethernet port and a serial port for attaching a Smart Card Reader.

If Trusted Key Entry is required on z10 EC, then a TKE workstation must be used. TKE workstations can also be used to control the z9 EC, z9 BC, z990, and z890 servers.

  • Minimum: None.
  • Maximum: Three (3).
  • Prerequisites: CP Assist for Cryptographic Function (#3863) and Crypto Express2 feature (#0863).
  • Corequisite: TKE 5.2 LIC (#0857) loaded on TKE workstation prior to shipment.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: LAN cabling is a customer responsibility. A Category 5 Unshielded Twisted Pair (UTP) cable terminated with RJ-45 connector is required.
(#0840) TKE Workstation

(No Longer Available as of November 9, 2010)

This is a chargeable optional feature. The Trusted Key Entry (TKE) workstation is a combination of hardware and software, network-connected to the server, and designed to provide a security-rich, flexible method for master and operational key entry as well as local and remote management of the cryptographic coprocessor features. Crypto Express2 or Crypto Express3 default configuration on the z10 EC and z10 BC is a coprocessor. This optional feature provides basic key management such as key identification, exchange, separation, update, and backup, as well as security administration. The TKE workstation has one Ethernet port and supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 and 100 Mbps.

The feature shipment includes a system unit, mouse, keyboard, flat panel display, DVD-RAM drive to install Licensed Internal Code (LIC), and a PCI-X Cryptographic Coprocessor. The workstation has one Ethernet port and a USB port for attaching a Smart Card Reader.

If Trusted Key Entry is required on z10 EC and z10 BC, then a TKE workstation must be used. TKE workstations can also be used to control the z9 BC, z9 EC, z10 BC and z10 EC servers.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: CP Assist for Cryptographic Function (#3863) and any of the following: Crypto Express2 feature (#0863), Crypto Express3 feature (#0864), Crypto Express2-1P (#0870), Crypto Express3-1P (#0871).
  • Corequisite: TKE 6.0 LIC (#0858) loaded on TKE workstation prior to shipment.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: LAN cabling is a customer responsibility. A Category 5 Unshielded Twisted Pair (UTP) cable terminated with RJ-45 connector is required.
(#0854) TKE 5.3 LIC

(No Longer Available as of December 31, 2009)

The Trusted Key Entry (TKE) 5.3 level of Licensed Internal Code (LIC) is installed in a TKE workstation. TKE 5.3 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation (#0839) is ordered. The TKE 5.3 LIC includes support for the Smart Card Reader (#0885).

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: TKE workstation (#0859, #0839).
  • Corequisites: CP Assist for Cryptographic Function (CPACF) (#3863) and Crypto Express2 (#0863).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0857) TKE 5.2 LIC

(No Longer Available as of November 20, 2009)

The Trusted Key Entry (TKE) 5.2 level of Licensed Internal Code (LIC) is installed in a TKE workstation (#0859, #0839). TKE 5.2 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation is ordered. The TKE 5.2 LIC includes support for the Smart Card Reader.

  • Minimum: None.
  • Maximum: Three (3).
  • Prerequisites: TKE workstation (#0859, #0839).
  • Corequisites:
    • For z10 EC, CP Assist for Cryptographic Function (CPACF) (#3863) and Crypto Express2 (#0863).
    • For z9 EC, z9 BC, CPACF (#3863) and Crypto Express2 (#0863).
    • For z990 and z890 (2084, 2086), CPACF (#3863) and PCIXCC (#0868) or Crypto Express2 (#0863).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0858) TKE 6.0 Licensed Internal Code (LIC)

(No Longer Available as of June 30, 2013)

The Trusted Key Entry (TKE) 6.0 level of Licensed Internal Code (LIC) is installed in a TKE workstation (#0839 and #0840). TKE 6.0 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation is ordered. The TKE 6.0 LIC includes support for the Smart Card Reader.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: TKE workstation (#0839, #0840).
  • Corequisites: For z9 BC and z10 BC, CP Assist for Cryptographic Function (CPACF) (#3863), including any of the following: Crypto Express2 (#0863), Crypto Express2-1P (#0870), Crypto Express3 (#0864), Crypto Express3-1P (#0871). For z9 EC and z10 EC, CP Assist for Cryptographic Function (CPACF) (#3863), including any of the following: Crypto Express2 (#0863), Crypto Express3 (#0864).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0863) Crypto Express2

(No Longer Available as of December 31, 2009)

The Crypto Express2 feature is designed to satisfy high-end server security requirements. It contains two PCI-X adapters which are configured independently, either as a coprocessor supporting secure key transactions or as an accelerator for Secure Sockets Layer (SSL) acceleration.

The Crypto Express2 feature is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification, and supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as a coprocessor).

  • Minimum: None or two. An initial order is two features.
  • Maximum: 8 features (16 PCI-X adapters, two PCI-X adapters per feature).
  • Prerequisites: CP Assist for Cryptographic Function (CPACF) (#3863).
  • Corequisites: TKE workstation (#0859, #0839) if you require security-rich, flexible key entry or remote key management if not using Integrated Cryptographic Service Facility (ICSF) panels.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: None. Internal connection. No external cables.
(#0864) Crypto Express3

(No Longer Available as of June 30, 2012)

The Crypto Express3 feature is designed to satisfy high-end server security requirements. It contains two PCI-E adapters which are configured independently, either as a coprocessor supporting secure key transactions or as an accelerator for Secure Sockets Layer (SSL) acceleration.

The Crypto Express3 feature is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification, and supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as a coprocessor).

  • Minimum: None or two. An initial order is two features.
  • Maximum: 8 features (16 PCI-E adapters, two PCI-E adapters per feature).
  • Prerequisites: CP Assist for Cryptographic Function (CPACF) (#3863).
  • Corequisites: TKE workstation (#0859, #0839, #0840) if you require security-rich, flexible key entry or remote key management if not using Integrated Cryptographic Service Facility (ICSF) panels.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: None. Internal connection. No external cables.
(#0867) TKE 7.1 LIC

(No Longer Available as of June 30, 2013)

The Trusted Key Entry (TKE) 7.1 level of Licensed Internal Code (LIC) is installed in a TKE workstation (#0841). TKE 7.1 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation is ordered. The TKE 7.1 LIC includes support for the Smart Card Reader (#0885).

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: TKE workstation (#0841).
  • Corequisites: CP Assist for Cryptographic Function (#3863); Crypto Express3 (#0864).
  • Compatibility Conflicts: TKE workstations with TKE 7.1 LIC can be used to control z196, z114, z10 EC, and z10 BC servers.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0884) TKE additional smart cards

(No Longer Available as of June 30, 2012)

These are Java**-based smart cards which provide a highly efficient cryptographic and data management application built-in to read-only memory for storage of keys, certificates, passwords, applications, and data. The TKE blank smart cards are compliant with FIPS 140-2 Level 2.

  • Minimum: None. Order increment is one. When one is ordered, a quantity of 10 smart cards is shipped.
  • Maximum: 99 (990 blank Smart Cards).
  • Prerequisites: CP Assist for Cryptographic Function (#3863), Crypto Express2 (#0863).
  • Corequisites: TKE workstation with 5.3 level of LIC (#0854) for secure key parts entry and cryptographic hardware management or ISPF panels for clear key entry and cryptographic hardware management, and TKE Smart Card Reader (#0885).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
  • Cable Order: Not applicable.
(#0885) TKE Smart Card Reader

(No Longer Available as of June 30, 2012)

The TKE Smart Card Reader feature supports the use of smart cards, which resemble a credit card in size and shape, but contain an embedded microprocessor, and associated memory for data storage. Access to and the use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).

The TKE LIC allows key parts to be stored on diskettes, paper, or optionally smart cards; a TKE authority key to be used from a diskette or optionally a smart card; and logon to the Cryptographic Coprocessor using a passphrase or optionally a logon key pair. One (1) feature includes two Smart Card Readers, two cables to connect to the TKE workstation, and 20 smart cards.

  • Minimum: None. Order increment is one. Included are two Smart Card Readers and 20 smart cards.
  • Maximum: Ten.
  • Prerequisites: CP Assist for Cryptographic Function (#3863), Crypto Express2 feature (#0863).
  • Corequisites: TKE workstation with 5.3 level of LIC (#0854) for secure key parts entry and Crypto hardware management or ISPF panels for clear key entry and Crypto hardware management.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
  • Cable Order: None. External cables to connect two Smart Card Readers to the TKE workstation are shipped with the feature.
(#0887) TKE Smart Card Reader

(No Longer Available as of October 1, 2009)

The TKE Smart Card Reader feature supports the use of smart cards, which resemble a credit card in size and shape, but contain an embedded microprocessor, and associated memory for data storage. Access to and the use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).

The TKE LIC allows key parts to be stored on diskettes, paper, or optionally smart cards; a TKE authority key to be used from a diskette or optionally a smart card; and logon to the Cryptographic Coprocessor using a passphrase or optionally a logon key pair. One (1) feature includes two Smart Card Readers, two cables to connect to the TKE workstation, and 20 smart cards.

  • Minimum: None.
  • Maximum: Three.
  • Prerequisites: CP Assist for Cryptographic Function (#3863), Crypto Express2 feature (#0863).
  • Corequisites: TKE workstation with 5.2 level of LIC (#0857) for secure key parts entry and Crypto hardware management or ISPF panels for clear key entry and Crypto hardware management.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
  • Cable Order: None. External cables to connect two Smart Card Readers to the TKE workstation are shipped with the feature.
(#0888) TKE additional smart cards

(No Longer Available as of October 1, 2009)

These are Java**-based smart cards which provide a highly efficient cryptographic and data management application built-in to read-only memory for storage of keys, certificates, passwords, applications, and data. The TKE blank smart cards are compliant with FIPS 140-2 Level 2.

  • Minimum: None. Order increment is one. When one is ordered, a quantity of 10 smart cards is shipped.
  • Maximum: 99 (990 blank Smart Cards).
  • Prerequisites: CP Assist for Cryptographic Function (#3863), Crypto Express2 (#0863).
  • Corequisites: TKE workstation with 5.2 level of LIC (#0857) for secure key parts entry and cryptographic hardware management or ISPF panels for clear key entry and cryptographic hardware management, and TKE Smart Card Reader (#0887).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
  • Cable Order: Not applicable.
(#1750) Licensed Internal Code (LIC) ship using Net Indicator

(No Longer Available as of June 30, 2013)

This indicator flag is added to orders that are Licensed Internal Code (LIC) only and delivered by Web tools such as Customer Initiated Upgrade (CIU). There are no parts. The flag is generated by the system and not orderable.

(#1996) Preplanned memory

Preplanned memory features are used to build the physical infrastructure for Flexible memory or Plan Ahead memory. Each feature equates to 16 GB of physical memory.

  • Minimum number of features: None.
  • Maximum number of features: 94.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#1997) Preplanned memory activation

(No Longer Available as of June 30, 2013)

Preplanned memory activation features are required to activate the physical memory installed using feature #1996 into usable, logical memory. One feature #1997 is needed for each feature #1996.

  • Minimum number of features: None.
  • Maximum number of features: 94.
  • Prerequisites: Preplanned memory (#1996)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#2323) 16-port ESCON

(No Longer Available as of June 30, 2012)

The Enterprise Systems Connection (ESCON) channel supports the ESCON architecture and provides the capability to directly attach to ESCON-supported Input/Output (I/O) devices (storage, disk, printers, control units) in a switched point-to-point topology at unrepeated distances of up to 3 kilometers (1.86 miles) at a link data rate of 17 megabytes (MB) per second. The ESCON channel utilizes 62.5 micron multimode fiber optic cabling terminated with an MT-RJ connector. The high density ESCON feature has 16 ports or channels, 15 of which can be activated for customer use. One channel is always reserved as a spare, in the event of a failure of one of the other channels.

Feature 2323 cannot be ordered. The configuration tool selects the quantity of features based upon the order quantity of ESCON channels (#2324), distributing the channels across features for high availability. After the first pair, ESCON features are installed in increments of one.
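
The feature-selection rule above (15 usable channels per card, one spare, a pair of cards first, then increments of one) can be sketched as follows. This is an illustrative minimum-count sketch under those stated rules, not the actual configuration tool, which may add cards to spread channels for availability:

```python
import math

def escon_features(active_channels: int) -> int:
    """Illustrative sketch: minimum number of 16-port ESCON cards (#2323)
    needed for a given quantity of active channels (#2324). Each card
    offers 15 usable channels (one spare); the first installation is a
    pair of cards; channels are ordered in increments of four."""
    if active_channels == 0:
        return 0
    if active_channels % 4 or active_channels > 1024:
        raise ValueError("ESCON channels are ordered in increments of four, up to 1024")
    return max(2, math.ceil(active_channels / 15))
```

Note that the stated maximum of 1024 active channels yields 69 features, matching the feature maximum below.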

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 69 features. A maximum of 1024 active channels, 15 channels per feature.
  • Prerequisites: None.
  • Corequisites: ESCON channel (#2324).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The unrepeated distance between ESCON channels is limited to 3 kilometers (1.86 miles) using 62.5 micron multimode fiber optic cables. If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 62.5 micron multimode fiber optic cable terminated with an MT-RJ connector is required. In the event that the target or downstream device does not support an MT-RJ connector, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.
(#2324) ESCON channel port

(No Longer Available as of June 30, 2012)

ESCON channels are available on a channel (port) basis in increments of four. The channel quantity is selected and Licensed Internal Code, Configuration Control (LICCC) is shipped to activate the desired quantity of channels on the 16-port ESCON feature (#2323). Each channel utilizes a Light Emitting Diode (LED) as the optical transceiver, and supports use of a 62.5 micron multimode fiber optic cable terminated with a small form factor, industry-standard MT-RJ connector.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 1024 channels.
  • Prerequisites: None.
  • Corequisites: If a 62.5 multimode fiber optic cable terminated with an ESCON Duplex connector is being reused to connect this feature to a downstream device, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.
  • Compatibility Conflicts: The 16-port ESCON feature has a small form factor optical transceiver that supports an MT-RJ connector only. A multimode fiber optic cable with an ESCON Duplex connector is not supported with this feature.
  • Customer Setup: No.
  • Limitations: The unrepeated distance between ESCON channels is limited to 3 kilometers (1.86 miles) using 62.5 micron multimode fiber optic cables. If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 62.5 micron multimode fiber optic cable terminated with an MT-RJ connector is required. In the event that the target or downstream device does not support an MT-RJ connector, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.
(#3001) Balanced Power Plan Ahead

(No Longer Available as of June 30, 2012)

Phase currents are minimized when they are balanced among the three input phases. Balanced Power Plan Ahead is designed to allow you to order the full complement of bulk power regulators (BPRs) on any configuration, to help ensure that the configuration will be in a balanced power environment.

  • Minimum: None.
  • Maximum: One.
  • Prerequisites: None.
  • Corequisites: May require additional internal battery features (#3211) and line cords. Configuration tool will determine.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes. It is disruptive.
  • Cable Order: None.
(#3211) Internal Battery (IBF)

(No Longer Available as of June 30, 2012)

Internal battery backup feature. When selected, the actual number of IBFs will be determined based on the power requirements and model. The batteries are installed in pairs.

  • Minimum number of features: None.
  • Maximum number of features: Six. (Installed in pairs).
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
(#3321) FICON Express4 10KM LX

(No Longer Available as of October 27, 2009)

The FICON Express4 10KM LX (long wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a Storage Area Network (SAN). The FICON Express4 10KM LX feature supports an unrepeated distance of 10 kilometers (6.2 miles). Each of the four independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 10KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 10KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 10KM LX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 10KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 84 features, up to 336 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: If a 50/125 or 62.5/125 micrometer multimode fiber optic cable is being reused with the FICON Express4 10KM LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors/receptacles in the enterprise. When using MCP cables, the speed is limited to 1 Gbps.

    Note: The speed must be set to 1 Gbps in a switch. The channel and control unit do not have the capability to be manually set to a speed.

  • Compatibility Conflicts: The FICON Express4 10KM LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Set-up: No.
  • Limitations:
    • The FICON Express4 10KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 10KM LX channels is limited to 10 kilometers (6.2 miles).
    • IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3322) FICON Express4 SX

(No Longer Available as of October 27, 2009)

The FICON Express4 SX (short wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network. Each of the four independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 SX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 SX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 SX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 SX feature utilizes a short wavelength (SX) laser as the optical transceiver, and supports use of a 50/125 micrometer multimode fiber optic cable or a 62.5/125-micrometer multimode fiber optic cable terminated with an LC Duplex connector.

Note: IBM does not support a mix of 50 and 62.5 micron fiber optic cabling in the same link. SX may also be referred to as SW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 84 features, up to 336 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The FICON Express4 SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Set-up: No.
  • Limitations:
    • The FICON Express4 SX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 SX channels using multimode fiber optic cabling is as follows (all entries use an SX laser at 850 nm):

                      Fiber core      Fiber bandwidth
      Link data rate  in microns (u)  @ wavelength     Unrepeated distance
      --------------  --------------  ---------------  -----------------------
      4 Gbps          50 u            2000 MHz-km      270 meters (886 feet)
      4 Gbps          50 u             500 MHz-km      150 meters (492 feet)
      4 Gbps          62.5 u           200 MHz-km       70 meters (230 feet)
      4 Gbps          62.5 u           160 MHz-km       55 meters (180 feet)
      2 Gbps          50 u            2000 MHz-km      500 meters (1,640 feet)
      2 Gbps          50 u             500 MHz-km      300 meters (984 feet)
      2 Gbps          62.5 u           200 MHz-km      150 meters (492 feet)
      2 Gbps          62.5 u           160 MHz-km      120 meters (394 feet)
      1 Gbps          50 u            2000 MHz-km      860 meters (2,822 feet)
      1 Gbps          50 u             500 MHz-km      500 meters (1,640 feet)
      1 Gbps          62.5 u           200 MHz-km      300 meters (984 feet)
      1 Gbps          62.5 u           160 MHz-km      250 meters (820 feet)
    
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50/125 micrometer multimode fiber optic cable, or a 62.5/125 micrometer multimode fiber optic cable, terminated with an LC Duplex connector is required for connecting this feature to the selected device.
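The distance limits in the table above can be captured as a simple lookup keyed by link rate, fiber core size, and modal bandwidth. The sketch below is illustrative only (the function name and structure are our own); the values are the meter figures from the table:

```python
# Unrepeated distance limits (meters) for FICON Express4 SX multimode links,
# keyed by (link rate in Gbps, fiber core in microns, modal bandwidth in MHz-km).
# Values are taken directly from the table above.
UNREPEATED_M = {
    (4, 50.0, 2000): 270, (4, 50.0, 500): 150,
    (4, 62.5, 200): 70,   (4, 62.5, 160): 55,
    (2, 50.0, 2000): 500, (2, 50.0, 500): 300,
    (2, 62.5, 200): 150,  (2, 62.5, 160): 120,
    (1, 50.0, 2000): 860, (1, 50.0, 500): 500,
    (1, 62.5, 200): 300,  (1, 62.5, 160): 250,
}

def max_unrepeated_distance(gbps, core_microns, mhz_km):
    """Return the unrepeated distance limit in meters, or None if the
    combination is not listed in the planning table."""
    return UNREPEATED_M.get((gbps, core_microns, mhz_km))
```

For example, a 4 Gbps link over 50 micron fiber rated 2000 MHz-km is limited to 270 meters unrepeated.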
(#3324) FICON Express4 4KM LX

(No Longer Available as of October 27, 2009)

The FICON Express4 4KM LX (long wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a Storage Area Network (SAN). The FICON Express4 4KM feature supports an unrepeated distance of 4 kilometers (2.5 miles). Each of the four independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 4KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 4KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 4KM LX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 4KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 84 features, up to 336 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: If a 50/125 or 62.5/125 micrometer multimode fiber optic cable is being reused with the FICON Express4 4KM LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors/receptacles in the enterprise. When using MCP cables, the speed is limited to 1 Gbps.

    Note: The speed must be set to 1 Gbps in the switch; the channel and the control unit cannot be manually set to a specific speed.

  • Compatibility Conflicts: The FICON Express4 4KM LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX). LX may also be referred to as LW by vendors.
  • Customer Set-up: No.
  • Limitations:
    • The FICON Express4 4KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 4KM LX channels is limited to 4 kilometers (2.5 miles). If greater distances are desired, the FICON Express4 10KM LX feature (#3321) should be ordered.
    • IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3325) FICON Express8 10KM LX

(No Longer Available as of June 30, 2012)

The FICON Express8 10KM LX (long wavelength) feature conforms to the Fibre connection (FICON) architecture, the High Performance FICON for System z (zHPF) architecture, and the Fibre Channel Protocol (FCP) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network (SAN).

Each of the four independent ports/channels is capable of 2 gigabits per second (Gbps), 4 Gbps, or 8 Gbps depending upon the capability of the attached switch or device. The link speed is autonegotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express8 10KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express8 10KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

Each FICON Express8 10KM LX channel can be defined independently, for connectivity to servers, switches, directors, disks, tapes, and printers as:

  1. Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) - (CHPID type FC); native FICON and zHPF protocols are supported simultaneously.
  2. Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices directly or through Fibre Channel switches or directors.

The FICON Express8 10KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB, IFB or ISC-3) must be present in a server.
  • Maximum: 84 features; can be any combination of FICON Express8, FICON Express4, FICON Express2, and FICON Express features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Set-up: No.
  • Limitations:
    • The FICON Express8 10KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The FICON Express8 10KM LX feature does not support autonegotiation to 1 Gbps.
    • The FICON Express8 10 KM LX feature is designed to support distances up to 10 kilometers (6.2 miles) over 9 micron single mode fiber optic cabling without performance degradation. To avoid performance degradation at extended distance, FICON switches or directors (for buffer credit provision) or Dense Wavelength Division Multiplexers (for buffer credit simulation) may be required.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3326) FICON Express8 SX

(No Longer Available as of June 30, 2012)

The FICON Express8 SX (short wavelength) feature conforms to the Fibre connection (FICON) architecture, the High Performance FICON for System z (zHPF) architecture, and the Fibre Channel Protocol (FCP) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network (SAN).

Each of the four independent ports/channels is capable of 2 gigabits per second (Gbps), 4 Gbps, or 8 Gbps depending upon the capability of the attached switch or device. The link speed is autonegotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels. FICON Express8 SX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express8 SX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

Each FICON Express8 SX channel can be defined independently, for connectivity to servers, switches, directors, disks, tapes, and printers as:

  1. Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) - (CHPID type FC); native FICON and zHPF protocols are supported simultaneously.
  2. Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices directly or through Fibre Channel switches or directors.

The FICON Express8 SX feature utilizes a short wavelength (SX) laser as the optical transceiver and supports use of a 50/125 micrometer multimode fiber optic cable or a 62.5/125 micrometer multimode fiber optic cable terminated with an LC Duplex connector.

Note: IBM does not support a mix of 50 and 62.5 micron fiber optic cabling in the same link. SX may also be referred to as SW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB, PSIFB or ISC-3) must be present in a server.
  • Maximum: 84 features; can be any combination of FICON Express8, FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known. Ensure the attaching/downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Set-up: No.
  • Limitations:
    • The FICON Express8 SX feature does not support FICON Bridge (CHPID type FCV).
    • The FICON Express8 SX feature does not support autonegotiation to 1 Gbps.
    • FICON Express8 is designed to support distances up to 10 kilometers (6.2 miles) without performance degradation. To avoid performance degradation at extended distance, FICON switches or directors (for buffer credit provision) or Dense Wavelength Division Multiplexers (for buffer credit simulation) may be required.
    • For unrepeated distances for FICON Express8 SX, refer to System z Planning for Fiber Optic Links (GA23-0367), available in the Library section of Resource Link.

      www.ibm.com/servers/resourcelink

  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50/125 micrometer multimode fiber optic cable, or a 62.5/125 micrometer multimode fiber optic cable, terminated with an LC Duplex connector is required for connecting this feature to the selected device.

Open Systems Adapter (OSA) family of LAN adapters

All of the OSA features support the Queued Direct Input/Output (QDIO) architecture, allowing an OSA feature to directly communicate with the server's communications program through the use of data queues in memory. QDIO is designed to eliminate the use of channel programs and Channel Control Words (CCWs), which can help reduce host interrupts and accelerate TCP/IP packet transmission.

There are multiple Channel Path Identifier (CHPID) types that may be supported by an OSA port, independently. Refer to each of the features for the CHPID types supported.

  • CHPID type OSC - OSA-Integrated Console Controller (OSA-ICC) supporting TN3270E and non-SNA DFT 3270 emulation.
  • CHPID type OSD - Queued Direct Input/Output (QDIO), supporting Transmission Control Protocol/Internet Protocol (TCP/IP) when in Layer 3 mode. Use TN3270E or Enterprise Extender for SNA traffic. When in Layer 2 mode the port is protocol-independent.
  • CHPID type OSE - Non-QDIO, supporting TCP/IP and SNA/APPN/HPR.
  • CHPID type OSN - OSA for NCP supporting LPAR-to-LPAR communication to access IBM Communication Controller for Linux on zSeries.
(#3362) OSA-Express3 Gigabit Ethernet LX

(No Longer Available as of June 30, 2012)

The OSA-Express3 Gigabit Ethernet (GbE) long wavelength (LX) feature has four independent ports. There are two ports per PCI-E adapter. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE LX supports CHPID type OSD. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Maximum ports: 96 ports.
  • Prerequisites: None.
  • Corequisites: If a 50 or 62.5 micron multimode fiber optic cable is being reused with the OSA-Express3 GbE LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors and receptacles in the enterprise.
  • Compatibility Conflicts: The OSA-Express3 GbE LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required for connecting each port on this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables are required, one for each end of the link.
(#3363) OSA-Express3 GbE SX

(No Longer Available as of June 30, 2012)

The OSA-Express3 Gigabit Ethernet (GbE) short wavelength (SX) feature has four independent ports. There are two ports per PCI-E adapter. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE SX supports CHPID type OSD. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Maximum ports: 96 ports.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express3 GbE SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or a 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting each port on this feature to the selected device.
(#3364) OSA-Express2 Gigabit Ethernet LX

(No Longer Available as of June 30, 2009)

The OSA-Express2 Gigabit Ethernet (GbE) long wavelength (LX) feature has two independent ports and is designed to deliver a line speed of 1 Gbps in each direction. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express2 GbE LX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express2 and OSA-Express3 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: If a 50 or 62.5 micron multimode fiber optic cable is being reused with the OSA-Express2 GbE LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors and receptacles in the enterprise.
  • Compatibility Conflicts: The OSA-Express2 GbE LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables are required, one for each end of the link.

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3365) OSA-Express2 GbE SX

(No Longer Available as of June 30, 2009)

The OSA-Express2 Gigabit Ethernet (GbE) short wavelength (SX) feature has two independent ports and is designed to deliver a line speed of 1 Gbps in each direction. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express2 GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express2 and OSA-Express3 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express2 GbE SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or a 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3366) OSA-Express2 1000BASE-T Ethernet

(No Longer Available as of December 31, 2009)

The OSA-Express2 1000BASE-T Ethernet feature has two independent ports. Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express2 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).
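The supported speed and duplex combinations, and the requirement that both ends of the link match on autonegotiation, can be sketched as a small validity check. This is an illustrative sketch (the function and names are our own, not part of the product):

```python
# Supported (speed in Mbps, duplex) settings for the 1000BASE-T feature,
# per the description above: 10 and 100 Mbps allow half or full duplex,
# while 1000 Mbps (1 Gbps) is full duplex only.
SUPPORTED_SETTINGS = {
    (10, "half"), (10, "full"),
    (100, "half"), (100, "full"),
    (1000, "full"),
}

def link_config_ok(speed_mbps, duplex, port_autoneg, peer_autoneg):
    """A link setup is valid only if both ends agree on autonegotiation
    (both on or both off) and the speed/duplex pair is supported."""
    if port_autoneg != peer_autoneg:
        return False
    return (speed_mbps, duplex) in SUPPORTED_SETTINGS
```

For example, 1 Gbps half duplex is rejected because the feature operates in full duplex mode only at that speed.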

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express2 and OSA-Express3 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: When the OSA-Express2 feature is set to autonegotiate, the target device must also be set to autonegotiate. Both ends must match (autonegotiate on or off).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. The OSA-Express2 1000BASE-T Ethernet feature supports use of an EIA/TIA Category 5 Unshielded Twisted Pair (UTP) cable terminated with an RJ-45 connector with a maximum length of 100 meters (328 feet).

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3367) OSA-Express3 1000BASE-T Ethernet

(No Longer Available as of June 30, 2012)

The OSA-Express3 1000BASE-T Ethernet feature has four ports. Two ports reside on a PCI-E adapter and share a channel path identifier (CHPID). There are two PCI-E adapters per feature. Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).

  • Minimum: None.
  • Maximum: 24 features, 96 ports (four ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: When the OSA-Express3 feature is set to autonegotiate, the target device must also be set to autonegotiate. Both ends must match (autonegotiate on or off).
  • Customer Setup: No.
  • Limitations: For CHPID type OSC, supporting TN3270E and non-SNA DFT to IPL CPCs and LPARs, note that only one port per PCI-E adapter is available for use. CHPID type OSC does not recognize the second port on a PCI-E adapter. Thus, if both CHPIDs on an OSA-Express3 feature are defined as CHPID type OSC, only two of the four ports are recognized.

    For CHPID type OSN, supporting the Network Control Program (NCP) and the channel data link control (CDLC) protocol, note that none of the ports are used for external communication. OSA-Express for NCP does not use ports; all communication is LPAR-to-LPAR. Thus, if both CHPIDs on an OSA-Express3 feature are defined as CHPID type OSN, none of the four ports are used for external communication.

  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. The OSA-Express3 1000BASE-T Ethernet feature supports use of an EIA/TIA Category 5 Unshielded Twisted Pair (UTP) cable terminated with an RJ-45 connector with a maximum length of 100 meters (328 feet).

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3368) OSA-Express2 10 Gigabit Ethernet LR

(No Longer Available as of June 30, 2008)

The OSA-Express2 10 Gigabit Ethernet (GbE) long reach (LR) feature has one port per feature and is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express2 10 GbE LR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express2 10 GbE LR feature supports use of an SC Duplex connector. A conversion kit may be required if there are fiber optic cables terminated with LC Duplex connectors. Ensure the attaching or downstream device has a long reach (LR) transceiver. The sending and receiving transceivers must be the same (LR to LR which may also be referred to as LW or LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an SC Duplex connector is required for connecting this feature to the selected device.

Note: When OSA-Express3 10 Gigabit Ethernet LR (#3370) becomes available, OSA-Express2 10 GbE LR can no longer be ordered.

(#3370) OSA-Express3 10 Gigabit Ethernet LR

(No Longer Available as of June 30, 2012)

The OSA-Express3 10 Gigabit Ethernet (GbE) long reach (LR) feature has two ports per feature and is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE LR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features. The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express3 10 GbE LR feature supports use of an industry standard small form factor LC Duplex connector. A conversion kit may be required if there are fiber optic cables terminated with SC Duplex connectors. Ensure the attaching or downstream device has a long reach (LR) transceiver. The sending and receiving transceivers must be the same (LR to LR which may also be referred to as LW or LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3371) OSA-Express3 10 Gigabit Ethernet SR

(No Longer Available as of June 30, 2012)

The OSA-Express3 10 Gigabit Ethernet (GbE) Short Reach (SR) feature has two ports. Each port resides on a PCI-E adapter and has its own channel path identifier (CHPID). There are two PCI-E adapters per feature. OSA-Express3 10 GbE SR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE SR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express3 10 GbE SR feature supports use of an industry standard small form factor LC Duplex connector. A conversion kit may be required if there are fiber optic cables terminated with SC Duplex connectors. Ensure the attaching or downstream device has a Short Reach (SR) transceiver. The sending and receiving transceivers must be the same (SR-to-SR).
  • Customer Setup: No.
  • Limitations: OSA-Express3 10 GbE SR supports CHPID type OSD exclusively; no other CHPID type is supported.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
    • Unrepeated distance:
      • With 50 micron fiber at 2000 MHz-km: 300 meters (984 feet)
      • With 50 micron fiber at 500 MHz-km: 82 meters (269 feet)
      • With 62.5 micron fiber at 200 MHz-km: 33 meters (108 feet)
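The OSA feature descriptions above share one ceiling: the combined quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 per server, while ports per feature vary by feature code. A configuration-planning check can be sketched as follows; this is an illustrative sketch (the function and dictionary are our own), with ports-per-feature figures taken from the descriptions above:

```python
# Ports per feature, per the feature descriptions above (feature code -> ports).
PORTS_PER_FEATURE = {
    3362: 4,  # OSA-Express3 GbE LX
    3363: 4,  # OSA-Express3 GbE SX
    3364: 2,  # OSA-Express2 GbE LX
    3365: 2,  # OSA-Express2 GbE SX
    3366: 2,  # OSA-Express2 1000BASE-T Ethernet
    3367: 4,  # OSA-Express3 1000BASE-T Ethernet
    3368: 1,  # OSA-Express2 10 GbE LR
    3370: 2,  # OSA-Express3 10 GbE LR
    3371: 2,  # OSA-Express3 10 GbE SR
}
MAX_OSA_FEATURES = 24  # combined OSA-Express2/OSA-Express3 limit per server

def osa_plan_ok(plan):
    """plan maps feature code -> quantity ordered.
    Returns (within_feature_limit, total_ports)."""
    total_features = sum(plan.values())
    total_ports = sum(PORTS_PER_FEATURE[fc] * qty for fc, qty in plan.items())
    return total_features <= MAX_OSA_FEATURES, total_ports
```

For example, 24 OSA-Express3 GbE LX features reach the stated maximum of 96 ports.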

Internal Coupling Channel (IC)

This description is for information purposes only. ICs are not identified as a feature. The Internal Coupling channel (IC) is for internal communication between Coupling Facilities defined in Logical Partitions (LPARs) and z/OS images on the same server. ICs do have a Channel Path Identifier (CHPID), which is type ICP, and are assigned using IOCP or HCD. There is no physical hardware. Care should be taken to ensure that the planned combination of ICs and external Coupling Links (ICB-4s, active ISC-3s, and IFBs) does not exceed 64 CHPIDs per server. ICs (CHPID type ICP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 32 ICs.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: Compatibility mode is not supported.
  • Customer Setup: ICs must be defined in the IOCDS using either IOCP or HCD.
  • Limitations: The maximum number of Coupling Link CHPIDs combined (ICs, ICB-4s, active ISC-3 links, and IFBs) cannot exceed 64 per server.
  • Field Installable: Yes.
  • Cable Order: None. There are no external cables.
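The two ceilings stated above, at most 32 ICs and at most 64 coupling CHPIDs combined per server, can be checked together when planning a configuration. A minimal sketch (the function name and parameters are our own):

```python
# Per-server coupling limits stated above: at most 32 Internal Coupling
# channels (ICs), and at most 64 coupling CHPIDs combined across ICs,
# ICB-4s, active ISC-3 links, and IFBs.
MAX_IC_CHANNELS = 32
MAX_COUPLING_CHPIDS = 64

def coupling_plan_ok(ics, icb4s, isc3s, ifbs):
    """Check a planned coupling configuration against both ceilings."""
    if ics > MAX_IC_CHANNELS:
        return False
    return (ics + icb4s + isc3s + ifbs) <= MAX_COUPLING_CHPIDS
```

For example, 32 ICs plus 16 ICB-4s plus 16 active ISC-3 links exactly fills the 64-CHPID ceiling; adding one more link of any type exceeds it.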
(#3393) ICB-4 link

(No Longer Available as of June 30, 2012)

The Integrated Cluster Bus-4 (ICB-4) link is a member of the family of Coupling Link options. ICB-4 operates at 2 gigabytes per second (GBps). ICB-4 is used by coupled servers to pass information back and forth over high speed links in a Parallel Sysplex environment when the distance between servers is no greater than 7 meters (23 feet). Cables are required. ICB-4 is a "native" connection used between z10 EC, z9 EC, z9 BC, z990, and z890 servers. ICB-4s (CHPID type CBP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

An ICB-4 link consists of one link that attaches directly to an STI port on an MBA fanout card in a book, does not require connectivity to a card in the I/O cage, and provides one output port to support ICB-4 to ICB-4 connectivity. One ICB-4 connection is required for each end of the link.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 16 ICB-4 links.
  • Prerequisites: None.
  • Corequisites: An ICB-4 feature is required for each end of the link, whether a z10 EC, z9 EC, z9 BC, z990, or z890.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations:
    • An ICB-4 can only communicate with an ICB-4.
    • The distance between ICB-4s cannot exceed 7 meters (23 feet).
    • The maximum number of Coupling Links combined (ICs, ICB-4s, active ISC-3 links, and InfiniBand) cannot exceed 64 per server.
  • Field Installable: Yes.
  • Cable Order: A cable is required and must be ordered. The connector is unique to the z10 EC. A 10 meter (33 feet) ICB-4 cable (#0229 z10 EC to System z or #0230 z10 EC to z10 EC) is used with the ICB-4 link -- 3 meters (10 feet) is used for internal routing and strain relief and 7 meters (23 feet) is available for server-to-server connection. This cable is unique to ICB-4.
(#3863) CP Assist for Cryptographic Function (CPACF) enablement

(No Longer Available as of June 30, 2013)

CPACF, supporting clear key encryption, is activated using the no-charge enablement feature (#3863). The CP Assist for Cryptographic Function (CPACF) is shared between two Processor Units (PUs). For every Processor Unit defined as a Central Processor (CP) or an Integrated Facility for Linux (IFL), the following functions are available: Advanced Encryption Standard (AES), Data Encryption Standard (DES), Triple Data Encryption Standard (TDES), and Pseudo Random Number Generation (PRNG). Secure Hash Algorithm (SHA-1), SHA-224, SHA-256, SHA-384, and SHA-512 are shipped enabled on all z10 EC servers and do not require the no-charge enablement feature. For new servers shipped from the factory, CPACF enablement (#3863) is loaded prior to shipment. For other than new shipments, the Licensed Internal Code is shipped on an enablement diskette. The function is enabled using the Support Element (SE).

  • Minimum: None.
  • Maximum: One.
  • Prerequisites: None.
  • Corequisites: Crypto Express2 (#0863).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.

ETR feature on z10 EC is standard

The External Time Reference (ETR) feature is now standard and supports attachment to the Sysplex Timer Model 2 (9037-002) at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second.

The Sysplex Timer Model 2 is the centralized time source that sets the Time-Of-Day (TOD) clocks in all attached servers to maintain synchronization. The Sysplex Timer Model 2 provides the stepping signal that helps ensure that all TOD clocks in a multi-server environment increment in unison to permit full read or write data sharing with integrity. The Sysplex Timer Model 2 is a key component of an IBM Parallel Sysplex environment and a GDPS availability solution for On Demand Business.

Time synchronization and time accuracy on z10 EC: If you require time synchronization across multiple servers (for example, in a Parallel Sysplex), time accuracy for one or more System z servers, or the same time across heterogeneous platforms (System z, UNIX, AIX, and so on), you can meet these requirements either by installing a Sysplex Timer Model 2 (9037-002) or by implementing Server Time Protocol (STP).

The z10 EC server requires the External Time Reference (ETR) feature to attach to a Sysplex Timer. The ETR feature is standard on the z10 EC and supports attachment at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second. The distance from the Sysplex Timer to the server can be extended to 100 km using qualified Dense Wavelength Division Multiplexers (DWDMs). However, the maximum repeated distance between Sysplex Timers is limited to 40 km.

The ETR feature utilizes 62.5 micron multimode fiber optic cabling terminated with an MT-RJ connector. The ETR features do not reside in the I/O cage and do not require connectivity to the I/O cage.

  • Compatibility Conflicts: The ETR features have a small form factor optical transceiver that supports an MT-RJ connector only. A multimode fiber optic cable with an ESCON Duplex connector is not supported with this feature.
  • Customer Setup: No.
  • Limitations: The unrepeated distance between an ETR feature and a Sysplex Timer Model 2 is limited to 3 kilometers (1.86 miles). If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 62.5 micron multimode fiber optic cable terminated with an MT-RJ connector is required. Since the Sysplex Timer Model 2 supports use of an ESCON Duplex connector, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.

Note that the ETR feature was withdrawn from marketing. The Server Time Protocol (STP) feature is the follow-on to the Sysplex Timer. It is designed to provide the capability for multiple servers and Coupling Facilities to maintain time synchronization with each other, without requiring a Sysplex Timer(R).

Server Time Protocol is a server-wide facility that is implemented in the Licensed Internal Code (LIC) of z10 EC and presents a single view of time to Processor Resource/Systems Manager (PR/SM). STP uses a message- based protocol in which timekeeping information is passed over externally defined Coupling Links InterSystem Channel-3 (ISC-3) links configured in peer mode, Integrated Cluster Bus-3 (ICB-3) links, Integrated Cluster Bus-4 (ICB-4) links, and Parallel Sysplex InfiniBand (PSIFB) links. These can be the same links that already are being used in a Parallel Sysplex(R) for Coupling Facility (CF) message communication.

STP is designed to support a multisite sysplex configuration up to 100 km (62 miles) using qualified DWDMs.

(#6094) 17-inch small flat-panel display

(No Longer Available as of January 1, 2009)

The business black 17-inch flat-panel display offers the benefits of a flat-panel monitor, including improved use of space and reduced energy consumption compared to CRT monitors.

  • Minimum: None.
  • Maximum: Four.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature conversion become the property of IBM.
  • Cable Order: None.
(#6095) 20-inch large flat-panel display

(No Longer Available as of November 9, 2010)

The business black 20-inch flat-panel display offers the benefits of a flat-panel monitor, including improved use of space and reduced energy consumption compared to CRT monitors.

  • Minimum: None.
  • Maximum: Four.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature conversion become the property of IBM.
  • Cable Order: None.
(#6096) Flat-panel display

(No Longer Available as of June 30, 2012)

The business black flat-panel display offers the benefits of a flat-panel monitor, including improved use of space and reduced energy consumption compared to CRT monitors.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature conversion become the property of IBM.
  • Cable Order: None.
(#6501) Power Sequence Controller (PSC)

(No Longer Available as of June 30, 2012)

The Power Sequence Controller provides the ability to turn control units on and off from the Central Electronic Complex. The PSC feature consists of one PSC24V card, one PSC Y-cable, and two PSC relay boxes that are mounted near the I/O cages within the server. The PSC24V card always plugs into card position 29 (LG29) in the I/O cage and displaces one I/O feature.

  • Minimum number of features: None.
  • Maximum number of features: Three.
  • Prerequisites: One I/O cage per #6501.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
(#6805) Additional CBU Test

(No Longer Available as of June 30, 2013)

An additional test activation that can be purchased with each CBU temporary entitlement record. There can be no more than 15 tests per CBU TER.

  • Minimum: 0.
  • Maximum: 15 per instance of Capacity back up (#6818).
  • Prerequisites: #6818
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6807) Central Processor 4 (CP4)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The CP4 is a Processor Unit purchased and activated to support the z/OS, z/VM, and z/VSE operating systems.

  • Minimum number of features: None.
  • Maximum number of features: 12.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6808) Central Processor 5 (CP5)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The CP5 is a Processor Unit purchased and activated to support the z/OS, z/VM, and z/VSE operating systems.

  • Minimum number of features: None.
  • Maximum number of features: 12.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6809) Central Processor 6 (CP6)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The CP6 is a Processor Unit purchased and activated to support the z/OS, z/VM, and z/VSE operating systems.

  • Minimum number of features: None.
  • Maximum number of features: 12.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6810) Central Processor 7 (CP7)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The CP7 is a Processor Unit purchased and activated to support the z/OS, z/VM, and z/VSE operating systems.

  • Minimum number of features: None.
  • Maximum number of features: 64.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6811) Integrated Facility for Linux (IFL)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The IFL is a Processor Unit that is purchased and activated for exclusive use of Linux on System z.

  • Minimum number of features: None.
  • Maximum number of features: 64.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6812) Internal Coupling Facility (ICF)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The ICF is a Processor Unit purchased and activated for exclusive use by the Coupling Facility Control Code (CFCC).

  • Minimum number of features: None.
  • Maximum number of features: 16.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6813) System Assist Processor (SAP), optional

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The optional SAP is a Processor Unit that is purchased and activated for use as a SAP. This optional SAP is a chargeable feature.

  • Minimum number of features: None.
  • Maximum number of features: Eight (8).
  • Prerequisites: One CP7 (#6810), IFL (#6811), or ICF (#6812).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6814) System z Application Assist Processor (zAAP)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The zAAP is a specialized Processor Unit that provides a Java execution environment for a z/OS environment. zAAPs are designed to operate asynchronously with the CPs to execute Java programming under control of the IBM Java Virtual Machine (JVM).

The IBM JVM processing cycles are designed to be executed on the configured zAAPs with no anticipated modifications to the Java applications. Execution of the JVM processing cycles on a zAAP is a function of the Software Developer's Kit (SDK) 1.4.1 for zSeries or later, z/OS V1.7 or later, and Processor Resource/Systems Manager (PR/SM).

IBM does not impose software charges on zAAP capacity. Additional IBM software charges will apply when additional CP capacity is used.

Customers are encouraged to contact their specific ISVs and USVs directly to determine if their charges will be affected.

  • Minimum number of features: None.
  • Maximum number of features: 32.
  • Prerequisites: For each zAAP installed there must be a corresponding CP permanently purchased and installed.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
(#6815) System z Integrated Information Processor (zIIP)

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The zIIP is a subcapacity Processor Unit purchased and activated to accept eligible work from z/OS. The operating system is designed to manage and direct the work between the general purpose processor (CP) and the zIIP. DB2 UDB for z/OS V8 exploits the zIIP capability for eligible workloads.

The zIIP is designed so that a program can work with z/OS to have eligible portions of its enclave Service Request Block (SRB) work directed to the zIIP. The z/OS operating system, acting on the direction of the program running in SRB mode, controls the distribution of the work between the general purpose processor (CP) and the zIIP. Using a zIIP can help free up capacity on the general purpose processor.

  • Minimum number of features: None.
  • Maximum number of features: 32.
  • Prerequisites: For each zIIP installed there must be a corresponding CP permanently purchased and installed.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
(#6816) Unassigned Integrated Facility for Linux (IFL)

(No Longer Available as of June 30, 2013)

Processor Unit characterization option. An unassigned IFL is a Processor Unit purchased for future use as an IFL (#6811). It is offline and unavailable for use.

  • Minimum number of features: None.
  • Maximum number of features: 63.
  • Prerequisites: One active CP.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6817) One CBU year

(No Longer Available as of June 30, 2013)

Used to set the expiration date of a Capacity back up (CBU) temporary entitlement record.

  • Minimum number of features: One.
  • Maximum number of features: Five (5).
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#6818) Capacity back up (CBU)

(No Longer Available as of June 30, 2013)

This feature code corresponds to the number of different CBU Temporary Entitlement Records (TERs) ordered. Each CBU TER contains configuration data corresponding to the number of years, number of tests, and various engine types.

  • Minimum number of features: None.
  • Maximum number of features: Eight (8) per ordering session.
  • Prerequisites: None.
  • Corequisites: CBU Authorization (#9910).
  • Compatibility Conflicts: None known.
  • Customer Setup: The CBU TER must be installed via the HMC Configuration Manager before it can be activated.
  • Limitations: None.
  • Field Installable: Yes.
(#6819) Five (5) additional CBU tests

(No Longer Available as of October 27, 2009)

Additional test activations that can be purchased with each CBU temporary entitlement record. There is a default of five tests per CBU TER and there can be no more than 15 tests per CBU TER.

  • Minimum: None.
  • Maximum: Three (3) per instance of Capacity back up (#6818).
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6820) Single CBU CP-year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the remainder when the number of CBU years multiplied by the number of temporary CP capacity features is divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6821) 25 CBU CP-year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the quotient when the number of CBU years multiplied by the number of temporary Central Processor (CP) capacity features is divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
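Features #6820 and #6821 together express the total CBU CP-years as a quotient and remainder with respect to 25. A minimal sketch of that arithmetic, based on the descriptions above (the function name is illustrative, not an IBM tool):

```python
# Decompose total CBU CP-years into #6821 (25 CBU CP-year) and
# #6820 (single CBU CP-year) feature quantities, per the pricing rules above.

def cbu_cp_year_features(cbu_years, temp_cp_features):
    """Return (qty_of_25_year_features, qty_of_single_year_features)."""
    total_cp_years = cbu_years * temp_cp_features
    # divmod gives (quotient, remainder) in one step.
    return divmod(total_cp_years, 25)

# Example: a 3-year CBU TER covering 20 temporary CP capacity features.
# 3 * 20 = 60 CP-years -> two #6821 features and ten #6820 features.
print(cbu_cp_year_features(3, 20))  # (2, 10)
```

The same quotient/remainder decomposition applies to the IFL, ICF, zAAP, zIIP, and SAP pricing feature pairs that follow.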
(#6822) Single CBU IFL Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the remainder when the number of CBU years multiplied by the number of temporary Integrated Facility for Linux (IFL) features is divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6823) 25 CBU IFL Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the quotient when the number of CBU years multiplied by the number of temporary IFL features is divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6824) Single CBU ICF-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the remainder when the number of CBU years multiplied by the number of temporary ICF features is divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6825) 25 CBU ICF-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the quotient when the number of CBU years multiplied by the number of temporary Internal Coupling Facility (ICF) features is divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6826) Single CBU zAAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the remainder when the number of CBU years multiplied by the number of temporary zAAP features is divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6827) 25 CBU zAAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the quotient when the number of CBU years multiplied by the number of temporary System z Application Assist Processor (zAAP) features is divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6828) Single CBU zIIP-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the remainder when the number of CBU years multiplied by the number of temporary System z Integrated Information Processor (zIIP) features is divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6829) 25 CBU zIIP-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the quotient when the number of CBU years multiplied by the number of temporary zIIP features is divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6830) Single CBU SAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the remainder when the number of CBU years multiplied by the number of temporary SAP features is divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6831) 25 CBU SAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature whose quantity equals the quotient when the number of CBU years multiplied by the number of temporary System Assist Processor (SAP) features is divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6832) CBU Replenishment

(No Longer Available as of June 30, 2013)

This feature is used to restore the ability to activate a CBU TER. Each CBU TER comes with a default of one activation. An activation enables the resources required for disaster recovery. After an activation, no subsequent activations or additional testing of this CBU TER can occur until this feature is ordered.

  • Minimum: None.
  • Maximum: One.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6833) Capacity for Planned Event (CPE)

(No Longer Available as of June 30, 2013)

This feature code corresponds to the number of different CPE Temporary Entitlement Records (TERs) ordered. Each CPE TER provides temporary access to the dormant capacity on the server.

  • Minimum: None.
  • Maximum: Eight (8) per ordering session.
  • Prerequisites: None.
  • Corequisites: CPE authorization (#9912).
  • Compatibility Conflicts: None known.
  • Customer Setup: The CPE TER must be installed via the HMC Configuration Manager before it can be activated.
  • Limitations: None.
  • Field Installable: Yes.
(#7960 - #7968) Fiber Quick Connect

(No Longer Available as of June 30, 2012)

The Fiber Quick Connect (FQC) features are optional features for factory installation of the IBM Fiber Transport System (FTS) fiber harnesses for connection to ESCON channels with MT-RJ connectors and FICON LX channels with LC Duplex connectors. FQC, when ordered, supports all of the installed ESCON channel features and all of the FICON LX features in all of the installed I/O cages. FQC cannot be ordered on a partial cage basis. Fiber Quick Connect is for factory installation only and is available on new servers and on initial upgrades to the z10 EC. FQC is not available as an MES to an existing z10 EC.

Each ESCON direct-attach fiber harness connects to six ESCON channels at one end and one coupler in a Multi-Terminated Push-On Connector (MTP) coupler bracket at the opposite end. Each FICON LX direct-attach fiber harness connects to six FICON LX channels at one end and one coupler in an MTP coupler bracket at the opposite end.
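Because each direct-attach harness serves six channels, the number of harnesses the configuration tool selects for a given channel count amounts to a ceiling division. This is an illustrative approximation only, not the actual tool's logic:

```python
import math

CHANNELS_PER_HARNESS = 6  # each FQC direct-attach harness attaches to six channels

def harnesses_needed(channel_count):
    """Minimum number of six-channel harnesses to cover the given channels."""
    return math.ceil(channel_count / CHANNELS_PER_HARNESS)

# Example: 30 ESCON channels need 5 harnesses; 14 FICON LX channels need 3.
print(harnesses_needed(30))  # 5
print(harnesses_needed(14))  # 3
```

This matches the harness-bundle features below, where, for example, a quantity of 5 harnesses supports 30 ESCON channels.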

These descriptions are for information purposes only. They cannot be ordered. The configuration tool selects the appropriate features and quantities based upon the server configuration.

(#7960) FQC 1st bracket + mounting hardware

This feature cannot be ordered. When FQC is ordered, the configuration tool selects the required number of MTP mounting brackets and bracket clamps based upon the 16-port ESCON feature quantity and the 2-port or 4-port FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324), 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7961) FQC additional brackets (2nd-5th)

This feature cannot be ordered. When FQC is ordered, the configuration tool selects the required number of MTP 10-position coupler brackets to support the 16-port ESCON feature quantity and the 2-port or 4-port FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324), 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7962) MT-RJ 6 ft (1.8 meters) multimode harnesses (qty 5)

This feature cannot be ordered. The description is for information purposes only. A harness is 6 feet (1.8 meters) in length. Five harnesses supporting 30 ESCON channels are supplied with this feature. The direct-attach harness supports 62.5 micron multimode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports the 16-port ESCON feature with the optical transceiver supporting the industry-standard small form factor MT-RJ connector.

A fiber harness has six MT-RJ connectors on one end to attach to six ESCON channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the 16-port ESCON feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7963) MT-RJ 8.5 ft (2.6 meters) multimode harnesses (qty 5)

This feature cannot be ordered. The description is for information purposes only. A harness is 8.5 feet (2.6 meters) in length. Five harnesses supporting 30 ESCON channels are supplied with this feature. The direct-attach harness supports 62.5 micron multimode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports the 16-port ESCON feature with the optical transceiver supporting the industry-standard small form factor MT-RJ connector.

A fiber harness has six MT-RJ connectors on one end to attach to six ESCON channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the 16-port ESCON feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7964) MT-RJ 5 ft (1.5 meters) multimode harnesses (qty 5)

This feature cannot be ordered. The description is for information purposes only. A harness is 5 feet (1.5 meters) in length. Five harnesses supporting 30 ESCON channels are supplied with this feature. The direct-attach harness supports 62.5 micron multimode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports the 16-port ESCON feature with the optical transceiver supporting the industry-standard small form factor MT-RJ connector.

A fiber harness has six MT-RJ connectors on one end to attach to six ESCON channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the 16-port ESCON feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7965) LC Duplex 6 ft (1.8 meters) single mode harnesses (qty 2)

This feature cannot be ordered. The description is for information purposes only. A harness is 6 feet (1.8 meters) in length. Two harnesses supporting 12 FICON LX channels are supplied with this feature. The direct-attach harness supports 9 micron single mode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports 2-port or 4-port FICON LX features with optical transceivers supporting the industry-standard small form factor LC Duplex connector.

A fiber harness has six LC Duplex connectors on one end to attach to six FICON LX channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7966) LC Duplex 8.5 ft (2.6 meters) single mode harnesses (qty 2)

This feature cannot be ordered. The description is for information purposes only. A harness is 8.5 feet (2.6 meters) in length. Two harnesses supporting 12 FICON LX channels are supplied with this feature. The direct-attach harness supports 9 micron single mode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports 2-port or 4-port FICON LX features with optical transceivers supporting the industry-standard small form factor LC Duplex connector.

A fiber harness has six LC Duplex connectors on one end to attach to six FICON LX channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7967) LC Duplex 5 ft (1.5 meters) single mode harnesses (qty 2)

This feature cannot be ordered. The description is for information purposes only. A harness is 5 feet (1.5 meters) in length. Two harnesses, together supporting 12 FICON LX channels, are supplied with this feature. The direct-attach harness supports 9 micron single mode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports 2-port or 4-port FICON LX features with optical transceivers supporting the industry-standard small form factor LC Duplex connector.

A fiber harness has six LC Duplex connectors on one end to attach to six FICON LX channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7968) LC Duplex 8.5 ft (2.6 meters) single mode harnesses (qty 1)

This feature cannot be ordered. The description is for information purposes only. The harness is 8.5 feet (2.6 meters) in length. One harness supporting 6 FICON LX channels is supplied with this feature. The direct-attach harness supports 9 micron single mode fiber optic trunk cables. A fiber harness is for use in an I/O cage and supports 2-port or 4-port FICON LX features with optical transceivers supporting the industry-standard small form factor LC Duplex connector.

A fiber harness has six LC Duplex connectors on one end to attach to six FICON LX channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#9896) On/Off CoD Authorization

(No Longer Available as of June 30, 2013)

This feature enables the ordering of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: One.
  • Maximum number of features: One.
  • Prerequisites: On Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9898) Permanent Upgrade authorization

(No Longer Available as of June 30, 2013)

This feature enables the ordering of Licensed Internal Code Configuration Control (LICCC)-enabled permanent capacity upgrades through Resource Link.

  • Minimum number of features: One.
  • Maximum number of features: One.
  • Prerequisites: On Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9900) On Line Capacity on Demand (CoD) Buying

(No Longer Available as of June 30, 2013)

This feature enables the purchase of either permanent or temporary capacity upgrades through Resource Link.

  • Minimum number of features: One.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9910) CBU Authorization

(No Longer Available as of June 30, 2013)

This feature enables the purchase of Capacity back up (CBU). It is generated when Capacity back up (#6818) is ordered, or it can be ordered by itself. Together with On Line Capacity on Demand (#9900), it is required to order CBU through Resource Link.

  • Minimum number of features: One.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: Capacity back up (#6818).
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#9912) CPE Authorization

(No Longer Available as of June 30, 2013)

This feature enables the purchase of Capacity for Planned Event (CPE). It is generated when Capacity for Planned Event (#6833) is ordered. Together with On Line Capacity on Demand (#9900), it is required to order CPE through Resource Link.

  • Minimum number of features: One.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: Capacity for Planned Event (#6833).
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9917) 1 MSU-day

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary Central Processor (CP) resource tokens, measured in Million Service Units (MSUs), purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9918) 100 MSU-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary CP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 MSU-day (#9917).
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9919) 10,000 MSU-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary CP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 MSU-day (#9917), 100 MSU-days (#9918)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
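
The three features above are billing denominations of 1, 100, and 10,000 MSU-days. As a hypothetical sketch only, the split of a total token purchase into feature quantities could be computed greedily, largest denomination first; the actual Resource Link billing logic is not described in this document, and the function name is an invention for illustration.

```python
# Billing denominations for On/Off CoD CP resource tokens, per the feature
# descriptions: #9919 = 10,000 MSU-days, #9918 = 100 MSU-days, #9917 = 1 MSU-day.
DENOMINATIONS = {"#9919": 10_000, "#9918": 100, "#9917": 1}

def decompose_msu_days(total: int) -> dict:
    """Greedy split of a total MSU-day purchase into feature quantities."""
    quantities = {}
    for feature, size in DENOMINATIONS.items():
        quantities[feature], total = divmod(total, size)
    return quantities

# Example: 23,456 MSU-days -> 2 x #9919, 34 x #9918, 56 x #9917.
print(decompose_msu_days(23_456))
```

Note that any real ordering would also have to respect the per-feature maximum quantities listed above (250 for #9919, 99 for #9918 and #9917).
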
(#9920) 1 IFL-day

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary Integrated Facility for Linux (IFL) resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9921) 100 IFL-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary IFL resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 IFL-day (#9920)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9922) 1 ICF-day

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary Internal Coupling Facility (ICF) resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9923) 100 ICF-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary ICF resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 ICF-day (#9922).
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9924) 1 zIIP-day

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary System z Integrated Information Processor (zIIP) resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9925) 100 zIIP-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary zIIP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 zIIP-day (#9924)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9926) 1 zAAP-day

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary System z Application Assist Processor (zAAP) resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9927) 100 zAAP-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary zAAP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 zAAP-day (#9926)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9928) 1 SAP-day

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary System Assist Processor (SAP) resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9929) 100 SAP-days

(No Longer Available as of June 30, 2013)

This pricing feature facilitates the billing of On/Off Capacity on Demand (On/Off CoD) temporary SAP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 SAP-day (#9928)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9975) Height reduction for shipping 2097 (z10 EC)

(No Longer Available as of June 30, 2012)

This feature is required when the shipping height of the z10 EC must be reduced. It should be selected only when deemed absolutely essential for delivery clearance purposes, because it lengthens installation time and increases the risk of cabling errors during the install activity.

This optional feature should be ordered if you have doorways with openings less than 1941 mm (76.4 inches) high. This feature accommodates doorway openings as low as 1832 mm (72.1 inches). Top hat and side covers are shipped separately. If Internal Battery features (#3210) are a part of the order, they will be shipped separately. Instructions are included for the reassembly on site by IBM personnel.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
(#9976) Frame height reduction for return of 2084/2094 (z990/z9 EC)

(No Longer Available as of June 30, 2012)

The frame height reduction feature for the 2084/2094 provides the tools and instructions to reduce the height of a 2084/2094 when it is returned to IBM on an upgrade from a z990 or a z9 EC to a z10 EC.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.

Feature exchanges

None.
 

Accessories

None.

Customer replacement parts

None.
 

Machine elements

Not available.
 

Supplies

None.

Supplemental media

None.

Trademarks

(R), (TM), * Trademark or registered trademark of International Business Machines Corporation.

** Company, product, or service name may be a trademark or service mark of others.

UNIX is a registered trademark in the United States and other countries licensed exclusively through X/Open Company Limited.
 © IBM Corporation 2017.