Family 2098+01 IBM System z10 Business Class

IBM United States Sales Manual
Revised:  July 11, 2017.

Table of contents

  • Product life cycle dates
  • Abstract
  • Highlights
  • Description
  • Product positioning
  • Models
  • Technical description
  • Publications
  • Features
  • Accessories
  • Machine elements
  • Supplies

Product life cycle dates
Type Model   Announced    Available    Marketing Withdrawn   Service Discontinued
2098-E10     2008/10/21   2008/10/28   2012/06/30            -

Abstract

The 2098 IBM System z10 Business Class (z10 BC) delivers innovative technologies for small and medium enterprises that give you a whole new world of capabilities to run modern applications. Ideally suited as the cornerstone of your new enterprise data center, this competitively priced server delivers unparalleled qualities of service to help manage growth and reduce cost and risk in your business.

Model abstract 2098-E10

The IBM 2098 z10 BC Model E10 is designed to provide up to 1.5 times the total system capacity for general purpose processing, nearly two times the available memory, and over 40% more configurable processors than the z9 BC Model S07. In June 2009, memory options up to 248 GB will be available, providing nearly four times the available memory of the z9 BC Model S07.

Highlights

The IBM System z10 BC is a world-class enterprise server built on the inherent strengths of the IBM System z platform. It is designed to deliver new technologies and virtualization that provide improvements in price/performance for key new workloads. The System z10 BC further extends System z leadership in key capabilities with the delivery of granular growth options, business-class consolidation, improved security and availability to reduce risk, and just-in-time capacity deployment helping to respond to changing business requirements. Whether you want to deploy new applications quickly, grow your business without growing IT costs or consolidate your infrastructure for reduced complexity, look no further - z Can Do IT. The System z10 BC delivers:

  • The IBM z10 Enterprise Quad Core processor chip running at 3.5 GHz, designed to help improve CPU intensive workloads.
  • A single model E10 offering increased granularity and scalability with 130 available capacity settings.
  • Up to a 5-way general purpose processor and up to 5 additional Specialty Engine processors or up to a 10-way IFL or ICF server for increased levels of performance and scalability to help enable new business growth.
  • Integrated encryption designed to provide high-speed cryptography for protecting data in storage. CP Assist for Cryptographic Function (CPACF) offers more protection and security options with Advanced Encryption Standard (AES) 192 and 256 and stronger hash algorithms with Secure Hash Algorithm SHA-384 and SHA-512. Support for longer Personal Account Numbers provides stronger data protection on Crypto Express2. A Trusted Key Entry Licensed Internal Code 5.3 enhancement supports the AES encryption algorithm, audit logging, and an infrastructure for the Payment Card Industry Data Security Standard (PCI DSS).
  • Integrated Hardware Decimal Floating Point unit on each core on the Processor Unit (PU), which can aid in decimal floating point calculations and is designed to deliver performance improvements and precision in execution.
  • Up to 120 GB of available real memory per server for growing application needs. Also included is a new 8 GB fixed Hardware System Area (HSA), which is managed separately from customer memory. This fixed HSA is designed to improve availability by avoiding outages.
  • Plan-ahead memory that allows for nondisruptive memory increases.
  • Just-in-time deployment of capacity resources, which can improve flexibility when making temporary or permanent changes. Activation can be further simplified and automated using z/OS Capacity Provisioning (available on z/OS V1.9 with PTF and on z/OS V1.10). Additionally, flexibility is increased by allowing more temporary offerings to be installed on the CPC and by providing more ways to acquire capacity backup.
  • Temporary capacity offering Capacity for Planned Event (CPE), a variation of Capacity Back Up (CBU). CPE can be used when capacity is unallocated, but available, and is needed for a short-term event.
  • Production workload may now be executed on a CBU Upgrade during a CBU Test provided that certain contract terms are in effect with IBM.
  • InfiniBand host bus bandwidth at 6 GBps designed to deliver improved performance.
  • The InfiniBand Coupling Links with a link data rate of 6 GBps, designed to provide a high-speed solution and increased distance (150 meters) compared to ICB-4 (10 meters).
  • Long reach 1x InfiniBand coupling links - an alternative to ISC-3 facilitating coupling link consolidation
  • Coupling Facility Control Code Level 16 - to help deliver faster service time for CF Duplexing, and improvements to the efficiency of workload distribution when using shared queues in the Coupling Facility.
  • Time accuracy, availability and system management improvements with new STP enhancements.
  • Improved access to data with High Performance FICON for System z (zHPF) on both FICON Express4 and FICON Express2. Additionally, enhanced problem determination, analysis, and manageability of the storage area network (SAN) by providing registration information to the fabric on the name server for both FICON and FCP.
  • FCP - increased performance for small block sizes
  • SCSI Initial Program Load (IPL) - now a base function
  • Platform and name server registration in FICON channel.
  • Extended-distance FICON - helps avoid degradation of performance at extended distances.
  • Increased performance for Local Area Network connectivity with new OSA-Express3 I/O features providing double the port density, increased throughput, and reduced latency: OSA-Express3 10 GbE Long Reach (LR) and Short Reach (SR), OSA-Express3 GbE 4-port LX and SX, OSA-Express3-2P GbE SX, OSA-Express3 1000BASE-T 4-port, and OSA-Express3-2P 1000BASE-T.
  • HiperSockets improvements with Multiple Write Facility for increased performance and Layer 2 support to host IP and non-IP workloads.
  • Support for IBM Systems Director Active Energy Manager (AEM) for Linux on System z for a single view of actual energy usage across multiple heterogeneous IBM platforms within the infrastructure. AEM V3.1 is a key component of IBM's Cool Blue portfolio within Project Big Green.

EAL5 certification for System z10 Business Class server: The IBM System z10 Business Class (z10 BC) servers joined previous IBM mainframes as the world's only servers with the highest level of hardware security certification - Common Criteria Evaluation Assurance Level 5 (EAL5), for its logical partitions (LPARs).

The EAL5 ranking gives you confidence that you can host many disparate applications running on different operating systems - z/OS, z/VM, z/VSE, z/TPF, and Linux on System z. Even when the applications contain confidential data, such as payroll, human resources, e-commerce, ERP, and CRM systems, you can be assured that a z10 BC server divided into logical partitions keeps each application's data secure and distinct from the others.

The z10 BC server architecture is designed to prevent the flow of information among logical partitions on a single system. All businesses that currently trust their critical business transactions to the IBM mainframe, as well as government agencies that deal with national security issues, can benefit from the privacy certification received by the z10 BC servers. The z10 BC servers received EAL5 certification on October 29, 2009.

Description

The z10 BC further extends the leadership of System z by delivering expanded granularity and optimized scalability for growth, enriched virtualization technology for consolidation of distributed workloads, improved availability and security to help increase business resiliency, and just-in-time management of resources. The z10 BC is at the core of the enhanced System z platform and is the new face of System z.

For those customers with distributed servers trying to reduce complexity of operations and operating costs, the z10 BC facilitates consolidation of dozens to hundreds of individual distributed servers into virtual images on one z10 BC server. z10 BC delivers improvements in capacity, memory, I/O infrastructure, and virtualization technology that you need in one small footprint.

In the area of server availability, enhancements have been engineered into the z10 BC to help eliminate unwanted down time. For example, preplanning requirements are minimized by delivering a fixed, reserved Hardware System Area (HSA) that enables dynamic creation of logical partitions, including logical channel subsystems, subchannel sets, and devices, using dynamic I/O without preplanning. Additionally, new capabilities are intended to allow you to dynamically change logical processor definitions and cryptographic co-processor definitions for a logical partition without requiring the logical partition to be deactivated and re-activated.

Further improvement to availability and flexibility is achieved with just-in-time deployment of capacity resources designed to dynamically change capacity when business requirements change. You are no longer limited by one offering configuration; instead one or more flexible configurations can be defined that can be used to solve multiple temporary situations. You can choose from multiple configurations and the configurations themselves are flexible so you can activate only what is needed from your defined configuration. Another significant change is the ability to add permanent capacity to the server when you are in a temporary state. These z10 BC enhancements are designed to allow you to take advantage of the technology helping to provide on-demand capacity more effectively. There are new terms governing System z Capacity Back Up (CBU) now available which allow customers to execute production workload on a CBU Upgrade during a CBU Test.

IBM continues the long history of providing integrated technologies to optimize a variety of workloads. Specialty engines have been available to help users expand the use of the mainframe for new workloads while helping to lower the cost of ownership. The z10 BC processor unit now delivers an integrated Hardware Decimal Floating Point unit to accelerate decimal floating point transactions. This function is designed to markedly improve performance for decimal floating point operations which offer increased precision compared to binary floating point operations. This is expected to be particularly useful for the calculations involved in many financial transactions.

Additionally, integrated clear-key encryption security features on z10 BC include support for a higher advanced encryption standard and more secure hashing algorithms. Performing these functions in hardware is designed to contribute to improved performance in a security-rich environment.

High speed connectivity and high bandwidth out to the data and the network are critical in achieving high levels of transaction throughput and enabling resources inside and outside the server to meet application requirements. The z10 BC has a new host bus interface with a link data rate of 6 GBps using the industry standard InfiniBand protocol to help satisfy requirements for coupling (ICF and server-to-server connectivity), cryptography (Crypto Express2 with secure coprocessors and SSL transactions), I/O (ESCON, FICON or FCP), and LAN (new OSA-Express3 Gigabit, 10 Gigabit, and 1000BASE-T Ethernet features). New High Performance FICON for System z (zHPF) also brings new levels of performance when accessing data on zHPF-enabled storage devices such as the IBM System Storage DS8000.

IBM Global Financing can provide attractive low rate financing for all new and upgraded z10 BC products, storage, software, and services. For more information, contact your local Global Financing sales representative or visit the website:

http://www.ibm.com/financing

IBM Global Financing is available worldwide for eligible customers acquiring products and services from IBM and IBM Business Partners.

The IBM System z10 Business Class - A total systems approach to deliver leadership in enterprise computing: With a total systems approach designed to deploy innovative technologies, IBM System z introduces the z10 BC, supporting z/Architecture, and offering the highest levels of reliability, availability, scalability, clustering, and virtualization. The z10 BC just-in-time deployment of capacity allows improved flexibility and administration, and the ability to enable changes as they happen. The expanded scalability on the z10 BC facilitates growth and large-scale consolidation. The z10 BC is designed to provide:

  • Uniprocessor performance up to 1.4 times the uniprocessor performance of the z9 BC S07 Z01 (based on LSPR mixed workload average).
  • Up to 1.5 times the total system capacity for general purpose processing of the z9 BC
  • Up to 12 Processor Units (PUs) including SAPs, as compared to a maximum of 8 on the z9 BC (including SAPs)
  • Up to 1.9 times as much total server available memory as a z9 BC - up to 120 gigabytes of total memory
  • Up to 3.8 times as much total server available memory as a z9 BC by June 30, 2009 - up to 248 gigabytes of total memory
  • Up to 78% more subcapacity choices as compared to z9 BC
  • Increased host bus bandwidth using InfiniBand at 6 GBps
  • Hardware support for HiperDispatch
  • Hardware Decimal Floating Point unit for improved numeric processing performance
  • Large page support (1 megabyte pages)
  • Up to 128 FICON channels
  • High Performance FICON for System z (zHPF) provides improvement in performance and RAS on both FICON Express4 and FICON Express2 features
  • Platform and name server registration in FICON channel
  • Extended-distance FICON - helps avoid degradation of performance at extended distances
  • FCP - increased performance for small block sizes
  • SCSI Initial Program Load (IPL) - now a base function
  • Performance improvements with HiperSockets Multiple Write Facility
  • 12x DDR Coupling with InfiniBand for improved distance compared to ICB-4 links and potential cost saving by ISC-3 link consolidation
  • 1x DDR Coupling over InfiniBand links supporting 10 km unrepeated distance
  • STP time accuracy, availability and system management improvements
  • Improved Advanced Encryption Standard (AES) 192 and 256 and stronger hash algorithms with Secure Hash Algorithm (SHA) 384 and 512
  • Reduction in the availability impact of preplanning requirements
    • Fixed Hardware System Area (HSA) designed so the maximum configuration capabilities can be exploited
    • Designed to reduce the number of planned Power-on-Resets
    • Designed to allow dynamic add/remove of a new logical partition (LPAR) to new or existing logical channel subsystem (LCSS)
  • Open Systems Adapter-Express3 (OSA-Express3) 10 Gigabit Ethernet with double the port density and improved performance
  • Energy efficiency displays on System Activity Display (SAD) screens
  • Just-in-time deployment of capacity for faster activation without dependency or referral to IBM
  • Store System Information (STSI) change to support billing methodologies
  • Temporary offering Capacity for Planned Event (CPE) available to manage system migrations, data center moves, maintenance activities, and similar situations
  • Improved performance management with Capacity Provisioning
  • Plan-ahead memory that allows for nondisruptive memory increases
  • Support for the IBM Systems Director Active Energy Manager (AEM) for Linux on System z

The performance advantage

IBM's Large Systems Performance Reference (LSPR) method is designed to provide comprehensive z/Architecture processor capacity ratios for different configurations of Central Processors (CPs) across a wide variety of system control programs and workload environments. For the z10 BC, the z/Architecture processor subcapacity indicator is defined with an (A0x-Z0x) notation, where x is the number of installed CPs, from one to five. There are a total of 26 subcapacity levels, designated by the letters A through Z.
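
The 26 subcapacity levels and five CP counts described above multiply out to the 130 available capacity settings cited in the Highlights. A minimal sketch of enumerating that (A0x-Z0x) code space, with code names following the notation in the text:

```python
# Enumerate the z10 BC subcapacity settings: letters A-Z give the 26
# capacity levels, and the final digit gives the number of CPs (1-5).
# Illustrative sketch based on the (A0x-Z0x) notation described above.
import string

settings = [f"{level}0{cps}"
            for level in string.ascii_uppercase   # 26 subcapacity levels
            for cps in range(1, 6)]               # 1 to 5 installed CPs

print(len(settings))               # 130 capacity settings in total
print(settings[0], settings[-1])   # A01 Z05
```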

In addition to the general information provided for z/OS V1.9, the LSPR also contains performance relationships for z/VM and Linux operating environments.

Based on using an LSPR mixed workload, the performance of the z10 BC (2098) Z01 is expected to be

  • up to 1.4 times that of the z9 BC (2096) S07 Z01.

Moving from a System z9 partition to an equivalently sized System z10 BC partition, a z/VM workload will experience an ITR ratio that is somewhat related to the workload's instruction mix, MP factor, and level of storage overcommitment. Workloads with higher levels of storage overcommitment or higher MP factors are likely to experience lower than average z10 BC to z9 ITR scaling ratios. The range of likely ITR ratios is wider than it has been for previous processor migrations.

The LSPR contains the Internal Throughput Rate Ratios (ITRRs) for the new z10 BC and the previous-generation zSeries processor families based upon measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput that any user may experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, and the workload processed. Therefore no assurance can be given that an individual user will achieve throughput improvements equivalent to the performance ratios stated. For more detailed performance information, consult the Large Systems Performance Reference (LSPR) available at:

http://www.ibm.com/servers/eserver/zseries/lspr/

CPU Measurement Facility architecture: The CPU Measurement Facility is a hardware facility which consists of counters and samples. The facility provides a means to collect run-time data for software performance tuning. The detailed architecture information for this facility can be found in the System z10 Library in Resource Link.

Networking

Response time improvements with OSA-Express3 optimized latency mode

Optimized Latency Mode (OLM) can help improve performance for z/OS workloads with demanding low latency requirements. This includes interactive workloads such as SAP using DB2 Connect. OLM can help improve performance for applications that have a critical requirement to minimize response times for inbound and outbound data when servicing remote clients.

This enhancement applies exclusively to OSA-Express3 QDIO mode (CHPID type OSD).

For prerequisites, refer to the Software requirements section.

HiperSockets network traffic analyzer (HS NTA):

Problem isolation and resolution can now be made simpler by an enhancement to the HiperSockets architecture. This function is designed to allow tracing of Layer 2 and Layer 3 HiperSockets network traffic.

HS NTA allows Linux on System z to control the trace for the internal virtual LAN, capturing the records into host memory and storage (file systems), and to use Linux on System z tools to format, edit, and process the trace records for analysis by system programmers and network administrators.

Configuration flexibility with four-port exploitation for OSA-ICC

Integrated Console Controllers (ICC) allow the System z10 to help reduce cost and complexity by eliminating the requirement for external console controllers.

You can now exploit the four ports on an OSA-Express3 1000BASE-T Ethernet feature (#3367) on the z10 EC and z10 BC, or the two ports on an OSA-Express3-2P 1000BASE-T on a z10 BC (#3369), when defining the feature as an Integrated Console Controller (OSA-ICC) for TN3270E, local non-SNA DFT, 3270 emulation, and 328x printer emulation. There are two PCI-E adapters per feature and two channel path identifiers (CHPIDs) to be assigned. Each PCI-E adapter has two ports, but previously only one of the two PCI-E adapter ports was available for use when defined as CHPID type OSC. Removal of this restriction can improve configuration flexibility by allowing two local LAN segments to be connected to each CHPID.

OSA-ICC continues to support 120 sessions per CHPID.

Four port exploitation for OSA-Express3 1000BASE-T (feature number 3367) and two port exploitation for OSA-Express3-2P 1000BASE-T (feature number 3369) for OSA-ICC will be available in the first quarter of 2010.

For prerequisites, refer to the Software requirements section.

Hardware decimal floating point

Focused performance boost - hardware decimal floating point: Recognizing that speed and precision in numerical computing are essential, with the introduction of z10 BC each core on the PU has its own hardware decimal floating point unit, which is designed to improve performance of decimal floating point over that provided by System z9.

Decimal calculations are often used in financial applications and those done using other floating point facilities have typically been performed by software through the use of libraries. With a hardware decimal floating point unit, some of these calculations may be done directly and accelerated.

Software support for hardware decimal floating point on z10 BC is provided in several programming languages. Support is provided in Assembler Language in Release 4 or 5 of High Level Assembler. Decimal floating point data and instructions are also supported in Enterprise PL/I V3.7 and resulting programs can be debugged by Debug Tool V8.1. Java applications, which make use of the BigDecimal Class Library, will automatically begin using the hardware decimal floating point instructions when running on a z10 BC. Support for decimal floating point data types is also provided in SQL as provided in DB2 Version 9. Refer to the Software requirements section.
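
As background on why a hardware decimal unit matters, here is a minimal sketch of the precision difference between binary and decimal floating point that the text refers to. It uses Python's decimal module, which performs decimal arithmetic in software, purely for illustration; the z10 BC executes such arithmetic in hardware:

```python
# Binary floating point cannot represent 0.1 exactly, so sums of cents
# drift; decimal arithmetic keeps exact decimal values, which is why it
# is preferred for financial calculations.
from decimal import Decimal

binary_sum = 0.1 + 0.2                          # binary floating point
decimal_sum = Decimal("0.1") + Decimal("0.2")   # decimal arithmetic

print(binary_sum)    # 0.30000000000000004
print(decimal_sum)   # 0.3
```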

Large page support for 1 megabyte pages: A change to the z/Architecture on z10 BC is designed to allow memory to be extended to support large (1 megabyte (MB)) pages. Use of large pages can improve CPU utilization for exploiting applications.

Large page support is primarily of benefit for long-running applications that are memory-access-intensive. Large page is not recommended for general use. Short-lived processes with small working sets are normally not good candidates for large pages.

Large page support is exclusive to System z10 running either z/OS or Linux on System z. Refer to the Software requirements section.

Cryptographic support for security-rich transactions

CP Assist for Cryptographic Function (CPACF): CPACF supports clear-key encryption. All CPACF functions can be invoked by problem state instructions defined by an extension of System z architecture. The function is activated using a no-charge enablement feature (#3863) and offers the following on every CPACF that is shared between two Processor Units (PUs) and designated as CPs and/or Integrated Facility for Linux (IFL):

  • Data Encryption Standard (DES)
  • Triple Data Encryption Standard (TDES)
  • Advanced Encryption Standard (AES) for 128-bit keys
  • Secure Hash Algorithm, SHA-1, SHA-224, and SHA-256
  • Pseudo Random Number Generation (PRNG)

Enhancements to CP Assist for Cryptographic Function (CPACF): CPACF has been enhanced to include support of the following on CPs and IFLs:

  • Advanced Encryption Standard (AES) for 192-bit keys and 256-bit keys
  • SHA-384 and SHA-512 for message digest

SHA-1, SHA-256, and SHA-512 are shipped enabled and do not require the enablement feature.
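
For illustration, the SHA algorithms listed above can be exercised in software with Python's hashlib; CPACF executes the same standard algorithms in hardware. The digest lengths confirm the naming (SHA-384 produces a 384-bit digest, and so on):

```python
# Compute each SHA digest named in the text and report its size in bits.
# hashlib is used here only as a software stand-in for the CPACF hardware.
import hashlib

message = b"System z10 BC"
for name in ("sha1", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, message).digest()
    print(name, len(digest) * 8)   # sha1 160, sha256 256, sha384 384, sha512 512
```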

Support for CPACF is also available using the Integrated Cryptographic Service Facility (ICSF). ICSF is a component of z/OS, and is designed to transparently use the available cryptographic functions, whether CPACF or Crypto Express2, to balance the workload and help address the bandwidth requirements of your applications.

The enhancements to CPACF are exclusive to the System z10 and supported by z/OS, z/VM, z/VSE, and Linux on System z. Refer to the Software requirements section.

Configurable Crypto Express2: The Crypto Express2 feature has two PCI-X adapters. Each of the PCI-X adapters can be defined as either a Coprocessor or an Accelerator.

Crypto Express2 Coprocessor - for secure-key encrypted transactions (default) is:

  • Designed to support security-rich cryptographic functions, use of secure-encrypted-key values, and User Defined Extensions (UDX)
  • Designed to support secure and clear-key RSA operations
  • The tamper-responding hardware and lower-level firmware layers are validated to U.S. Government FIPS 140-2 standard: Security Requirements for Cryptographic Modules at Level 4.

Crypto Express2 Accelerator - for Secure Sockets Layer (SSL) acceleration:

  • Is designed to support clear-key RSA operations
  • Offloads compute-intensive RSA public-key and private-key cryptographic operations employed in the SSL protocol
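
The RSA public-key and private-key operations the Accelerator offloads reduce to modular exponentiation. A toy, deliberately insecure sketch with tiny primes (real SSL handshakes use 1024-bit and larger moduli; all numbers here are illustrative):

```python
# Textbook RSA with tiny primes, to show the modular exponentiations
# that dominate SSL handshake cost and that the Accelerator offloads.
p, q = 61, 53
n = p * q                            # modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+ modular inverse)

m = 42                               # "message"
c = pow(m, e, n)                     # public-key operation (encrypt / verify side)
print(pow(c, d, n))                  # private-key operation (decrypt / sign side) -> 42
```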

Crypto Express2 features can be carried forward on an upgrade to the new System z10 BC, so users may continue to take advantage of the SSL performance and the configuration capability.

The configurable Crypto Express2 feature is supported by z/OS, z/VM, z/VSE, and Linux on System z. z/VSE offers support for clear-key operations only. Current versions of z/OS, z/VM, and Linux on System z offer support for both clear-key and secure-key operations.

Refer to the Software requirements section and also the Special features section of the Sales manual on the Web for further information.

http://www.ibm.com/common/ssi/index.wss

Crypto Express2-1P

An option of one PCI-X adapter per feature, in addition to the current two PCI-X adapters per feature, is being offered for the z10 BC to help satisfy small and midrange security requirements while maintaining high performance.

The Crypto Express2-1P feature, with one PCI-X adapter, can continue to be defined as either a Coprocessor or an Accelerator. A minimum of two features must be ordered.

Additional cryptographic functions and features with Crypto Express2 and Crypto Express2-1P:

Key management: Added key management for remote loading of ATM and Point of Sale (POS) keys. The elimination of manual key entry is designed to reduce downtime due to key entry errors, service calls, and key management costs.

Improved key exchange: Added Improved key exchange with non-CCA cryptographic systems.

New features added to the IBM Common Cryptographic Architecture (CCA) are designed to enhance the ability to exchange keys between CCA systems and systems that do not use control vectors, by allowing the CCA system owner to define permitted types of key import and export while preventing uncontrolled key exchange that could open the system to an increased threat of attack.

These are supported by z/OS and by z/VM for guest exploitation. Refer to the Software requirements section.

Support for ISO 16609: Support is provided for ISO 16609 CBC Mode T-DES Message Authentication Code (MAC) requirements. ISO 16609 CBC Mode T-DES MAC is accessible through ICSF function calls made in the PCI-X Cryptographic Adapter segment 3 Common Cryptographic Architecture (CCA) code.

This is supported by z/OS and by z/VM for guest exploitation. Refer to the Software requirements section.

Support for RSA keys up to 4096 bits: The RSA services in the CCA API are extended to support RSA keys with modulus lengths up to 4096 bits. The services affected include key generation, RSA-based key management, digital signatures, and other functions related to these.

Refer to the ICSF Application Programmers Guide, SA22-7522, for additional details.

Cryptographic enhancements to Crypto Express2 and Crypto Express2-1P

Dynamically add crypto to a logical partition: Today, users can preplan the addition of Crypto Express2 features to a logical partition (LP) by using the Crypto page in the image profile to define the Cryptographic Candidate List, Cryptographic Online List, and Usage and Control Domain Indexes in advance of crypto hardware installation.

With the change to dynamically add crypto to a logical partition, changes to image profiles, to support Crypto Express2 features, are available without outage to the logical partition. Users can also dynamically delete or move Crypto Express2 features. Preplanning is no longer required.

This enhancement is supported by z/OS, z/VM for guest exploitation, z/VSE, and Linux on System z. Refer to the Software requirements section.

Secure Key AES: The Advanced Encryption Standard (AES) is a National Institute of Standards and Technology specification for the encryption of electronic data. It is expected to become the accepted means of encrypting digital information, including financial, telecommunications, and government data. AES is the symmetric algorithm of choice, instead of Data Encryption Standard (DES) or Triple-DES, for the encryption and decryption of data. The AES encryption algorithm will be supported with secure (encrypted) keys of 128, 192, and 256 bits.

The secure key approach, similar to what is supported today for DES and TDES, offers the ability to keep the encryption keys protected at all times, including the ability to import and export AES keys, using RSA public key technology.

Support for AES encryption algorithm includes the master key management functions required to load or generate AES master keys, update those keys, and re-encipher key tokens under a new master key.

Secure key AES is exclusive to System z10 and is supported by z/OS and z/VM for guest exploitation. Refer to the Software requirements section.

Support for 13- through 19-digit Personal Account Numbers: Credit card companies sometimes perform card security code computations based on Personal Account Number (PAN) data. Currently, ICSF callable services CSNBCSV (VISA CVV Service Verify) and CSNBCSG (VISA CVV Service Generate) are used to verify and to generate a VISA Card Verification Value (CVV) or a MasterCard Card Verification Code (CVC).

The ICSF callable services currently support 13-, 16-, and 19-digit PAN data. To deliver additional flexibility, new keywords PAN-14, PAN-15, PAN-17, and PAN-18 are implemented in the rule array for both CSNBCSG and CSNBCSV to indicate that the PAN data consists of 14, 15, 17, or 18 PAN digits, respectively.
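
As an illustrative sketch (not the CVV/CVC computation, which is performed by the ICSF services named above), variable-length PAN handling and the standard Luhn check-digit test can be expressed as:

```python
# Hypothetical helpers: map a PAN to the rule-array keyword style used in
# the text, and apply the standard Luhn check-digit test for PANs.
def luhn_ok(pan: str) -> bool:
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def pan_keyword(pan: str) -> str:
    if not (13 <= len(pan) <= 19):
        raise ValueError("PAN must be 13-19 digits")
    return f"PAN-{len(pan)}"    # e.g. PAN-14, mirroring the rule-array keywords

print(pan_keyword("4111111111111111"))  # PAN-16
print(luhn_ok("4111111111111111"))      # True (standard Visa test number)
```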

Support for 13- through 19-digit PANs is exclusive to System z10 and is offered by z/OS and z/VM for guest exploitation. Refer to the Software requirements section.

TKE 5.3 workstation: The Trusted Key Entry (TKE) workstation (#0839) and the TKE 5.3 level of Licensed Internal Code (#0854) are optional features on the System z10 BC. The TKE 5.3 Licensed Internal Code (LIC) is loaded on the TKE workstation prior to shipment. The TKE workstation offers security-rich local and remote key management, providing authorized persons a method of operational and master key entry, identification, exchange, separation, and update. The TKE workstation supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 or 100 Mbps. Up to ten TKE workstations can be ordered.

Enhancement with TKE 5.3 LIC: The TKE 5.3 level of LIC includes support for the AES encryption algorithm, adds 256-bit master keys, and includes the master key management functions required to load or generate AES master keys to cryptographic coprocessors in the host.

Also included is an embedded screen capture utility to permit users to create and transfer TKE master key entry instructions to diskette or DVD. Under 'Service Management' a 'Manage Print Screen Files' utility will be available to all users.

The TKE workstation (#0839) and TKE 5.3 LIC (#0854) are available on the z10 EC, z9 EC, and z9 BC.

Refer also to the Special features section of the Sales Manual on the Web for further information.

http://www.ibm.com/common/ssi/index.wss

Smart Card Reader - new feature: Support for an optional Smart Card Reader (#0855) attached to the TKE 5.3 workstation allows for the use of smart cards that contain an embedded microprocessor and associated memory for data storage. Access to and the use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).

TKE 5.3 LIC has added the capability to store key parts on DVD-RAMs and continues to support the ability to store key parts on paper, or optionally on a smart card. TKE 5.3 LIC has limited the use of floppy diskettes to read-only. The TKE 5.3 LIC can remotely control host cryptographic coprocessors using a password-protected authority signature key pair either in a binary file or on a smart card.

The Smart Card Reader, attached to a TKE workstation with the 5.3 level of LIC, supports System z10 BC, z10 EC, z9 EC, and z9 BC. However, TKE workstations with 5.0, 5.1, or 5.2 LIC must be upgraded to TKE 5.3 LIC.

TKE additional smart cards - new feature: You have the capability to order Java**-based blank smart cards (#0884), which offer a highly efficient cryptographic and data management application built into read-only memory for storage of keys, certificates, passwords, applications, and data. The TKE blank smart cards are compliant with FIPS 140-2 Level 2. When you place an order for a quantity of one, you are shipped 10 smart cards.

System z10 BC cryptographic migration: Clients using a User Defined Extension (UDX) of the Common Cryptographic Architecture should contact their UDX provider for an application upgrade before ordering a new System z10 BC machine, or before planning to migrate or activate a UDX application on firmware driver level 73 or higher.

  • The Crypto Express2 feature is supported on the z9 BC and can be carried forward on an upgrade to the System z10 BC.
  • You may continue to use TKE workstations with 5.3 licensed internal code to control the System z10 BC.
  • TKE 5.0 and 5.1 workstations (#0839 and #0859) may be used to control z9 EC, z9 BC, z890, and z990 servers.

FICON and FCP for connectivity to disk, tape, and printers

High Performance FICON for System z (zHPF) - improvement in performance and RAS

Enhancements have been made to the z/Architecture and the FICON interface architecture to deliver optimizations for online transaction processing (OLTP) workloads. When exploited by the FICON channel, the z/OS operating system, and the control unit, zHPF is designed to help reduce overhead and improve performance.

Additionally, the changes to the architectures offer end-to-end system enhancements to improve reliability, availability, and serviceability (RAS).

zHPF channel programs can be exploited by the OLTP I/O workloads - DB2, VSAM, PDSE, and zFS - which transfer small blocks of fixed size data (4K blocks). zHPF implementation by the DS8000 is exclusively for I/Os that transfer less than a single track of data.

The maximum number of I/Os is designed to be improved by up to 100% for small data transfers that can exploit zHPF. Realistic production workloads with a mix of data transfer sizes can see 30 to 70% of FICON I/Os utilizing zHPF, resulting in up to a 10 to 30% savings in channel utilization. Sequential I/Os transferring less than a single track size (for example, 12x4k bytes/IO) may also benefit.

The FICON Express4 and FICON Express2 features will support both the existing FICON protocol and the zHPF protocol concurrently in the server Licensed Internal Code. High performance FICON is supported by z/OS for DB2, VSAM, PDSE, and zFS applications. Refer to the Software requirements section. zHPF applies to all FICON Express4 and FICON Express2 features (CHPID type FC) and is exclusive to System z10. Exploitation is required by the control unit.

IBM System Storage DS8000 Release 4.1 delivers new capabilities to support High Performance FICON for System z, which can improve FICON I/O throughput on a DS8000 port by up to 100%. The DS8000 series Licensed Machine Code (LMC) level 5.4.1.xx.xx (bundle version 64.1.xx.xx), or later, is required.

Platform and name server registration in FICON channel

The FICON channel now delivers the same information to the fabric as is commonly provided by open systems, registering with the name server in the attached FICON directors. With this information, your storage area network (SAN) can be more easily and efficiently managed, enhancing your ability to perform problem determination and analysis.

Registration allows other nodes and SAN managers to query the name server to determine what is connected to the fabric and which protocols are supported (FICON, FCP), and to gain information about the System z10 using the attributes that are registered (see the list below).

The FICON channel is now designed to perform registration with the Fibre Channel's Management Service and Directory Service.

It will register:

  • Platforms:
    • Worldwide node name (node name for the platform - same for all channels)
    • Platform type (host computer)
    • Platform name (includes vendor ID, product ID, and vendor-specific data from the node descriptor)
  • Channels:
    • Worldwide port name (WWPN)
    • Node port identification (N_PORT ID)
    • FC-4 types supported (always 0x1B and additionally 0x1C if any Channel-to-Channel (CTC) control units are defined on that channel)
    • Classes of service supported by the channel

Platform registration is a service defined in the Fibre Channel - Generic Services 4 (FC-GS-4) standard (INCITS (ANSI) T11 group).

Platform and name server registration applies to all of the FICON Express4, FICON Express2, and FICON Express features (CHPID type FC). This support is exclusive to System z10 and is transparent to operating systems.

Extended-distance FICON - improved performance at extended distance: An enhancement to the industry-standard FICON architecture (FC-SB-3) helps avoid degradation of performance at extended distances by implementing a new protocol for "persistent" Information Unit (IU) pacing. Control units that exploit the enhancement to the architecture can increase the pacing count (the number of IUs allowed to be in flight from channel to control unit). Extended distance FICON also allows the channel to "remember" the last pacing update for use on subsequent operations to help avoid degradation of performance at the start of each new operation.

Improved IU pacing can help to optimize the utilization of the link, for example help keep a 4 Gbps link fully utilized at 50 km, and allows channel extenders to work at any distance, with performance results similar to that experienced when using emulation.
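A first-order model shows why the pacing count matters at distance. The figures below are illustrative assumptions, not values from this document: an 8 KB IU, roughly 10 microseconds of round-trip fiber latency per kilometer, and roughly 400 MBps as the effective payload rate of a 4 Gbps link.

```python
def effective_mbps(pacing_count, iu_bytes, distance_km, link_mbps=400):
    """Window-limited throughput: bytes in flight divided by round-trip
    time, capped at the link rate."""
    rtt_s = distance_km * 10e-6            # ~5 us/km each way in fiber
    window_mbps = pacing_count * iu_bytes / rtt_s / 1e6
    return min(link_mbps, window_mbps)

# With a small pacing window, a 4 Gbps link at 50 km is window-limited;
# a larger persistent pacing count lets the link run at full rate.
print(effective_mbps(16, 8192, 50))    # ~262 MBps
print(effective_mbps(255, 8192, 50))   # 400 MBps
```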

The requirements for channel extension equipment are simplified with the increased number of commands in flight. This can benefit z/OS Global Mirror (Extended Remote Copy - XRC) applications as the channel extension kit is no longer required to simulate specific channel commands. Simplifying the channel extension requirements may help reduce the total cost of ownership of end-to-end solutions.

Extended-distance FICON is transparent to operating systems and applies to all the FICON Express2 and FICON Express4 features carrying native FICON traffic (CHPID type FC). For exploitation, the control unit must support the new IU pacing protocol. The channel will default to current pacing values when operating with control units that cannot exploit extended distance FICON.

Exploitation of extended-distance FICON is supported by IBM System Storage DS8000 series Licensed Machine Code (LMC) level 5.4.1.xx.xx (bundle version 64.1.xx.xx), or later.

Note: To support extended distance without performance degradation, the buffer credits in the FICON director must be set appropriately. The number of buffer credits required is dependent upon the link data rate (1 Gbps, 2 Gbps, or 4 Gbps), the maximum number of buffer credits supported by the FICON director or control unit, as well as application and workload characteristics. High bandwidth at extended distances is achievable only if enough buffer credits exist to support the link data rate.
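The note above can be made concrete with a rule-of-thumb estimate. This is a sketch only: it assumes full-size (~2 KB) Fibre Channel frames, 8b/10b encoding, and roughly 10 microseconds of round-trip latency per kilometer; real directors, smaller frames, and workload mixes change the answer.

```python
import math

FIBER_RTT_US_PER_KM = 10.0     # ~5 us/km each way in glass

def buffer_credits(link_gbps, distance_km, frame_bytes=2112):
    """Credits needed to keep the link full: round-trip time divided by
    the serialization time of one frame (10 bits/byte with 8b/10b)."""
    frame_us = frame_bytes * 10 / (link_gbps * 1000.0)
    return math.ceil(FIBER_RTT_US_PER_KM * distance_km / frame_us)

print(buffer_credits(4, 50))   # ~95 credits to fill a 4 Gbps link at 50 km
```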

FCP - increased performance for small block sizes: The Fibre Channel Protocol (FCP) Licensed Internal Code has been modified to help provide increased I/O operations per second for small block sizes. With FICON Express4, there may be up to 57,000 I/O operations per second (all reads, all writes, or a mix of reads and writes), an 80% increase compared to System z9. These results are achieved in a laboratory environment using one channel configured as CHPID type FCP with no other processing occurring and do not represent actual field measurements. A significant increase in I/O operations per second for small block sizes can also be expected with FICON Express2.

This FCP performance improvement is transparent to operating systems that support FCP, and applies to all the FICON Express4 and FICON Express2 features when configured as CHPID type FCP, communicating with SCSI devices.

SCSI IPL now a base function: The SCSI Initial Program Load (IPL) enablement feature #9904 is no longer required. The function is now delivered as a part of the server Licensed Internal Code. SCSI IPL allows an IPL of an operating system from an FCP-attached SCSI disk.

Getting ready for an 8 Gbps SAN infrastructure with FICON Express8

With the introduction of FICON Express8 on the System z10 EC and System z10 BC family of servers, you now have additional growth opportunities for your storage area network (SAN). FICON Express8 supports a link data rate of 8 gigabits per second (Gbps) and auto-negotiation to 2 or 4 Gbps for synergy with existing switches, directors, and storage devices. With support for native FICON, High Performance FICON for System z (zHPF), and Fibre Channel Protocol (FCP), the System z10 servers enable you to position your SAN for even higher performance - helping you to prepare for an end-to-end 8 Gbps infrastructure to meet the increased bandwidth demands of your applications.

High performance FICON for System z - improving upon the native FICON protocol: The FICON Express8 features support High Performance FICON for System z (zHPF) which was introduced in October 2008 on the System z10 servers. zHPF provides optimizations for online transaction processing (OLTP) workloads. zHPF is an extension to the FICON architecture and is designed to improve the execution of small block I/O requests. zHPF streamlines the FICON architecture and reduces the overhead on the channel processors, control unit ports, switch ports, and links by improving the way channel programs are written and processed. zHPF-capable channels and devices support both native FICON and zHPF protocols simultaneously (CHPID type FC).

High Performance FICON for System z now supports multitrack operations:

zHPF support of multitrack operations can help increase system performance and improve FICON channel efficiency when attached to the IBM System Storage DS8000 series. zFS, HFS, PDSE, and other applications that use large data transfers with Media Manager are expected to benefit.

In laboratory measurements, multitrack operations (for example, reading 16x4k bytes/IO) converted to the zHPF protocol on a FICON Express8 channel achieved a maximum of up to 40% more MB/sec than multitrack operations using the native FICON protocol.

zHPF and support for multitrack operations is exclusive to the System z10 servers and applies to all FICON Express8, FICON Express4, and FICON Express2 features (CHPID type FC). Exploitation is required by z/OS and the control unit. Refer to the Software requirements section.

zHPF with multitrack operations is available in the DS8000 series Licensed Machine Code (LMC) level 5.4.3.xx (bundle version 64.3.xx.xx) or later with the purchase of DS8000 series feature (#7092).

Previously, zHPF was limited to read or write sequential I/Os transferring less than a single track size (for example, 12 4k-byte records, or 12x4k bytes/IO).

FICON Express8 performance improvements for zHPF and native FICON on the System z10 servers: A FICON Express8 channel exploiting the High Performance FICON for System z (zHPF) protocol, when operating at 8 Gbps, is designed to achieve a maximum throughput of up to 800 MBps when processing large sequential read I/O operations and up to 730 MBps when processing large sequential write I/O operations. This represents an 80 to 100% increase in performance compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server. For those large sequential read or write I/O operations that use the native FICON protocol, the FICON Express8 channel, when operating at 8 Gbps, is designed to achieve up to 510 MBps. This represents a 45 to 55% increase compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server.

The FICON Express8 channel, when operating at 8 Gbps, is also designed to achieve a maximum of 52,000 IO/sec for small data transfer I/O operations that can exploit the zHPF protocol. This represents approximately a 70% increase compared to a FICON Express4 channel operating at 4 Gbps and executing zHPF I/O operations on a System z10 server. For those small data transfer I/O operations that use the native FICON protocol, the FICON Express8 channel, when operating at 8 Gbps, is designed to achieve a maximum of 20,000 IO/sec, which represents approximately a 40% increase compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server. The FICON Express8 features support both the native FICON protocol and the zHPF protocol concurrently in the server Licensed Internal Code.

These measurements for FICON (CHPID type FC) using both the native FICON and zHPF protocols are examples of the maximum MB/sec and IO/sec that can be achieved in a laboratory environment using one FICON Express8 channel on a System z10 server with z/OS V1.10 and no other processing occurring and do not represent actual field measurements. Details are available upon request.

FICON Express8 performance at 2 or 4 Gbps link data rate - it may be time to migrate to a FICON Express8 channel

Performance benefits may be realized by migrating to a FICON Express8 channel even while operating at a link data rate of 2 or 4 Gbps. If you migrate now, you may realize performance benefits before your SAN is 8 Gbps-ready.

In laboratory measurements using the zHPF protocol with small data transfer I/O operations, FICON Express8 operating at 2 Gbps achieved a maximum of 47,000 IO/sec, compared to the maximum of 52,000 IO/sec achieved when operating at 4 Gbps or 8 Gbps. This represents approximately a 50% increase compared to a FICON Express4 channel operating at 2 Gbps on a System z10 server.

In laboratory measurements using the native FICON protocol with small data transfer I/O operations, FICON Express8 operating at 2 Gbps or 4 Gbps achieved a maximum of 20,000 IO/sec, which represents approximately a 40% increase compared to a FICON Express4 channel operating at 2 Gbps or 4 Gbps on a System z10 server.

In laboratory measurements using FCP with small data transfer I/O operations, FICON Express8 operating at 4 Gbps achieved a maximum of 84,000 IO/sec, which represents approximately a 40% increase compared to a FICON Express4 channel operating at 4 Gbps on a System z10 server.

FICON Express8 performance improvements for FCP on the System z10 servers:

The FICON Express8 FCP channel, when operating at 8 Gbps, is designed to achieve a maximum throughput of up to 800 MBps when processing large sequential read I/O operations and up to 730 MBps when processing large sequential write I/O operations. This represents an 80 to 100% increase compared to a FICON Express4 FCP channel operating at 4 Gbps on System z10.

The FICON Express8 FCP channel is designed to achieve a maximum of 84,000 IO/sec when processing read or write small data transfer I/O operations. This represents approximately a 40% increase compared to a FICON Express4 FCP channel when operating at 4 Gbps on a System z10 server.

These measurements for FCP (CHPID type FCP supporting attachment to SCSI devices) are examples of the maximum MB/sec and IO/sec that can be achieved in a laboratory environment, using one FICON Express8 channel on a System z10 server with z/VM V5.4 or Linux on System z distribution Novell SUSE SLES 10 with no other processing occurring, and do not represent actual field measurements. Details are available upon request.

FICON Express8 for channel consolidation:

FICON Express8 may also allow for the consolidation of existing FICON Express, FICON Express2, or FICON Express4 channels onto fewer FICON Express8 channels while maintaining and enhancing performance.

To request assistance for ESCON or FICON channel consolidation analysis using the zCP3000 tool, contact your IBM representative. They will assist you with a capacity planning study to estimate the number of FICON channels that can be consolidated onto FICON Express8. They can also assist you with ESCON to FICON channel migration.
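The sizing arithmetic behind such a study can be sketched simply. This is a planning illustration only, not the zCP3000 methodology; the per-channel IO/sec and the utilization target are assumptions you would replace with measured RMF data.

```python
import math

def channels_needed(total_iops, per_channel_iops, target_util=0.5):
    """Channels required to carry total_iops while keeping each channel
    at or below target_util of its measured per-channel capacity."""
    return math.ceil(total_iops / (per_channel_iops * target_util))

# Example: eight existing channels carrying 6,000 zHPF IO/sec each
# (48,000 total) consolidated onto FICON Express8, using the 52,000
# IO/sec laboratory figure cited above and 50% utilization headroom.
print(channels_needed(8 * 6000, 52000))  # 2 channels
```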

Resource Measurement Facility (RMF): RMF has been enhanced to support FICON Express8. RMF is an IBM product designed to simplify management of single and multiple system workloads. RMF gathers data and creates reports that help your system programmers and administrators optimally tune your systems, react quickly to system delays, and diagnose performance problems. RMF may assist you in understanding your capacity requirements. RMF output is used by the zCP3000 tool to assist with your channel consolidation potential.

FICON end-to-end data integrity checking: FICON Express8 continues the unparalleled heritage of data protection with its native FICON, zHPF, and channel-to-channel (CTC) intermediate data checking and end-to-end data integrity checking for all devices (such as disk and tape), which is transparent to operating systems, middleware, and applications. With end-to-end data integrity checking, Cyclical Redundancy Check (CRC) is generated at the end points for quality of service. This applies to CHPID type FC.

Fibre Channel Protocol (FCP) transmission data checking: FICON Express8 continues the transmission data checking for an FCP channel (communicating with SCSI devices) with its full-fabric capability. FCP performs intermediate data checking for each leg of the transmission. This applies to CHPID type FCP.

FICON Express8 10KM LX and SX: The System z10 servers continue to support your current fiber optic cabling environments with the introduction of FICON Express8.

  1. FICON Express8 10KM LX (#3325), with four channels per feature, is designed to support unrepeated distances up to 10 kilometers (6.2 miles) over 9 micron single mode fiber optic cabling without performance degradation. To avoid performance degradation at extended distance, FICON switches or directors (for buffer credit provision) or Dense Wavelength Division Multiplexers (for buffer credit simulation) may be required.
  2. FICON Express8 SX (#3326), with four channels per feature, is designed to support 50 or 62.5 micron multimode fiber optic cabling.

For details regarding the unrepeated distances for FICON Express8 10KM LX and FICON Express8 SX, refer to System z Planning for Fiber Optic Links (GA23-0367), available in the Library section of Resource Link at planned availability of the System z10 servers.

www.ibm.com/servers/resourcelink

All channels on a single FICON Express8 feature are of the same type - 10KM LX or SX.

Both features support small form factor pluggable optics (SFPs) with LC Duplex connectors. The optics continue to permit each channel to be individually serviced in the event of a fiber optic module failure.

The FICON Express8 features, designed for connectivity to servers, switches, directors, disks, tapes, and printers, can be defined as:

  • Native FICON, zHPF, and FICON channel-to-channel (CTC) (CHPID type FC)
  • Fibre Channel Protocol (CHPID type FCP for communication with SCSI devices).

The FICON Express8 features are exclusive to z10 EC and z10 BC servers. Refer to the Software requirements section for operating system support for CHPID types FC and FCP.

Cleaning discipline for FICON Express8 fiber optic cabling

With the introduction of 8 Gbps link data rates, it is even more critical to ensure your fiber optic cabling infrastructure performs as expected. With proper fiber optic cleaning and maintenance, you can be assured that the "data gets through".

With 8 Gbps link data rates over multimode fiber optic cabling, link loss budgets and distances are reduced. Single mode fiber optic cabling is more "reflection sensitive". With high link data rates and single mode fiber optic cabling there is also less margin for error. The cabling is no longer scratch-tolerant and contaminants such as dust and oil can present a problem.

To keep the data flowing, proper handling of fiber trunks and jumper cables is critical as well as thorough cleaning of fiber optic connectors. Work with your data center personnel or IBM personnel to ensure you have fiber optic cleaning procedures in place.

Information regarding related Global Technology Services offerings is available at the following website:

http://www-935.ibm.com/services/us/index.wss/offering/its/a1027996

The Optimized Airflow Assessment for Cabling reviews existing data center cabling and prioritizes tactical plans across the data center to help increase system availability, adapt to changing technologies and transmission protocols and reduce energy-related cooling costs through optimized airflow.

http://www-935.ibm.com/services/us/index.wss/offering/its/a1028860

The Facilities Cabling Services - fiber transport system helps lower the operating cost of the data center, supports the highest level of availability for an IT infrastructure and allows the latest technologies and transmission protocols to be transported, while reducing clogs of unstructured cabling under floor tiles.

If you need further support or assistance on this matter please send an e-mail to cabling@us.ibm.com with your request.

FICON Express4 - 1, 2, or 4 Gbps:

  • Offers two unrepeated distance options (4 kilometer and 10 kilometer) when using single-mode fiber optic cabling
  • Supports a 4 Gbps link data rate with auto-negotiation to 1 or 2 Gbps for synergy with existing switches, directors, and storage devices

The FICON Express4 features have two modes of operation designed for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) traffic (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol traffic (CHPID type FCP) in the z/VM, z/VSE, and Linux on System z environments

Choose the FICON features that best meet your business requirements

To meet the demands of your Storage Area Network (SAN), provide granularity, facilitate redundant paths, and satisfy your infrastructure requirements, there are four features from which to choose.

                                                        Channels
 Feature                     FC #   Infrastructure      per feature
 ------------------------    ----   --------------      -----------
 FICON Express8 10KM LX      3325   Single mode fiber      4
 FICON Express8 SX           3326   Multimode fiber        4
 FICON Express4-2C 4KM LX    3323   Single mode fiber      2
 FICON Express4-2C SX        3318   Multimode fiber        2
 

Choose the features that best meet your granularity, fiber optic cabling, and unrepeated distance requirements.

If you have a requirement for:

  • Two FICON channels, select feature #3318 (SX) or #3323 (LX)
  • A maximum of four FICON channels, you may choose to order two FICON Express4-2C features, with each of the features in a separate I/O domain for high availability
  • A maximum of six FICON channels, you may choose to order one FICON Express8 four-channel feature and one FICON Express4-2C feature
  • A mix of SX (multimode) and LX (single mode) fiber optic cabling, you may choose to order the FICON Express4-2C 4KM LX feature to satisfy your single mode fiber optic cabling requirements, and order the FICON Express8 SX four-channel feature for your multimode fiber optic cabling requirements
  • Eight or more channels - only order the FICON Express4-2C feature if connectivity to FICON control units cannot be spread over two I/O domains for high availability using only FICON Express8 four-channel features

Effective October 27, 2009, the following features are withdrawn from marketing and cannot be ordered. They have been replaced by the FICON Express8 10KM LX and FICON Express8 SX features.

                                                        Channels
 Feature                     FC #   Infrastructure      per feature
 ----------------------      ----   -----------------   -----------
 FICON Express4 10KM LX      3321   Single mode fiber      4
 FICON Express4 4KM LX       3324   Single mode fiber      4
 FICON Express4 SX           3322   Multimode fiber        4
 

Note: A 4KM LX transceiver is designed to interoperate with a 10KM LX transceiver.

Refer to the Standards section for the characteristics of each of the features.

Note: The ANSI Fibre Channel Physical Interface (FC-PI-2) standard defines 10 kilometer (km) transceivers and 4 km transceivers when using 9 micron single-mode fiber optic cabling. IBM supports these FC-PI-2 variants.

IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).

The FICON Express4 features have Small Form Factor Pluggable (SFP) optics to permit each channel to be individually serviced in the event of a fiber optic module failure. The traffic on the other channels on the same feature can continue to flow if a channel requires servicing.

All channels on a single FICON Express4 feature are of the same type - 4KM LX, 10KM LX, or SX. You may carry your current FICON Express2 and FICON Express features (#3319, #3320, #2319, #2320) forward to System z10 BC.

Refer to the Software requirements section for operating system support for CHPID types FC and FCP.

FICON Express2 and FICON Express: Your current FICON Express2 features (1 or 2 Gbps link data rate) can be carried forward to z10 BC. If you have FICON Express features (1 Gbps link data rate) you can also carry them forward to z10 BC. FICON Express LX (#2319) can be defined as CHPID type FCV (FICON bridge) to allow communication with ESCON control units using the ESCON Director Model 5 with the bridge feature. Migration to native FICON is encouraged. The ESCON Director Model 5 was withdrawn from marketing December 31, 2004.

Fiber Quick Connect for FICON LX environments: Fiber Quick Connect (FQC), an optional feature on z10 BC, is offered for all FICON LX (single-mode fiber) channels, in addition to the current support for ESCON (62.5 micron multimode fiber) channels. FQC is designed to significantly reduce the amount of time required for on-site installation and setup of fiber optic cabling. FQC facilitates adds, moves, and changes of ESCON and FICON LX fiber optic cables in the data center, and may reduce fiber connection time by up to 80%.

FQC is for factory installation of Fiber Transport System (FTS) fiber harnesses for connection to channels in the I/O drawer. FTS fiber harnesses enable connection to FTS direct-attach fiber trunk cables from IBM Global Technology Services.

FQC, coupled with FTS, is a solution designed to help minimize disruptions and to isolate fiber cabling activities away from the active system as much as possible.

IBM provides the direct-attach trunk cables, patch panels, and Central Patching Location (CPL) hardware, as well as the planning and installation required to complete the total structured connectivity solution. An ESCON example: Four trunks, each with 72 fiber pairs, can displace up to 240 fiber optic jumper cables, the maximum quantity of ESCON channels in one I/O drawer. This significantly reduces fiber optic jumper cable bulk.

At CPL panels you can select the connector to best meet your data center requirements. Small form factor connectors are available to help reduce the floor space required for patch panels.

CPL planning and layout is done prior to arrival of the server on-site using the default CHannel Path IDdentifier (CHPID) placement report, and documentation is provided showing the CHPID layout and how the direct-attach harnesses are plugged.

Note: FQC supports all of the ESCON channels and all of the FICON LX channels in the I/O drawer of the server.

IBM Site and Facilities Services: IBM Site and Facilities Services has a comprehensive set of scalable solutions to address IBM cabling requirements, from product-level to enterprise-level for small, medium, and large enterprises.

  • IBM Facilities Cabling Services - fiber transport system
  • IBM IT Facilities Assessment, Design, and Construction Services - optimized airflow assessment for cabling

Planning and installation services for individual fiber optic cable connections are available. An assessment and planning for IBM Fiber Transport System (FTS) trunking components can also be performed.

These services are designed to be right-sized for your products or the end-to-end enterprise, and to take into consideration the requirements for all of the protocols and media types supported on the System z10 BC, System z9, and zSeries (for example, ESCON, FICON, Coupling Links, OSA-Express) whether the focus is the data center, the Storage Area Network (SAN), the Local Area Network (LAN), or the end-to-end enterprise.

IBM Site and Facilities Services are designed to deliver convenient, packaged services to help reduce the complexity of planning, ordering, and installing fiber optic cables. The appropriate fiber cabling is selected based upon the product requirements and the installed fiber plant.

The services are packaged as follows:

  • Under IBM Facilities Cabling Services there is the option to provide IBM Fiber Transport System (FTS) trunking commodities (fiber optic trunk cables, fiber harnesses, panel-mount boxes) for connecting to the z10 BC, z10 EC, z9 EC, z9 BC, z990, and z890. IBM can reduce the cable clutter and cable bulk under the floor. An analysis of the channel configuration and any existing fiber optic cabling is performed to determine the required FTS trunking commodities. IBM can also help organize the entire enterprise. This option includes enterprise planning, new cables, fiber optic trunking commodities, installation, and documentation.
  • Under IBM IT Facilities Assessment, Design, and Construction Services there is the Optimized Airflow Assessment for Cabling option to provide you with a comprehensive review of your existing data center cabling infrastructure. This service provides an expert analysis of the overall cabling design required to help improve data center airflow for optimized cooling, and to facilitate operational efficiency through simplified change management.

Refer to the services section of Resource Link for further details. Access Resource Link at:

www.ibm.com/servers/resourcelink

HiperSockets - "Network in a box"

HiperSockets Layer 2 support - for flexible and efficient data transfer for IP and non-IP workloads: Now, the HiperSockets internal networks on System z10 BC can support two transport modes: Layer 2 (Link Layer) as well as the current Layer 3 (Network or IP Layer). Traffic can be Internet Protocol (IP) Version 4 or Version 6 (IPv4, IPv6), or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA). HiperSockets devices are now protocol-independent and Layer 3 independent. Each HiperSockets device has its own Layer 2 Media Access Control (MAC) address, which is designed to allow the use of applications that depend on the existence of Layer 2 addresses such as Dynamic Host Configuration Protocol (DHCP) servers and firewalls.

Layer 2 support can help facilitate server consolidation. Complexity can be reduced, network configuration is simplified and intuitive, and LAN administrators can configure and maintain the mainframe environment the same as they do a non-mainframe environment.

With support of the new Layer 2 interface by HiperSockets, packet forwarding decisions are now based upon Layer 2 information, instead of Layer 3 information. The HiperSockets device can perform automatic MAC address generation to allow uniqueness within and across logical partitions (LPARs) and servers. MAC addresses can also be locally administered. The use of Group MAC addresses for multicast is supported as well as broadcasts to all other Layer 2 devices on the same HiperSockets network. Datagrams are delivered only between HiperSockets devices that are using the same transport mode (Layer 2 with Layer 2 and Layer 3 with Layer 3). A Layer 2 device cannot communicate directly with a Layer 3 device in another LPAR.

A HiperSockets device can filter inbound datagrams by Virtual Local Area Network identification (VLAN ID, IEEE 802.1q), the Ethernet destination MAC address, or both. Filtering can help reduce the amount of inbound traffic being processed by the operating system, helping to reduce CPU utilization.
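The filtering rule described above can be sketched as follows. This is an illustrative model, not IBM code; the frame fields, argument names, and filter representation are assumptions for illustration only.

```python
def accept_frame(frame, vlan_filter=None, mac_filter=None):
    """Accept an inbound frame only if it passes the configured filters.

    frame: dict with 'vlan_id' (IEEE 802.1q VLAN ID, or None if untagged)
           and 'dest_mac' (Ethernet destination MAC address).
    vlan_filter: set of permitted VLAN IDs, or None for no VLAN filtering.
    mac_filter: set of permitted destination MACs, or None for no filtering.
    """
    if vlan_filter is not None and frame.get("vlan_id") not in vlan_filter:
        return False  # dropped before the OS sees it, reducing CPU load
    if mac_filter is not None and frame["dest_mac"] not in mac_filter:
        return False
    return True
```

Frames rejected here never reach the operating system, which is the mechanism by which filtering helps reduce CPU utilization.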

Analogous to the respective Layer 3 functions, HiperSockets Layer 2 devices can be configured as primary or secondary connectors or multicast routers. This is designed to enable the creation of high-performance and high-availability Link Layer switches between the internal HiperSockets network and an external Ethernet or to connect the HiperSockets Layer 2 networks of different servers.

HiperSockets Layer 2 support is exclusive to System z10 BC, supported by Linux on System z, and by z/VM for guest exploitation. Refer to the Software requirements section.

HiperSockets Multiple Write Facility for increased performance: HiperSockets performance has been enhanced to allow for the streaming of bulk data over a HiperSockets link between logical partitions (LPARs). The receiving LPAR can now process a much larger amount of data per I/O interrupt. This enhancement is transparent to the operating system in the receiving LPAR. HiperSockets Multiple Write Facility, with fewer I/O interrupts, is designed to reduce CPU utilization of the sending and receiving LPAR.

HiperSockets Multiple Write Facility is supported in the z/OS environment. Refer to the Software requirements section.

Local Area Network (LAN) connectivity - a new generation OSA-Express3 - a new family of LAN adapters: The third generation of Open Systems Adapter-Express (OSA-Express3) features have been introduced to help reduce latency and overhead, deliver double the port density of OSA-Express2, and provide increased throughput.

The OSA-Express3 features support the following environments:

CHPID    OSA-Express3
type      features                 Purpose/Traffic
 
OSC(1)   1000BASE-T   OSA-Integrated Console Controller (OSA-ICC)
                      TN3270E, non-SNA DFT to IPL CPCs and LPARs
                      Operating system console operations
 
OSD(1)   1000BASE-T   Queued Direct Input/Output (QDIO)
         GbE          TCP/IP traffic when Layer 3
         10 GbE       Protocol-independent when Layer 2
 
OSE(1)   1000BASE-T   Non-QDIO, SNA/APPN/HPR and/or TCP/IP passthru
 
OSN      1000BASE-T   OSA-Express for NCP
         GbE          Supports channel data link control (CDLC)
                      LPAR-to-LPAR communication exclusively; no
                      external communication
 

Note: (1) Software PTFs or a new release may be required (depending on CHPID type) to support all ports.

Choose the OSA-Express3 features that best meet your business requirements: To meet the demands of your applications, offer granularity, facilitate redundant paths, and satisfy your infrastructure requirements, there are seven features from which to choose. In the 10 GbE environment, Short Reach (SR) is being offered for the first time.

 Feature                     FC #   Infrastructure      Ports per
                                                        feature
 
OSA-Express3 GbE LX          3362   Single mode fiber      4
OSA-Express3 10 GbE LR       3370   Single mode fiber      2
 
OSA-Express3 GbE SX          3363   Multimode fiber        4
OSA-Express3 10 GbE SR       3371   Multimode fiber        2
OSA-Express3-2P GbE SX       3373   Multimode fiber        2
 
OSA-Express3 1000BASE-T      3367   Copper                 4
OSA-Express3-2P 1000BASE-T   3369   Copper                 2
 

Refer to the Standards section for the characteristics of each of the features.

OSA-Express3 for reduced latency and improved throughput: To help reduce latency and improve throughput, the OSA-Express3 features now have an Ethernet hardware data router; what was previously done in firmware (packet construction, inspection, and routing) is now performed in hardware. With the Ethernet hardware data router, there is now direct memory access, and packets flow directly from host memory to the LAN without firmware intervention. OSA-Express3 is also designed to help reduce the round-trip networking time between systems. Up to a 45% reduction in latency at the TCP/IP application layer has been measured.

The OSA-Express3 features are also designed to improve throughput for standard frames (1492 byte) and jumbo frames (8992 byte) to help satisfy the bandwidth requirements of your applications. Up to a 4x improvement has been measured (compared to OSA-Express2).

The above statements are based on OSA-Express3 performance measurements performed in a laboratory environment on a System z10 and do not represent actual field measurements. Results can vary.

Port density or granularity: The OSA-Express3 features have Peripheral Component Interconnect Express (PCI-E) adapters. The previous table identifies whether the feature has 2 or 4 ports for LAN connectivity. Select the density that best meets your business requirements. Doubling the port density on a single feature helps to reduce the number of I/O slots required for high-speed connectivity to the Local Area Network.

Note: The two port features (OSA-Express3-2P GbE SX, and OSA-Express3-2P 1000BASE-T) are exclusive to the z10 BC.

10 GbE cabling and connector: The OSA-Express3 10 GbE features support Long Reach (LR) using 9 micron single mode fiber optic cabling and Short Reach (SR) using 50 or 62.5 micron multimode fiber optic cabling. The connector is new; it is now the small form factor, LC Duplex connector. Previously the SC Duplex connector was supported for LR. The LC Duplex connector is common with FICON, ISC-3, and OSA-Express2 Gigabit Ethernet LX and SX.

The OSA-Express3 features are exclusive to System z10. Refer to the Software requirements section for the operating systems supported by each channel path identifier (CHPID) type.

OSA-Express3 support for OSA-Express for NCP: OSA-Express for Network Control Program (NCP), channel path identifier (CHPID) type OSN, is available for use with the OSA-Express3 GbE features as well as the OSA-Express3 1000BASE-T Ethernet features.

OSA-Express for NCP, supporting the channel data link control (CDLC) protocol, delivers connectivity between System z operating systems and IBM Communication Controller for Linux (CCL). CCL allows you to keep your business data and applications on the mainframe operating systems while moving NCP functions to Linux on System z.

CCL delivers a foundation to help enterprises simplify their network infrastructure while supporting traditional Systems Network Architecture (SNA) functions such as SNA Network Interconnect (SNI).

Communication Controller for Linux on System z (Program Number 5724-J38) is the solution for companies that want to help improve network availability by replacing token-ring networks and ESCON channels with an Ethernet network and integrated LAN adapters on System z10, OSA-Express3 or OSA-Express2 GbE or 1000BASE-T.

OSA-Express for NCP is supported in the z/OS, z/VM, z/VSE, TPF, z/TPF, and Linux on System z environments. Refer to the Software requirements section.

OSA-Express3 Ethernet features - Summary of benefits: OSA-Express3 10 GbE LR (single mode fiber), 10 GbE SR (multimode fiber), GbE LX (single mode fiber), GbE SX (multimode fiber), and 1000BASE-T (copper) are designed for use in high-speed enterprise backbones, for local area network connectivity between campuses, to connect server farms to z10, and to consolidate file servers onto z10. With reduced latency, improved throughput, and up to 96 ports of LAN connectivity (when all are 4-port features, 24 features per server), you can "do more with less."

The key benefits of OSA-Express3 compared to OSA-Express2 are:

  • Reduced latency (up to 45% reduction) and increased throughput (up to 4x) for applications
  • More physical connectivity to service the network and fewer required resources:
    • Fewer CHPIDs to define and manage
    • Reduction in the number of required I/O slots
    • Possible reduction in the number of I/O drawers
    • Double the port density of OSA-Express2
    • A solution to the requirement for more than 48 LAN ports (now up to 96 ports)

The OSA-Express3 features are exclusive to System z10. Refer to the Software requirements section for the operating systems supported by each channel path identifier (CHPID) type.

OSA-Express2 availability: OSA-Express2 Gigabit Ethernet and 1000BASE-T Ethernet continue to be available for ordering, for a limited time, if you are not yet in a position to migrate to the latest release of the operating system for exploitation of two ports per PCI-E adapter and if you are not resource-constrained.

Historical summary: Functions that continue to be supported by OSA-Express3 and OSA-Express2:

  • Queued Direct Input/Output (QDIO) - uses memory queues and a signaling protocol to directly exchange data between the OSA microprocessor and the network software for high-speed communication.
    • QDIO Layer 2 (Link layer) - for IP (IPv4, IPv6) or non-IP (AppleTalk, DECnet, IPX, NetBIOS, or SNA) workloads. Using this mode the Open Systems Adapter (OSA) is protocol-independent and Layer-3 independent. Packet forwarding decisions are based upon the Medium Access Control (MAC) address.
    • QDIO Layer 3 (Network or IP layer) - for IP workloads. Packet forwarding decisions are based upon the IP address. All guests share OSA's MAC address.
  • Jumbo frames in QDIO mode (8992 byte frame size) when operating at 1 Gbps (fiber or copper) and 10 Gbps (fiber).
  • 640 TCP/IP stacks per CHPID - for hosting more images.
  • Large send for IPv4 packets - for TCP/IP traffic and CPU efficiency, offloading the TCP segmentation processing from the host TCP/IP stack to the OSA-Express feature.
  • Concurrent LIC update - to help minimize the disruption of network traffic during an update; when properly configured, designed to avoid a configuration off or on (applies to CHPID types OSD and OSN).
  • Multiple Image Facility (MIF) and spanned channels - for sharing OSA among logical channel subsystems.

OSA-Express QDIO data connection isolation for the z/VM environment

Multi-tier security zones are fast becoming the network configuration standard for new workloads. Therefore, it is essential for workloads (servers and clients) hosted in a virtualized environment (shared resources) to be protected from intrusion or exposure of data and processes from other workloads.

With Queued Direct Input/Output (QDIO) data connection isolation you:

  • Have the ability to adhere to security guidelines and regulations, such as HIPAA, requiring network isolation between the operating system instances sharing physical network connectivity
  • Can establish security zone boundaries that have been defined by your network administrators
  • Have a mechanism to isolate a QDIO data connection (on an OSA port), ensuring that all internal OSA routing between the isolated QDIO data connection and all other sharing QDIO data connections is disabled. In this state, only external communications to and from the isolated QDIO data connection are allowed. If you choose to deploy an external firewall to control access between hosts on an isolated virtual switch and sharing LPARs, the firewall must be configured accordingly, and each individual host or LPAR must have a route added to its TCP/IP stack to forward local traffic to the firewall.

Internal "routing" can be disabled on a per QDIO connection basis. This support does not affect the ability to share an OSA-Express port. Sharing occurs as it does today, but the ability to communicate between sharing QDIO data connections may be restricted through the use of this support. You decide whether an operating system's or z/VM's Virtual Switch OSA-Express QDIO connection is to be non-isolated (default) or isolated.

Note: QDIO data connection isolation applies to the device statement defined at the operating system level. While an OSA-Express CHPID may be shared by an operating system, the data device is not shared.
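The isolation rule above can be modeled in a few lines. This is an illustrative sketch, not IBM code; the connection representation and return values are assumptions chosen to mirror the description.

```python
def route(src, dst):
    """Decide how traffic travels between two QDIO data connections
    sharing one OSA port. Each connection is a dict with an 'isolated'
    flag. 'external' means the traffic must leave via the LAN (for
    example, through an external firewall) rather than being routed
    internally by the OSA."""
    if src["isolated"] or dst["isolated"]:
        return "external"      # internal OSA routing is disabled
    return "internal"          # default: OSA routes between connections directly
```

Note that isolation on either endpoint is sufficient to force the external path, matching the statement that only external communications to and from an isolated connection are allowed.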

QDIO data connection isolation applies to the z/VM environment and to all of the OSA-Express3 and OSA-Express2 features (CHPID type OSD) on System z10, and to the OSA-Express2 features on System z9. Refer to the Software requirements section.

Coupling connectivity for Parallel Sysplex

Introducing long reach InfiniBand coupling links

Now, InfiniBand can be used for Parallel Sysplex coupling and STP communication at unrepeated distances up to 10 km (6.2 miles), and at even greater distances when attached to qualified optical networking solutions. InfiniBand coupling links supporting extended distance are referred to as 1x (one pair of fiber) IB-SDR or 1x IB-DDR.

  • Long reach 1x InfiniBand coupling links support single data rate (SDR) at 2.5 gigabits per second (Gbps) when connected to a DWDM capable of SDR.
  • Long reach 1x InfiniBand coupling links support double data rate (DDR) at 5 Gbps when connected to a DWDM capable of DDR.

Depending on the capability of the attached DWDM, the link data rate is automatically set to either SDR or DDR.
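The auto-negotiation rule amounts to a simple mapping, sketched below for illustration (the function name is an assumption, not an IBM interface):

```python
def infiniband_1x_rate_gbps(dwdm_supports_ddr):
    # Per the description above: the link data rate is automatically set
    # to DDR (5 Gbps) when the attached DWDM is DDR-capable, otherwise
    # to SDR (2.5 Gbps).
    return 5.0 if dwdm_supports_ddr else 2.5
```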

Long reach 1x InfiniBand coupling links utilize the Host Channel Adapter2 optical long reach fanout card (HCA2-O LR #0168). Like the 12x IB-SDR and DDR InfiniBand coupling link feature (HCA2-O fanout card #0163), the HCA2-O LR fanout card can also be used to exchange timekeeping messages for Server Time Protocol (STP).

This environment supports use of 9 micron single mode fiber optic cables with LC Duplex connectors, the same fiber optic cable you have been using with InterSystem Channel-3 (ISC-3).

There is no change to the Channel Path Identifier (CHPID). It remains CHPID type CIB whether 12x IB-SDR or DDR or 1x IB-SDR or DDR. HCA2-O LR fanout cards are exclusive to System z10 and are supported by z/OS and by z/VM to define, modify, and delete an InfiniBand coupling link, when z/VM is the controlling LPAR for dynamic I/O. Refer to the Software requirements section.

Five coupling link options: The z10 BC supports Internal Coupling channels (ICs), Integrated Cluster Bus-4 (ICB-4), InterSystem Channel-3 (ISC-3) (peer mode), and 12x and 1x InfiniBand (IFB) links for communication in a Parallel Sysplex environment.

  1. Internal Coupling Channels (ICs) can be used for internal communication between Coupling Facilities (CFs) defined in LPARs and z/OS images on the same server.

  2. Integrated Cluster Bus-4 (ICB-4) links are for short distances. ICB-4 links use 10 meter (33 feet) copper cables, of which 3 meters (10 feet) is used for internal routing and strain relief. ICB-4 is used to connect z10 BC-to-z10 BC, z10 EC, z9 EC, z9 BC, z990, and z890. Note: If connecting to a z9 BC or a z10 BC with ICB-4, those servers cannot be installed with the non-raised floor feature. Also, if the z10 BC is ordered with the non-raised floor feature, ICB-4 cannot be ordered.

  3. InterSystem Channel-3 (ISC-3) supports communication over unrepeated distances of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables and even greater distances with System z qualified optical networking solutions. ISC-3s are supported exclusively in peer mode (CHPID type CFP).

  4. 12x InfiniBand coupling links (12x IB-SDR or 12x IB-DDR) offer an alternative to ISC-3 in the data center and facilitate coupling link consolidation; physical links can be shared by multiple systems or CF images on a single system. The 12x IB links support distances up to 150 meters (492 feet) using industry-standard OM3 50 micron fiber optic cables.

  5. Long Reach 1x InfiniBand coupling links (1x IB-SDR or 1x IB-DDR) are an alternative to ISC-3 and offer greater distances with support for point-to-point unrepeated connections of up to 10 km (6.2 miles) using 9 micron single mode fiber optic cables. Greater distances can be supported with System z qualified optical networking solutions. Long reach 1x InfiniBand coupling links support the same sharing capability as the 12x InfiniBand version allowing one physical link to be shared across multiple CF images on a system.

Note: The InfiniBand link data rates do not represent the performance of the link. The actual performance is dependent upon many factors including latency through the adapters, cable lengths, and the type of workload. Specifically, with 12x InfiniBand coupling links, while the link data rate can be higher than that of ICB, the service times of coupling operations are greater, and the actual throughput is less.

Refer to the Coupling Facility Configuration Options whitepaper for a more specific explanation of when to continue using the current ICB or ISC-3 technology versus migrating to InfiniBand coupling links.

The whitepaper is available at:

http://www.ibm.com/systems/z/advantages/pso/whitepaper.html

Coupling Facility Control Code Level 16

Improved service time with Coupling Facility Duplexing enhancements: Prior to Coupling Facility Control Code (CFCC) Level 16, System-Managed Coupling Facility (CF) Structure Duplexing required two duplexing protocol exchanges to occur synchronously during processing of each duplexed structure request. CFCC Level 16 allows one of these protocol exchanges to complete asynchronously. This allows faster duplexed request service time, with more benefits when the Coupling Facilities are further apart, such as in a multi-site Parallel Sysplex.

List notification improvements: Prior to CFCC Level 16, when a shared queue (subsidiary list) changed state from empty to non-empty, the CF would notify ALL active connectors. The first one to respond would process the new message, but when the others tried to do the same, they would find nothing, incurring additional overhead.

CFCC Level 16 can help improve the efficiency of coupling communications for IMS Shared Queue and WebSphere MQ Shared Queue environments. The Coupling Facility notifies only one connector in a sequential fashion. If the shared queue is processed within a fixed period of time, the other connectors do not need to be notified, saving the cost of the false scheduling. If a shared queue is not read within the time limit, then the other connectors are notified as they were prior to CFCC Level 16.
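The Level 16 notification change can be sketched as follows. This is an illustrative model of the behavior described above, not CFCC code; the function and argument names are assumptions.

```python
def notify(connectors, queue_read_within_limit):
    """Return the connectors notified for one empty-to-non-empty
    transition of a shared queue.

    connectors: ordered list of active connector names.
    queue_read_within_limit: True if the first notified connector
    processed the new message before the fixed time limit expired."""
    if not connectors:
        return []
    notified = [connectors[0]]          # Level 16: notify one connector first
    if not queue_read_within_limit:
        notified += connectors[1:]      # timeout: fall back to notifying all,
                                        # as was done prior to CFCC Level 16
    return notified
```

In the common case only one connector is scheduled, which is the source of the savings: the "false scheduling" of connectors that would find an already-empty queue is avoided.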

When migrating CF levels, lock, list and cache structure sizes might need to be increased to support new function. For example, when you upgrade from CFCC Level 15 to Level 16 the required size of the structure might increase. This adjustment can have an impact when the system allocates structures or copies structures from one coupling facility to another at different CF levels.

The coupling facility structure sizer tool can size structures for you and takes into account the amount of space needed for the current CFCC levels.

Access the tool at:

http://www.ibm.com/servers/eserver/zseries/cfsizer/

CFCC Level 16 is exclusive to System z10 and is supported by z/OS and z/VM for guest exploitation. Refer to the Software requirements section.

Implementation Services for Parallel Sysplex: IBM Implementation Services for Parallel Sysplex CICS and WAS Enablement

IBM Implementation Services for Parallel Sysplex Middleware - CICS enablement consists of five fixed-price and fixed-scope selectable modules:

  1. CICS application review
  2. z/OS CICS infrastructure review (module 1 is a prerequisite for this module)
  3. CICS implementation (module 2 is a prerequisite for this module)
  4. CICS application migration
  5. CICS health check

IBM Implementation Services for Parallel Sysplex Middleware - WebSphere Application Server enablement consists of three fixed-price and fixed-scope selectable modules:

  1. WebSphere Application Server network deployment planning and design

  2. WebSphere Application Server network deployment implementation (module 1 is a prerequisite for this module)

  3. WebSphere Application Server health check

IBM Implementation Services for Parallel Sysplex DB2 data sharing

To assist with the assessment, planning, implementation, testing, and backup and recovery of a System z DB2 data sharing environment, IBM Global Technology Services made available the IBM Implementation Services for Parallel Sysplex Middleware - DB2 data sharing on February 26, 2008.

This DB2 data sharing service is designed for clients who want to:

  1. Enhance the availability of data

  2. Enable applications to make full use of all servers' resources

  3. Share application system resources to meet business goals

  4. Manage multiple systems as a single system from a single point of control

  5. Respond to unpredicted growth by quickly adding computing power to match business requirements without disruption

  6. Build on the current investments in hardware, software, applications, and skills while potentially reducing computing costs

The offering consists of six selectable modules; each is a stand-alone module that can be individually acquired. The first module is an infrastructure assessment module, followed by five modules which address the following DB2 data sharing disciplines:

  1. DB2 data sharing planning
  2. DB2 data sharing implementation
  3. Adding additional data sharing members
  4. DB2 data sharing testing
  5. DB2 data sharing backup and recovery

For more information on these services contact your IBM representative or refer to:

http://www.ibm.com/services/server

Server Time Protocol (STP)

STP messages: STP is a message-based protocol in which timekeeping information is transmitted between servers over externally defined coupling links. ICB-4, ISC-3, and InfiniBand coupling links can be used to transport STP messages.

Server Time Protocol enhancements: The following Server Time Protocol (STP) enhancements are available on the z10 EC, z10 BC, z9 EC, and z9 BC. The prerequisites are that you install STP feature #1021 and that the latest MCLs are installed for the applicable driver.

NTP client support: This enhancement addresses the requirements of customers who need to provide the same accurate time across heterogeneous platforms in an enterprise.

The STP design has been enhanced to include support for a Simple Network Time Protocol (SNTP) client on the Support Element. By configuring an NTP server as the STP External Time Source (ETS), the time of an STP-only Coordinated Timing Network (CTN) can track to the time provided by the NTP server, and maintain a time accuracy of 100 milliseconds.

Enhanced accuracy to an External Time Source: The time accuracy of an STP-only CTN has been improved by adding the capability to configure an NTP server that has a pulse per second (PPS) output signal as the ETS device. This type of ETS device is available worldwide from several vendors that provide network timing solutions.

STP has been designed to track to the highly stable, accurate PPS signal from the NTP server, and maintain an accuracy of 10 microseconds as measured at the PPS input of the System z server. A number of variables, such as the accuracy of the NTP server to its time source (GPS or radio signals, for example) and the cable used to connect the PPS signal, determine the ultimate accuracy of STP to Coordinated Universal Time (UTC).

In comparison, the IBM Sysplex Timer is designed to maintain an accuracy of 100 microseconds when attached to an ETS with a PPS output. If STP is configured to use a dial-out time service or an NTP server without PPS, it is designed to provide a time accuracy of 100 milliseconds to the ETS device.

For this enhancement, the NTP output of the NTP server has to be connected to the Support Element (SE) LAN, and the PPS output of the same NTP server has to be connected to the PPS input provided on the External Time Reference (ETR) card of the System z10 or System z9 server.

Continuous Availability of NTP servers used as External Time Source: Improved External Time Source (ETS) availability can now be provided if you configure different NTP servers for the Preferred Time Server (PTS) and the Backup Time Server (BTS). Only the PTS or the BTS can be the Current Time Server (CTS) in an STP-only CTN. Prior to this enhancement, only the CTS calculated the time adjustments necessary to maintain time accuracy. With this enhancement, if the PTS/CTS cannot access the NTP Server or the pulse per second (PPS) signal from the NTP server, the BTS, if configured to a different NTP server, may be able to calculate the adjustment required and propagate it to the PTS/CTS. The PTS/CTS in turn will perform the necessary time adjustment steering.

This avoids a manual reconfiguration of the BTS to be the CTS, if the PTS/CTS is not able to access its ETS. In an ETR network when the primary Sysplex Timer is not able to access the ETS device, the secondary Sysplex Timer takes over the role of the primary - a recovery action not always accepted by some customers. The STP design provides continuous availability of ETS while maintaining the special roles of PTS and BTS assigned by the customer.

The availability improvement is available when the ETS is configured as an NTP server or an NTP server using PPS.
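The availability rule described above can be summarized in a small decision sketch. This is illustrative only, not STP code; the function and argument names are assumptions.

```python
def who_computes_adjustment(cts_reaches_ets, bts_has_different_ets):
    """Return which server computes the time adjustment in an STP-only CTN.

    cts_reaches_ets: True if the PTS/CTS can access its NTP server (or its
    PPS signal). bts_has_different_ets: True if the BTS is configured to a
    different NTP server."""
    if cts_reaches_ets:
        return "PTS/CTS"   # normal case: the Current Time Server calculates
    if bts_has_different_ets:
        return "BTS"       # BTS calculates and propagates the adjustment to
                           # the PTS/CTS, which still performs the steering
    return None            # no ETS reachable; no adjustment can be computed
```

The key point the sketch captures is that the PTS/CTS keeps its role in either case; only the source of the calculated adjustment changes, so no manual reconfiguration of the BTS as CTS is needed.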

NTP Server on Hardware Management Console: Improved security can be obtained by providing NTP server support on the HMC. If an NTP server (with or without PPS) is configured as the ETS device for STP, it needs to be attached directly to the Support Element (SE) LAN. The SE LAN is considered by many users to be a private dedicated LAN to be kept as isolated as possible from the intranet or Internet.

Since the HMC is normally attached to the SE LAN, providing an NTP server capability on the HMC addresses the potential security concerns most users may have for attaching NTP servers to the SE LAN. The HMC, via a separate LAN connection, can access an NTP server available either on the intranet or Internet for its time source. Note that when using the HMC as the NTP server, there is no pulse per second capability available. Therefore, you should not configure the ETS to be an NTP server using PPS.

Enhanced STP recovery when Internal Battery Feature is in use: Improved availability can be obtained when power has failed for a single server (PTS/CTS), or when there is a site power outage in a multi-site configuration where the PTS/CTS is installed (the site with the BTS is a different site not affected by the power outage).

If an Internal Battery Feature (IBF) is installed on your System z server, STP now has the capability of receiving notification that customer power has failed and that the IBF is engaged. When STP receives this notification from a server that has the role of the PTS/CTS, STP can automatically reassign the role of the CTS to the BTS, thus automating the recovery action and improving availability.

STP configuration and time information saved across Power on Resets (POR) or power outages: This enhancement delivers system management improvements by saving the STP configuration across PORs and power failures for a single-server STP-only CTN. Previously, if the server was PORed or experienced a power outage, the time and the assignment of the PTS and CTS roles would have to be reinitialized. You will no longer need to reinitialize the time or reassign the role of PTS/CTS across POR or power outage events.

Note that this enhancement is also available on the z990 and z890 servers.

Application Programming Interface (API) to automate STP CTN reconfiguration: The concept of "a pair and a spare" has been around since the original Sysplex Couple Data Sets (CDSs). If the primary CDS becomes unavailable, the backup CDS would take over. Many sites have had automation routines bring a new backup CDS online to avoid a single point of failure. This idea is being extended to STP. With this enhancement, if the PTS fails and the BTS takes over as CTS, an API is now available on the HMC so you can automate the reassignment of the PTS, BTS, and Arbiter roles. This can improve availability by avoiding a single point of failure after the BTS has taken over as the CTS.

Prior to this enhancement, the PTS, BTS, and Arbiter roles had to be reassigned manually using the System (Sysplex) Time task on the HMC. For additional details on the API, please refer to System z Application Programming Interfaces, SB10-7030-11.

Additional information is available on the STP Web page:

http://www.ibm.com/systems/z/pso/stp.html

And from the following Redbooks available on the Redbooks website:

http://www.redbooks.ibm.com/
  • Server Time Protocol: Planning Guide, SG24-7280
  • Server Time Protocol: Implementation Guide, SG24-7281

Capacity on Demand

Capacity on Demand - Temporary Capacity: Just-in-time deployment of System z10 BC Capacity on Demand (CoD) is a radical departure from previous System z and zSeries servers. This new architecture allows:

  • Up to eight temporary records to be installed on the CPC and active at any given time
  • Up to 200 temporary records to be staged on the SE
  • Variability in the amount of resources that can be activated per record
  • The ability to control and update records independent of each other
  • Improved query functions to monitor the state of each record
  • The ability to add capabilities to individual records concurrently, eliminating the need for constant ordering of new temporary records for different user scenarios
  • Permanent Licensed Internal Code - Configuration Code (LIC-CC) upgrades to be performed while temporary resources are active

These capabilities allow you to access and manage processing capacity on a temporary basis, providing increased flexibility for on demand environments. The CoD offerings are built from a common Licensed Internal Code - Configuration Code (LIC-CC) record structure. These Temporary Entitlement Records (TERs) contain the information necessary to control which type of resource can be accessed and to what extent, how many times and for how long, and under what condition - test or real workload. Use of this information gives the different offerings their personality. Three temporary-capacity offerings are available:

Capacity Back Up (CBU): Temporary access to dormant processing units (PUs), intended to replace capacity lost within the enterprise due to a disaster. CP capacity or any and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF) can be added up to what the physical hardware model can contain, for up to 10 days for a test activation or 90 days for a true disaster recovery. Presently each CBU record comes with a default of five test activations. Additional test activations may be ordered in groups of 5, but a record cannot contain more than 15 test activations. Each CBU record provides the entitlement to these resources for a fixed period of time, after which the record is rendered useless. This time period can span from 1 to 5 years and is specified through ordering quantities of CBU years.
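The CBU record rules above (test activations and entitlement period) can be checked with a short validation sketch. This is a hedged illustration of the stated limits only, not an IBM ordering tool; the function name is an assumption.

```python
def valid_cbu_record(test_activations, cbu_years):
    """Check a CBU record against the stated limits: a default of 5 test
    activations, additional activations ordered in groups of 5 up to a
    maximum of 15, and an entitlement period of 1 to 5 CBU years."""
    ok_tests = test_activations in (5, 10, 15)   # 5 default, +5 groups, max 15
    ok_years = 1 <= cbu_years <= 5               # fixed entitlement period
    return ok_tests and ok_years
```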

CBU tests: Customers may now execute production workload during a CBU test provided that a) an amount of System z production workload capacity equivalent to the CBU upgrade is shut down or otherwise made unusable by the customer for the duration of the test, and b) the appropriate contracts are in place. All new CBU contract documents contain these new CBU test terms. Existing CBU customers will need to execute the IBM Customer Agreement Amendment for IBM System z Capacity Backup Upgrade Tests, form number Z125-8145.

Capacity for Planned Event (CPE): Temporary access to dormant PUs, intended to replace capacity lost within the enterprise due to a planned event such as a facility upgrade or system relocation. This is a new offering and is available only on the System z10. CPE is similar to CBU in that it is intended to replace lost capacity; however, it differs in its scope and intent. Where CBU addresses disaster recovery scenarios that can take up to three months to remedy, CPE is intended for short-duration events lasting up to three days. Each CPE record, once activated, gives you access to all dormant PUs on the machine, which can be configured in any combination of CP capacity or specialty engine types (zIIP, zAAP, SAP, IFL, ICF).

On/Off Capacity on Demand (On/Off CoD): Temporary access to dormant PUs, intended to augment the existing capacity of a given system. On/Off CoD helps you contain workload spikes that may exceed permanent capacity such that Service Level Agreements cannot be met and business conditions do not justify a permanent upgrade. An On/Off CoD record allows you to temporarily add CP capacity or any and all specialty engine types (zIIP, zAAP, SAP, IFL, ICF) up to the following limits:

  • The quantity of temporary CP capacity ordered is limited by the quantity of purchased CP capacity (permanently active plus unassigned).
  • The quantity of temporary IFLs ordered is limited by quantity of purchased IFLs (permanently active plus unassigned).
  • Temporary use of unassigned CP capacity or unassigned IFLs will not incur a hardware charge.
  • The quantity of permanent zIIPs plus temporary zIIPs cannot exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zIIPs cannot exceed the quantity of permanent zIIPs.
  • The quantity of permanent zAAPs plus temporary zAAPs cannot exceed the quantity of purchased (permanent plus unassigned) CPs plus temporary CPs, and the quantity of temporary zAAPs cannot exceed the quantity of permanent zAAPs.
  • The quantity of temporary ICFs ordered is limited by the quantity of permanent ICFs as long as the sum of permanent and temporary ICFs is less than or equal to 16.
  • The quantity of temporary SAPs ordered is limited by the quantity of permanent SAPs as long as the sum of permanent and temporary SAPs is less than or equal to 32.
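Taken together, the limits above amount to a set of simple inequalities. A hypothetical validation sketch (the function name and data shapes are assumptions for illustration, not an IBM API):

```python
def validate_onoff_cod_order(purchased_cps, purchased_ifls,
                             perm_ziips, perm_zaaps, perm_icfs, perm_saps,
                             temp):
    """Return a list of rule violations for a temporary-capacity order.

    `temp` is a dict with keys 'cp', 'ifl', 'ziip', 'zaap', 'icf', 'sap'
    giving the requested temporary engine quantities. "Purchased" means
    permanently active plus unassigned.
    """
    errors = []
    if temp['cp'] > purchased_cps:
        errors.append("temporary CP capacity exceeds purchased CP capacity")
    if temp['ifl'] > purchased_ifls:
        errors.append("temporary IFLs exceed purchased IFLs")
    # Permanent + temporary zIIPs <= purchased CPs + temporary CPs,
    # and temporary zIIPs <= permanent zIIPs (same rule for zAAPs).
    if (perm_ziips + temp['ziip'] > purchased_cps + temp['cp']
            or temp['ziip'] > perm_ziips):
        errors.append("zIIP limits exceeded")
    if (perm_zaaps + temp['zaap'] > purchased_cps + temp['cp']
            or temp['zaap'] > perm_zaaps):
        errors.append("zAAP limits exceeded")
    if temp['icf'] > perm_icfs or perm_icfs + temp['icf'] > 16:
        errors.append("ICF limits exceeded")
    if temp['sap'] > perm_saps or perm_saps + temp['sap'] > 32:
        errors.append("SAP limits exceeded")
    return errors
```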

Although the System z10 BC will allow up to eight temporary records of any type to be installed, only one temporary On/Off CoD record may be active at any given time. An On/Off CoD record may be active while other temporary records are active.

Management of temporary capacity through On/Off CoD is further enhanced through the introduction of resource tokens. For CP capacity, a resource token represents an amount of processing capacity that will result in 1 MSU of SW cost for 1 day - an MSU-day. For specialty engines, a resource token represents activation of 1 engine of that type for 1 day - an IFL-day, a zIIP-day or a zAAP-day. The different resource tokens are contained in separate pools within the On/Off CoD record. The customer, via the Resource Link ordering process, determines how many tokens go into each pool. Once On/Off CoD resources are activated, tokens will be decremented from their pools every 24 hours. The amount decremented is based on the highest activation level for that engine type during the previous 24 hours.
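The daily token decrement described above can be sketched as follows. This is a minimal illustration of the bookkeeping rule, which in reality is internal to the machine:

```python
def decrement_tokens(pools, peak_activation):
    """Decrement each token pool by the day's peak activation level.

    pools: dict mapping engine type -> remaining tokens (MSU-days for
           'cp', engine-days for specialty engines).
    peak_activation: dict mapping engine type -> highest level active
           during the previous 24 hours (MSUs for 'cp', engine count
           otherwise).
    """
    for engine, peak in peak_activation.items():
        # Pools never go negative; exhaustion is handled separately.
        pools[engine] = max(0, pools.get(engine, 0) - peak)
    return pools
```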

Resource tokens are intended to help customers bound the hardware costs associated with using On/Off CoD. The use of resource tokens is optional and they are available on either a prepaid or post-paid basis. When prepaid, the customer is billed for the total amount of resource tokens contained within the On/Off CoD record. When post-paid, the total billing against the On/Off CoD record is limited by the total amount of resource tokens contained within the record.

Resource Link offers an ordering wizard to help determine how many tokens you need to purchase for different activation scenarios. Resource tokens within an On/Off CoD record may also be replenished. For more information on the use and ordering of resource tokens, refer to the Capacity on Demand Users Guide, SC28-6871.

Capacity provisioning: An installed On/Off CoD record is a necessary prerequisite for automated control of temporary capacity through z/OS Capacity Provisioning. z/OS Capacity Provisioning allows you to set up rules defining the circumstances under which additional capacity should be provisioned in order to fulfill a specific business need. The rules are based on criteria, such as: a specific application, the maximum additional capacity that should be activated, time and workload conditions. This support provides a fast response to capacity changes and ensures sufficient processing power will be available with the least possible delay even if workloads fluctuate. See z/OS MVS Capacity Provisioning User's Guide (SA33-8299) for more information.
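A rule of the kind described might be evaluated as in this minimal sketch. The rule fields are invented for illustration and are not the z/OS Capacity Provisioning policy format:

```python
from datetime import datetime

def should_provision(rule, now, workload_pct, already_added):
    """Decide whether more temporary capacity should be activated.

    `rule` is an illustrative dict, e.g.:
      {'window': (9, 17),         # hours of day when the rule is active
       'workload_threshold': 90,  # trigger when utilization exceeds this
       'max_additional': 3}       # cap on engines added by this rule
    """
    in_window = rule['window'][0] <= now.hour < rule['window'][1]
    overloaded = workload_pct > rule['workload_threshold']
    headroom = already_added < rule['max_additional']
    return in_window and overloaded and headroom
```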

On/Off CoD Test: On/Off CoD allows for a no-charge test. No IBM charges are assessed for the test, including IBM charges associated with temporary hardware capacity, IBM software, or IBM maintenance. This test can be used to validate the processes to download, stage, install, activate, and deactivate On/Off CoD capacity nondisruptively. Each On/Off CoD-enabled server is entitled to only one no-charge test. This test may last up to a maximum duration of 24 hours commencing upon the activation of any capacity resources contained in the On/Off CoD record. Activation levels of capacity may change during the 24-hour test period. The On/Off CoD test automatically terminates at the end of the 24-hour period. In addition to validating the On/Off CoD function within your environment, you may choose to use this test as a training session for your personnel who are authorized to activate On/Off CoD.

SNMP API (Simple Network Management Protocol Application Programming Interface) enhancements have also been made for the new Capacity On Demand features. More information can be found in the System z10 Capacity On Demand User's Guide, SC28-6871.

Capacity on Demand - Permanent Capacity

Customer Initiated Upgrade (CIU) facility: When your business needs additional capacity quickly, Customer Initiated Upgrade (CIU) is designed to deliver it. CIU is designed to allow you to respond to sudden increased capacity requirements by requesting a System z10 BC PU and/or memory upgrade via the Web, using IBM Resource Link, and downloading and applying it to your System z10 BC server using your system's Remote Support connection. Further, with the Express option on CIU, an upgrade may be made available for installation as fast as within a few hours after order submission.

Permanent upgrades: Orders (MESs) of all PU types and memory for System z10 BC servers that can be delivered by Licensed Internal Code - Configuration Code (LIC-CC) are eligible for CIU delivery. CIU upgrades may be performed up to the maximum available processor and memory resources on the installed server, as configured. While capacity upgrades to the server itself are concurrent, your software may not be able to take advantage of the increased capacity without performing an Initial Programming Load (IPL).

Plan Ahead Memory

Future memory upgrades can now be preplanned to be nondisruptive. The preplanned memory feature will add the necessary physical memory required to support target memory sizes. The granularity of physical memory in the z10 design is more closely associated with the granularity of logical, entitled memory, leaving little room for growth. If you anticipate an increase in memory requirements, a "target" logical memory size can now be specified in the configuration tool along with a "starting" logical memory size. The configuration tool will then calculate the physical memory required to satisfy this target memory. Should additional physical memory be required, it will be fulfilled with the preplanned memory features.

The preplanned memory feature is offered in 4 gigabyte (GB) increments. The quantity assigned by the configuration tool is the number of 4 GB blocks necessary to increase the physical memory from that required for the "starting" logical memory to the physical memory required for the "target" logical configuration. Activation of any preplanned memory requires the purchase of preplanned memory activation features. One preplanned memory activation feature (#1992) is required for each preplanned memory feature (#1991). You now have the flexibility to activate memory to any logical size offered between the starting and target size.
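The feature-count arithmetic above works out as in this simplified sketch, which assumes for illustration that physical memory tracks logical memory one-for-one; the real mapping is performed by the configuration tool:

```python
import math

def preplanned_memory_features(starting_gb, target_gb, increment_gb=4):
    """Number of 4 GB preplanned memory features (#1991) needed to grow
    from the "starting" to the "target" logical memory size.

    Each feature activated later also requires one preplanned memory
    activation feature (#1992).
    """
    return math.ceil(max(0, target_gb - starting_gb) / increment_gb)
```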

Increased flexibility with z/VM-mode partitions: System z10 BC allows you to define a z/VM-mode partition (LPAR) containing a mix of processor types including CPs and specialty engines - IFLs, zIIPs, zAAPs, and ICFs. With z/VM V5.4 support, this new capability increases flexibility and simplifies systems management by allowing z/VM to manage guests to operate Linux on System z on IFLs, to operate z/VSE and z/OS on CPs, to offload z/OS system software overhead, such as DB2 workloads, on zIIPs, and to offer an economical Java execution environment under z/OS on zAAPs, all in the same VM LPAR.

HMC system support: The new functions available on the Hardware Management Console (HMC) version 2.10.1 as described apply exclusively to the z10 BC. However, HMC version 2.10.1 will continue to support the systems shown in the following table.

The 2.10.1 HMC will continue to support up to two 10/100 Mbps Ethernet LANs. Token Ring LANs are not supported. The 2.10.1 HMC applications have been updated to support HMC hardware without a diskette drive. DVD-RAM, CD-ROM, and/or USB flash memory drive media will be used.

 Family           Machine Type    Firmware Driver       SE Version
 
 z10 BC             2098             76                 2.10.1
 z10 EC             2097             76                 2.10.1
 z9 BC              2096             67                 2.9.2
 z9 EC              2094             67                 2.9.2
 z890               2086             55                 1.8.2
 z990               2084             55                 1.8.2
 z800               2066             3G                 1.7.3
 z900               2064             3G                 1.7.3
 9672 G6            9672/9674        26                 1.6.2
 9672 G5            9672/9674        26                 1.6.2
 

Internet Protocol, Version 6 (IPv6)

HMC version 2.10.1 and Support Element (SE) version 2.10.1 can now communicate using IP Version 4 (IPv4), IP Version 6 (IPv6), or both. It is no longer necessary to assign a static IP address to an SE if it only needs to communicate with HMCs on the same subnet. An HMC and SE can use IPv6 link-local addresses to communicate with each other.

HMC/SE support is addressing the following requirements:

  • The availability of addresses in the IPv4 address space is becoming increasingly scarce.
  • The demand for IPv6 support is high in Asia/Pacific countries since many companies are deploying IPv6.
  • The U.S. Department of Defense and other U.S. government agencies are requiring IPv6 support for any products purchased after June 2008.

More information on the U.S. government requirements can be found at:

http://www.whitehouse.gov/omb/memoranda/fy2005/m05-22.pdf

http://www.whitehouse.gov/omb/egov/documents/IPv6_FAQs.pdf

HMC/SE Console Messenger: On systems prior to z9, the remote browser capability was limited to Platform Independent Remote Console (PIRC), with a very small subset of functionality. Full functionality via Desktop On-Call (DTOC) was limited to one user at a time; it was slow, and was rarely used.

With System z9, full functionality to multiple users was delivered with a fast Web browser solution. You liked this, but requested the ability to communicate to other remote users.

There is now a new Console Messenger task that offers basic messaging capabilities to allow system operators or administrators to coordinate their activities. The new task may be invoked directly, or via a new option in Users and Tasks. This capability is available for HMC and SE local and remote users, permitting interactive plain-text communication between two users and also allowing a user to broadcast a plain-text message to all users. This feature is a limited instant messenger application and does not interact with other instant messengers.

HMC z/VM Tower Systems Management Enhancements: Building upon the previous z/VM Systems Management support from the Hardware Management Console (HMC), which offered management support for already defined virtual resources, new HMC capabilities are being made available allowing selected virtual resources to be defined. In addition, further enhancements have been made for managing defined virtual resources.

Enhancements are designed to deliver out-of-the-box integrated graphical user interface-based (GUI-based) management of selected parts of z/VM. This is especially targeted to deliver ease-of-use for enterprises new to System z. You can more seamlessly perform hardware and selected operating system management using the HMC Web browser-based user interface.

Support for HMC z/VM tower systems management enhancements is exclusive to z/VM 5.4 and the IBM System z10.

Enhanced installation support for z/VM using the HMC: HMC version 2.10.1 along with Support Element (SE) version 2.10.1 on z10 BC and corresponding z/VM 5.4 support, will now give you the ability to install Linux on System z in a z/VM virtual machine using the HMC DVD drive. This new function does not require an external network connection between z/VM and the HMC, but instead, uses the existing communication path between the HMC and SE.

Note: This support is intended for customers who have no alternative, such as a LAN-based server, for serving the DVD contents for Linux installations. The elapsed time for installation using the HMC DVD drive can be an order of magnitude, or more, longer than the elapsed time for LAN-based alternatives.

Using the legacy support and the z/VM 5.4 support, z/VM can be installed in an LPAR and both z/VM and Linux on System z can be installed in a virtual machine from the HMC DVD drive without requiring any external network setup or a connection between an LPAR and the HMC.

This addresses security concerns and additional configuration efforts using the only other previous solution of the external network connection from the HMC to the z/VM image.

Support for the enhanced installation support for z/VM using the HMC is exclusive to z/VM 5.4 and the IBM System z10.

Dynamic Enhancement:

The following feature is available without requiring preplanning.

  • Dynamic Add Logical CPs without Preplanning
    • Previously, the Image Profile defined the initial and reserved values for the different processor types for that partition. If those values were not defined prior to partition activation/IPL, they could only be updated by reactivating that partition (including reIPL).
    • The HMC/SE now offers a task called Logical Processor Add which can:
      • Increase the "reserved" value for a given processor type (for example, CP, zAAP, zIIP, IFL)
      • Add a new processor type which is not in use yet for that partition
      • Increase the "initial" value for a given processor type
      • Change Running System and/or Save to Profiles

Enhanced Driver Maintenance (EDM)

There are several reliability, availability, and serviceability (RAS) enhancements that have been made to the HMC/SE based on the feedback from the System z9 Enhanced Driver Maintenance field experience.

  • Change to better handle intermittent customer network issues
  • EDM performance improvements
  • EDM user interface features to allow for customer and service personnel to better plan for the EDM
  • A new option to check all licensed internal code which can be executed in advance of the EDM preload or activate.

Change management

There were several enhancements made on the HMC/SE which provide more information for customers and service personnel as well as provide more flexibility.

The Query Channel/Crypto Configure Off/On Pending task will provide specific details on currently active Licensed Internal Code (LIC) change level and the levels which will be active after the Configure Off/On. In addition, the user will have the ability to determine which, if any, channels or Crypto Express2 features will require a configure off/on for a future LIC update process.

Customers and service personnel will be given the ability to redefine OSA-Express3, OSA-Express2, or Crypto Express2 LIC updates to be Configure Off/On if they prefer the update to be applied to one port or Crypto at a time rather than all at once for the same port or Crypto type.

The System Information task has been updated to explicitly show any conditions where a LIC change update may not be truly active until an additional exception action is taken. Such conditions are generally exception cases, but the information is now readily available in this one task.

Power/thermal monitoring

On System z9, IBM introduced power/thermal monitoring support with the HMC System Activity Display (SAD) task providing power consumption and air input temperature. On System z10, the HMC will now provide support for the Active Energy Manager (AEM) which will display power consumption/air input temperature as well as exhaust temperature. AEM will also provide some limited status/configuration information which might assist in explaining changes to the power consumption. AEM is exclusive to System z10.

Panel wizards

Panel wizards were added to the HMC and SE in order to improve the user interface. The purpose of the wizards is to guide users through the panel options, provide recommended defaults where possible, and provide easier understanding of input and change of options. The following wizards were added. (Note that the existing tasks which the wizard provides are still available with the enhancement.)

  • Manage User Wizard - provides a wizard for the following tasks:
    • User Profiles
    • Customize User Controls
    • Password Profiles
  • Image Profile Wizard
    • Initial stage of a wizard for Customizing Image Activation Profiles. Further enhancements are being investigated for the future.

z/VM image mode

On System z9, the supported Activation Image Profile Modes included the following. (Note that all of these modes have varying rules on what combination of processors and shared versus dedicated processors are allowed.)

  • ESA/390 - Supports CPs, zAAPs, and zIIPs
  • ESA/390 TPF - Supports CPs
  • Coupling Facility - Supports CPs and ICFs
  • Linux only - Supports CPs and IFLs

System z10 supports an additional Activation Image Profile mode called z/VM. This image mode will support CPs, zAAPs, zIIPs, ICFs, and IFLs. It will allow all the varying rules and processor combinations in the above modes. The only requirement is that z/VM is the base operating system in that image. This allows for easier Image Profile planning for whatever guest operating systems may run in that z/VM image. This also allows running different operating systems within that z/VM image for different purposes or processor requirements.

The key advantage of this support is this: for environments where users need to use z/VM 5.4 to host Linux and z/OS or z/VSE guests in the same "box," they will not have to artificially separate the management of those two environments if they do not want to. They can manage one z/VM image to host the entire collection of guests they want to deploy.

SNMP API enhancements

In addition to the Capacity On Demand Simple Network Management Protocol Application Programming Interface (SNMP API) new features, the following SNMP API enhancements are also available:

  • Query Active Licensed Internal Code Change Levels API
    • Returns Active Licensed Internal Code Change Levels
    • Also returns whether any exception conditions exist for Channel/Crypto Configure Off/On, Coupling Facility Control Code (CFCC) Reactivation, or Activation on next Power On Reset/System Activate.
  • Disabled Wait API Event
    • Previously, SNMP Hardware Message Events had to be parsed for text of Hard Event, and there was no automation interface to obtain the Program Status Word (PSW).
    • This new SNMP Disabled Wait Event contains the PSW, Image Name, Partition ID, CPC Serial Number, and CPC Name, and will eliminate any need to parse text of Hardware Message Events.
  • Query PSW API
    • API support for obtaining the contents of the PSW
    • Only valid if the image is in a not-operating state.
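The structured Disabled Wait event described above might be modeled as follows. The field names are assumptions based on the description, not the actual SNMP MIB definitions:

```python
from dataclasses import dataclass

@dataclass
class DisabledWaitEvent:
    """Illustrative shape of the data carried by the Disabled Wait
    API event, which removes the need to parse Hardware Message text."""
    psw: str            # Program Status Word at the time of the wait
    image_name: str
    partition_id: str
    cpc_serial: str
    cpc_name: str

def describe(event):
    """Build an operator message directly from the structured event,
    with no text parsing of Hardware Message events."""
    return (f"Disabled wait in image {event.image_name} "
            f"(partition {event.partition_id}) on {event.cpc_name}: "
            f"PSW={event.psw}")
```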

CIM automation APIs

The HMC will support Common Information Model (CIM) as an additional systems management API with functionality similar to the SNMP API. The capabilities include attribute query, operational management functions for System z, CPCs, images, Activation Profiles, indications (SNMP Trap equivalent), Capacity on Demand, and processors.

CIM is defined by the Distributed Management Task Force: www.dmtf.org. The HMC object model extends the DMTF schema version 2.15. The Object Manager is OpenPegasus (V2.5.2): www.openpegasus.org. The HMC also conforms to additional DMTF profiles related to Virtual System, System Virtualization, and Software Inventory.

Many toolkits exist to support client scripting. OpenPegasus comes with a C/C++ client toolkit, and the Standards Based Linux Instrumentation for Manageability (SBLIM) Java Client (www.sblim.org) includes other useful tools, including a Web-based class browser.

The IBM publication Common Information Model (CIM) Management Interface SB10-7154 provides more information on System z10 CIM support.

Up to 30 Logical Partitions

The z10 BC supports 30 Logical Partitions (LPARs) and provides the ability to define up to two Logical Channel Subsystems (LCSS). Each LCSS is capable of supporting up to 256 CHPID definitions and 15 Logical Partitions. With Processor Resource/Systems Manager (PR/SM) and Multiple Image Facility (MIF), you can share ESCON and FICON channels, coupling channels, HiperSocket CHPIDs, and OSA ports across LPARs. All except ESCON channels can span to LPARs defined in different Logical Channel Subsystems.
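The partition and channel-subsystem limits above reduce to simple arithmetic, sketched here for illustration only:

```python
# Limits described for the z10 BC: up to two Logical Channel Subsystems,
# each supporting up to 15 LPARs (2 x 15 = 30) and up to 256 CHPIDs.
MAX_LCSS = 2
LPARS_PER_LCSS = 15
CHPIDS_PER_LCSS = 256

def config_within_limits(lcss_count, lpars_per_lcss, chpids_per_lcss):
    """Check a proposed configuration against the published limits.

    `lpars_per_lcss` and `chpids_per_lcss` are per-LCSS counts.
    """
    return (lcss_count <= MAX_LCSS
            and all(n <= LPARS_PER_LCSS for n in lpars_per_lcss)
            and all(n <= CHPIDS_PER_LCSS for n in chpids_per_lcss))
```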

Support of up to 30 LPARs is supported by z/OS and z/OS.e, z/VSE, z/VM, z/TPF, TPF, and Linux on System z9. e

HiperDispatch

A System z10 exclusive, HiperDispatch represents a cooperative effort between the z/OS operating system and Processor Resource/Systems Manager (PR/SM) on System z10 hardware.

  • Work may be dispatched across fewer logical processors, thereby reducing the multi-processor (MP) effects and potentially lowering the interference among multiple partitions.
  • Specific z/OS tasks may be dispatched to a small subset of logical processors. PR/SM will tie to the same physical processors thus improving the hardware cache re-use and locality of reference characteristics such as reducing the rate of cross-book communication.

The cooperation between the z10 hardware and the z/OS operating system to increase efficiency will provide minimal, if any, benefit on the z10 BC due to the limited number of processors and therefore lower MP effects inherent in the z10 BC design.

Refer to:

http://www.ibm.com/support/techdocs/atsmastr.nsf/Web/Techdocs

and search on the keyword HIPERDISPATCH for more specific information related to HiperDispatch.

Refer to the Software requirements section.

LPAR Dynamic PU Reassignment

System configuration has been enhanced to optimize the CPU-to-book allocation of physical processors (processor units - PUs) dynamically. The initial allocation of customer-usable PUs to physical books can change dynamically to better suit the actual logical partition configurations that are used on the server. Swapping of specialty engines and Central Processors (CPs - general purpose processors) with each other can now occur, as the system attempts to "pack" logical partition configurations into physical configurations that span the least number of books. The effect of this can be observed in dedicated as well as shared partitions that utilize HiperDispatch. Dynamic PU reassignment will provide minimal, if any, benefit on the z10 BC due to the limited number of processors and the hardware infrastructure inherent in the z10 BC design.

Universal Lift Tool / Ladders

The Universal Lift Tool / Ladders feature (#3759) is designed to provide users with enhanced system availability benefits by improving the service and upgrade times for larger, heavier devices. This feature includes a custom lift / lower mechanism specifically designed for use with System z10 frames, allowing these procedures to be accomplished more quickly and with fewer people. It is recommended that one of these features be obtained for each customer account / datacenter.

IBM Lifecycle Extension for z/OS V1.7

z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2010. With the Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC require later releases of z/OS. For the complete list of software support, see the PSP buckets and the Software requirements section.

Non-raised floor environment

An IBM System z10 Business Class (z10 BC) feature may be ordered to allow use of the z10 BC in a non-raised floor environment. This capability may help ease the cost of entry into the z10 BC; a raised floor may not be necessary for some infrastructures.

The non-raised floor z10 BC implementation is designed to meet all electromagnetic compatibility standards. Feature #7998 must be ordered if the z10 BC is to be used in a non-raised floor environment. A Bolt-down kit (#7992) is also available for use with a non-raised floor z10 BC, providing frame stabilization and bolt-down hardware to help secure a frame to a non-raised floor. Bolt-down kit (#7992) may be ordered for initial box or MES starting January 28, 2009.

Balanced Power Plan Ahead

Phase currents are minimized when they are balanced among the three input phases. Balanced Power Plan Ahead (#3002) is designed to allow you to order the full complement of bulk power regulators (BPRs) on any configuration, to help ensure that the configuration will be in a balanced power environment. The addition of BPRs on an already installed System z10 BC will be disruptive.

You must have a three phase line cord (#8983), (#8984), (#8986), (#8987), (#8988), or (#8998) when using Balanced Power Plan Ahead (#3002).

Bolt-down kits

Bolt-down kits are available for use with the z10 BC for the purpose of physically securing your system in place. The need for such a kit is determined by installation particulars, but it may be used to help ensure that the equipment stays in place in the event of a shock or seismic disturbance.

Three varieties of Bolt-Down kits are available:

(#7990) - Bolt-Down Kit, High-Raised Floor 2098

(No Longer Available as of June 30, 2012)

This feature provides frame stabilization and bolt-down hardware to help secure a frame to a concrete floor beneath an 11.75-inch to 16.0-inch (298 mm to 405 mm) raised floor.

(#7991) - Bolt-Down Kit, Low-Raised Floor 2098

(No Longer Available as of June 30, 2012)

This feature provides frame stabilization and bolt-down hardware to help secure a frame to a concrete floor beneath a 9.25-inch to 11.75-inch (235 mm to 298 mm) raised floor.

(#7992) - Bolt-Down Kit, Non-Raised Floor 2098

(No Longer Available as of June 30, 2012)

This feature provides frame stabilization and bolt-down hardware to help secure a frame to a non-raised floor.

Accessibility by people with disabilities

A U.S. Section 508 Voluntary Product Accessibility Template (VPAT) containing details on accessibility compliance can be requested at:

http://www.ibm.com/able/product_accessibility/index.html


Section 508 of the US Rehabilitation Act

System z10 Business Class servers are capable on delivery, when used in accordance with IBM's associated documentation, of satisfying the applicable requirements of Section 508 of the Rehabilitation Act of 1973, 29 U.S.C. Section 794d, as implemented by 36 C.F.R. Part 1194, provided that any Assistive Technology used with the Product properly interoperates with it.
Back to topBack to top
 

Product positioning

The future runs on System z. IBM's System z10 BC delivers a new face for midrange enterprise computing that gives you a whole new world of capabilities to run modern applications. This competitively priced server delivers unparalleled qualities of service to help manage growth and reduce cost and risk in your business, making it ideally suited as the cornerstone of your new enterprise data center.

With a midrange focus, System z10 BC delivers peace of mind when it comes to advanced availability and affordability through a low entry point with very granular scalability, offering 130 different capacities to grow as you do. The System z10 BC is designed to provide up to 1.5 times the total system capacity for general purpose processing of the z9 BC Model S07, with nearly two times the available memory of the z9 BC.

If you are not currently utilizing System z, you are missing out on the capabilities the z10 BC provides: greater than 10,000 secure Web transactions per second for your business, up to 120 GB of memory - two times the previous generation, expanding to 248 GB of memory by June 2009 - and up to 10 customizable Processor Units (PUs), which deliver the kind of computing horsepower you need for varying workloads. Whether your requirements are to run Online Transaction Processing, Data Serving, Batch Processing, Web Serving, Application Development, or all at the same time, the System z10 BC supports five different operating systems for unmatched flexibility. Industry-leading virtualization lets you do it all at the same time with resource sharing for further cost savings. And a new host bus interface uses InfiniBand with a link data rate of 6 GBps, enough to support the full capacity and processing power of the new IBM System z10 BC.

With a design for affordable scalability, System z10 BC will continue to offer investment protection and improved price/performance with upgrades. For example, if you have an IFL specialty engine running z/VM and/or Linux on a System z9 BC, an upgrade to z10 BC will provide up to a 1.4 times improvement in processing capacity at no additional cost in most cases - true investment protection.

As part of our commitment to deliver on-going price/performance improvement (founded in the Mainframe Charter) and to help increase the economic value to our clients we are taking actions to reduce the cost of deploying and growing new workloads on System z. Our commitment is to continually assess our client needs and industry conditions and to make changes as required. Our goal is to assure we continue to provide highly competitive alternatives for new workloads being deployed on System z.

Built on a foundation that improves recovery for unplanned outages and reduction of planned outages, the z10 BC goes further to offer a reduction in preplanning requirements by delivering and reserving a fixed Hardware System Area (HSA), and just-in-time deployment of resources that allows greater flexibility in defining and executing temporary capacity needs. If you need more capacity for a short period, with a little preplanning, you just turn it on when you need it. The performance of z10 BC is designed to improve application performance, support more transactions, increase scalability, offer more flexibility, and assist in consolidation of workloads.

Whether you are an existing customer or a new customer looking for better solutions to improve and leverage your company's IT investments, the new face of System z makes the System z10 BC ideally suited as the cornerstone of your new enterprise data center.

Processor Unit Summary

Listed below are the minimum and maximum numbers of processor units that customers may permanently purchase. The affected feature codes are identified in parentheses.

      Total  CP A-Z    IFLs       ICFs    zAAPs   zIIPs   SAPs
       PUs            (#6650)    (#6651) (#6653) (#6654) (#6652)
Model Avail           Min/Max    Min/Max Min/Max Min/Max Min/Max
----- ----- ------- ------------ ------- ------- ------- -------
E10    10    0 - 5     0 - 10     0 - 10  0 - 5   0 - 5   0 - 2
 

All CPs need to be at the same capacity level.

Notes:

  • One CP (#6656 - #6681), IFL (#6650), or ICF (#6651) is required.
  • The total number of PUs purchased cannot exceed the total number available.
  • One CP (#6656 - #6681) must be installed with, or prior to, the installation of any zAAPs.
  • The total number of zAAPs installed must be less than or equal to the number of active CPs (#6656 - #6681) installed on any machine.
  • There are no dedicated spares per system.
  • Two SAPs are provided as standard PUs, with 0 - 2 additional SAPs (#6652) available to the customer.
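
The purchase rules above amount to a simple configuration check. The following is an illustrative Python sketch, not an IBM tool; the class and function names are hypothetical, and only the rules stated above for the Model E10 are encoded.

```python
from dataclasses import dataclass

@dataclass
class PUConfig:
    """Hypothetical record of permanently purchased PUs on a z10 BC E10."""
    cps: int    # general purpose CPs (#6656 - #6681)
    ifls: int   # IFLs (#6650)
    icfs: int   # ICFs (#6651)
    zaaps: int  # zAAPs (#6653)
    ziips: int  # zIIPs (#6654)
    saps: int   # additional SAPs (#6652); two more SAPs are standard

def validate_e10(cfg: PUConfig) -> list[str]:
    """Return a list of rule violations; an empty list means the order is valid."""
    errors = []
    # Per-engine-type maximums for the Model E10, from the summary table above.
    limits = {"cps": 5, "ifls": 10, "icfs": 10, "zaaps": 5, "ziips": 5, "saps": 2}
    for name, maximum in limits.items():
        count = getattr(cfg, name)
        if not 0 <= count <= maximum:
            errors.append(f"{name}: {count} outside 0-{maximum}")
    # Total purchased PUs cannot exceed the 10 available on the E10.
    total = cfg.cps + cfg.ifls + cfg.icfs + cfg.zaaps + cfg.ziips + cfg.saps
    if total > 10:
        errors.append(f"total PUs {total} exceeds 10 available")
    # One CP, IFL, or ICF is required.
    if cfg.cps + cfg.ifls + cfg.icfs < 1:
        errors.append("at least one CP, IFL, or ICF is required")
    # zAAPs require a CP and cannot outnumber the active CPs.
    if cfg.zaaps > 0 and cfg.cps < 1:
        errors.append("zAAPs require at least one CP")
    if cfg.zaaps > cfg.cps:
        errors.append("zAAPs must be <= active CPs")
    return errors
```

For example, an order of two CPs, one IFL, and one zAAP passes every rule, while a zAAP with no CP is rejected twice (no CP/IFL/ICF, and zAAPs exceeding active CPs).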

 
Models

Model summary matrix

Model   PUs       Memory        IFB       I/O Drawers   CHPIDs
E10     1 to 10   4 to 120 GB   0 to 12   0 to 4        512

Note: The total maximum number of PUs is 12 when you include SAPs.

Note: Memory reserved for the fixed HSA is in addition to the purchased entitlement.

Note: Each LCSS supports up to 256 CHPIDs.

Note: The maximum memory size is planned to increase from 120 GB to 248 GB on June 30, 2009.

Note: Single phase line cords (#8990), (#8991), or (#8999) support up to a maximum of two I/O drawers.

Note: For more than two I/O drawers, it is necessary to use three phase line cords (#8983), (#8984), (#8986), (#8987), (#8988), or (#8998).
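
The two line-cord notes above amount to a simple selection rule. A hypothetical Python sketch (the feature numbers come from the notes; the function name and set names are illustrative):

```python
# Single-phase cords support at most two I/O drawers; more requires three-phase.
SINGLE_PHASE_CORDS = {"#8990", "#8991", "#8999"}
THREE_PHASE_CORDS = {"#8983", "#8984", "#8986", "#8987", "#8988", "#8998"}

def allowed_cords(io_drawers: int) -> set[str]:
    """Return the line-cord feature codes usable for a given I/O drawer count."""
    if not 0 <= io_drawers <= 4:  # the E10 supports 0 to 4 I/O drawers
        raise ValueError("z10 BC supports 0 to 4 I/O drawers")
    if io_drawers <= 2:
        # With two or fewer drawers, either cord class is acceptable.
        return SINGLE_PHASE_CORDS | THREE_PHASE_CORDS
    # More than two drawers requires three-phase line cords.
    return THREE_PHASE_CORDS
```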

Customer setup (CSU)

Customer setup is not available on this machine.

Devices supported

Peripheral hardware and device attachments

IBM devices previously attached to IBM System z9 and zSeries servers are supported for attachment to System z10 BC channels, unless otherwise noted. The subject I/O devices must meet ESCON or FICON architecture requirements to be supported. I/O devices that meet OEMI architecture requirements are supported only using an external converter. Prerequisite Engineering Change Levels may be required. For further detail, contact IBM service personnel.

While the z10 BC supports devices as described above, IBM does not commit to provide support or service for an IBM device that has reached its End of Service effective date as announced by IBM.

Note: IBM cannot confirm the accuracy of performance, compatibility, or any other claims related to non-IBM products. Questions regarding the capabilities of non-IBM products should be addressed to the suppliers of those products.

For a list of the current supported FICON devices, refer to the following website:

http://www.ibm.com/systems/z/connectivity/

Model conversions

Model Conversions - Hardware upgrades
   From             To
M/T     Model   M/T     Model      Description
2086    A04     2098    E10   (*)  A04  to  E10
 
2096    R07     2098    E10   (*)  R07  to  E10
 
2096    S07     2098    E10   (*)  S07  to  E10
 
2098    E10     2097    E12   (*)  E10  to  E12
 

Feature conversions

Feature conversion list is available upon request.
 

Technical description
TOC Link Physical specifications TOC Link Operating environment TOC Link Limitations
TOC Link Hardware requirements TOC Link Software requirements


Physical specifications

Dimensions:
                          Depth     Width    Height
                          -----     -----     ------
System with All Covers
  - Inches                71.0      30.9       79.26
  - Centimeter           185.4      78.5      201.32
 
System with Covers and Reduction
  - Inches                71.0      30.9       70.3
  - Centimeter           185.4      78.5      178.5
 
Frame on Casters with Packaging (Domestic)
   - Inches               51.4      32.4       79.76
   - Centimeter          130.6      82.2      202.58
 
Frame With Packaging (ARBO Crate)
   - Inches               51.5      36.5       87.6
   - Centimeter          130.8      92.7      222.5
 
Approximate weight:
 
                         New Build
                          Minimum
                          System
                          Model E10
                        ------------
 
System with IBF Feature
  -  kg                    952.5
  -  lb                   2100
System without IBF Feature
  -  kg                    857.3
  -  lb                   1890
 

To assure installability and serviceability in non-IBM industry-standard racks, review the installation planning information for any product-specific installation requirements.

Operating environment

  • Temperature:
    • 10 to 32 degrees C (50 to 89.6 degrees F) for all models up to 900 meters; maximum ambient is reduced 1 degree C per 300 meters above 900 meters
  • Relative Humidity: 8 to 80%
  • Wet Bulb (Caloric Value): 23 degrees C (73.4 degrees F) - Operating Mode
  • Max Dew Point: 17 degrees C (62.6 degrees F) - Operating Mode
  • Electrical Power:
    • 7.3 kVA (typically 0.999 PF at 200V)
    • 7.35 kVA (typically 0.99 PF at 380V)
    • 7.4 kVA (typically 0.98 PF at 480V)

Note: The above kVA figures are for a maximum configuration in a warm room (system inlet temperature > 28 degrees C / 82.4 degrees F). Typical configurations in a normal environment will average 4 kVA. Exact values for specific configurations are available using the Power Estimation Tool for this system.

Capacity of Exhaust: 2440 cubic meters / hour (1435 CFM)
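
The altitude derating rule above (maximum ambient reduced 1 degree C per 300 meters above 900 meters) can be checked with a short worked example. This sketch assumes linear derating between steps; the function name is illustrative:

```python
def max_ambient_c(altitude_m: float) -> float:
    """Maximum supported ambient temperature (degrees C) at a given altitude,
    per the derating rule above, assuming linear derating."""
    if altitude_m <= 900:
        return 32.0
    # 1 degree C lost per 300 m above the 900 m baseline.
    return 32.0 - (altitude_m - 900) / 300.0

# Worked example: at 1500 m, 32 - (1500 - 900)/300 = 32 - 2 = 30 degrees C.
```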

Noise Level:

  • Declared A-Weighted Sound Power Level, LWAd(B) = 7.2
  • Declared A-Weighted Sound Pressure Level, LpAm(dB) = 54

Leakage and Starting Current: 105 mA / 135 A (~10ms)

Limitations

Not applicable.

Hardware requirements

You should review the PSP buckets for minimum MCL and software PTF levels before IPLing operating systems.

The hardware requirements for the System z10 BC and its features and functions are identified below.

Machine Change Levels (MCLs) are required. Descriptions of the MCLs are available now through Resource Link.

Access Resource Link at:

http://www.ibm.com/servers/resourcelink

Software requirements

Listed are the operating systems and the minimum versions and releases supported by z10 BC, its functions, and its features. Select the releases appropriate to your operating system environments.

Note: Refer to the z/OS, z/VM, z/VSE subsets of the 2098DEVICE Preventive Service Planning (PSP) bucket prior to installing a z10 BC.

System z10 BC requires at a minimum:

  • z/OS V1.7, with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with PTFs. Note that the IBM zIIP Support for z/OS and z/OS.e V1R6/R7 Web deliverable is required to be installed for HiperDispatch (a zIIP processor is not required).

    Note: z/OS V1.7 support was withdrawn September 30, 2008. The Lifecycle Extension for z/OS V1.7 (5637-A01) makes fee-based corrective service for z/OS V1.7 available through September 2010. With the Lifecycle Extension, z/OS V1.7 supports the z10 BC server. Certain functions and features of the z10 BC require later releases of z/OS.

  • z/OS V1.8, V1.9, or V1.10 with PTFs.
  • z/OS.e V1.8, with PTFs
  • z/VM
    • CFCC Level 16 Guest Exploitation: z/VM V5.2, V5.3 with PTFs, and V5.4
    • QDIO Data Connection Isolation : z/VM V5.3 and V5.4 with PTFs
    • Enhanced installation support for z/VM using the HMC: z/VM V5.4
    • z/VM Mode Partitions: z/VM V5.4
    • HCD Support: z/VM V5.2, V5.3, and V5.4 with PTFs
    • IOCP Support: z/VM V5.2, V5.3, and V5.4 with PTFs
  • z/VSE V3.1 with PTFs, V4.1 with PTFs, or z/VSE V4.2
  • z/TPF V1.1 is required to support 64 engines per z/TPF LPAR.
  • TPF V4.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

z/VM mode partitions requires at a minimum:

  • z/VM V5.4.

Installing Linux from the HMC requires at a minimum:

  • z/VM V5.4.

Dynamic Add Logical CPs requires at a minimum:

  • z/OS V1.10
  • z/VM V5.3 with PTFs
  • z/VM V5.4

HCA2-O fanout (#0163) supporting InfiniBand coupling links (12x IB-SDR on z9 and 12x IB-DDR on z10) at 150 meters (492 feet) on z10 BC and z10 EC requires at a minimum:

  • z/OS V1.7, with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01), and PTFs.
  • z/OS V1.8, or z/OS V1.9, or z/OS V1.10 with PTFs.
  • z/OS.e V1.8 with PTFs.
  • z/VM V5.3 to define, modify, and delete an InfiniBand coupling link, CHPID type CIB, when z/VM is the controlling LPAR for dynamic I/O.

HCA2-O LR fanout (#0168) supporting InfiniBand coupling links (1x IB-SDR or 1x IB-DDR) at an unrepeated distance of 10 km (6.2 miles) requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) and PTFs
  • z/OS V1.8, or z/OS V1.9, or z/OS V1.10 with PTFs
  • z/OS.e V1.8 with PTFs
  • z/VM V5.3 to define, modify, and delete an InfiniBand coupling link, CHPID type CIB, when z/VM is the controlling LPAR for dynamic I/O.

Coupling Facility Control Code Level 16 on z10 BC requires at a minimum for exploitation of new features:

  • z/OS V1.7, with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with PTFs.
  • z/OS V1.8, V1.9, or V1.10 with PTFs.
  • z/OS.e V1.8 with PTFs.
  • z/VM V5.2 and V5.3 with PTFs, and V5.4 for guest virtual coupling exploitation.

Hardware Decimal Floating Point on System z10 BC requires at a minimum:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7 and PTFs (for High Level Assembler support).
  • z/OS V1.8 with PTFs (for High Level Assembler, Language Environment, DBX, and CDA RTLE support).
  • z/OS.e V1.8 with PTFs (for High Level Assembler, Language Environment, DBX, and CDA RTLE support).
  • z/OS V1.9 with PTFs for full support, for C/C++.
  • (Optionally) IBM 64-bit SDK for z/OS, Java Technology Edition, V6.0.0 SR1
  • z/VM V5.3

Capacity provisioning on System z10 BC requires at a minimum:

  • z/OS V1.9 or z/OS V1.10 with PTFs (see z/OS MVS Capacity Provisioning User's Guide (SA33-8299) for z/OS functions that must be enabled).
  • Linux on System z distributions:
    • Novell SUSE SLES 10 SP2
    • IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

Large Page support (1 megabyte pages) on System z10 BC requires at a minimum:

  • z/OS V1.9 or z/OS V1.10 with PTFs.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP2
    • Red Hat RHEL 5.2

CP Assist for Cryptographic Function (CPACF) (#3863) on the System z10 BC requires at a minimum:

  • z/OS
    • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) and either the Cryptographic Support for z/OS V1R6/R7 and z/OS.e V1R6/R7 Web deliverable (no longer available), the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/R7 Web deliverable (no longer available), or the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable planned to be available November 21, 2008.
    • z/OS V1.8.
    • z/OS.e V1.8
  • z/VM V5.2 for guest exploitation.
  • z/VSE V3.1 and IBM TCP/IP for VSE/ESA V1.5.0 with PTFs.
  • z/TPF V1.1.
  • TPF V4.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4.3 and RHEL 5.

Enhancements to CP Assist for Cryptographic Function (CPACF) on the System z10 BC require at a minimum:

  • z/OS
    • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) and either the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable
    • z/OS V1.8 or z/OS V1.9 with either the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable
    • z/OS.e V1.8 with either the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable
    • z/OS V1.10
  • z/VM V5.2 for guest exploitation.
  • z/VSE V4.1 and IBM TCP/IP for VSE/ESA V1.5.0 with PTFs.
  • Linux on System z distributions:
    • Novell SUSE SLES 10 SP2
    • Red Hat RHEL 5.2

Configurable Crypto Express2 and Crypto Express2-1P on the System z10 BC require at a minimum:

  • z/OS
    • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) and the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/V1R7 Web deliverable (no longer available), or with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
    • z/OS V1.8.
    • z/OS.e V1.8.
  • z/VM V5.2 for guest exploitation.
  • z/VSE V3.1 and IBM TCP/IP for VSE/ESA V1.5.0 with PTFs.
  • z/TPF V1.1 (acceleration mode only).
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP3 and SLES 10.
    • Red Hat RHEL 4.4 and RHEL 5.1.

Note: z/VSE supports clear-key operations only. Linux on System z, and z/VM V5.2 and later, support clear- and secure-key operations.

Note: The Cryptographic Support Web deliverables may be obtained at:

http://www-03.ibm.com/systems/z/os/zos/downloads/

Key management for remote loading of ATM and Point of Sale (POS) keys on System z10 BC requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) and the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/V1R7 Web deliverable (no longer available), or with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2 for guest exploitation.

Improved Key Exchange with Non-CCA Cryptographic systems on System z10 BC requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01).
  • z/VM V5.2 for guest exploitation.

Support for ISO 16609 CBC Mode T-DES Message Authentication (MAC) on System z10 BC requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) and the Enhancements to Cryptographic Support for z/OS and z/OS.e V1R6/V1R7 Web deliverable (no longer available), or with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2 for guest exploitation.

Support for RSA keys up to 4096 bits in length on System z10 BC requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01), with the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.8 or z/OS V1.9 with either the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS.e V1.8 with either the Cryptographic Support for z/OS V1R7-V1R9 and z/OS.e V1R7-V1R8 Web deliverable, or the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.10
  • z/VM V5.2 for guest exploitation.

Dynamically Add Crypto to Logical Partition on System z10 BC requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01).
  • z/OS V1.8
  • z/OS.e V1.8
  • z/VM V5.2 for guest exploitation.
  • z/VSE V4.2.
  • Linux on System z distributions:
    • Novell SUSE SLES 10 SP1.
    • Red Hat RHEL 5.1.

Secure Key AES on System z10 BC requires at a minimum:

  • z/OS V1.8, z/OS V1.9, or z/OS V1.10 with the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/VM V5.2 for guest exploitation.

Updates to Crypto Facility Query (CFQ) Function on System z10 BC requires at a minimum:

  • z/OS V1.8, z/OS V1.9 or z/OS V1.10 with the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/VM V5.2 for guest exploitation.

Support for 13- through 19-digit Personal Account Numbers on System z10 BC requires at a minimum:

  • z/OS V1.8, z/OS V1.9 or z/OS V1.10 with the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with the Cryptographic Support for z/OS V1R8-V1R10 and z/OS.e V1R8 Web deliverable.
  • z/VM V5.2 for guest exploitation.

High Performance FICON for System z (zHPF) (CHPID type FC) requires at a minimum:

  • z/OS V1.8, V1.9, or V1.10 with PTFs
  • z/OS V1.7, with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with PTFs.
  • z/OS.e V1.8 with PTFs

FICON Express8 (CHPID type FC) when utilizing native FICON or Channel-To-Channel (CTC), on the z10 EC and z10 BC servers requires at a minimum:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01).
  • z/VM V5.3.
  • z/VSE V4.1
  • z/TPF V1.1.
  • TPF V4.1 at PUT 16.
  • Linux on System z distributions:
    • Novell SUSE SLES 9, SLES 10, and SLES 11.
    • Red Hat RHEL 4 and RHEL 5.

FICON Express8 (CHPID type FC) for support of zHPF single-track operations on the z10 EC and z10 BC servers requires at a minimum:

  • z/OS V1.8, V1.9, or V1.10 with PTFs.
  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with PTFs.
  • Linux on System z distributions:
    • IBM is working with its Linux distribution partners to include support in future Linux on System z distribution releases.

FICON Express8 (CHPID type FC) for support of zHPF multitrack operations on the z10 EC and z10 BC servers requires at a minimum:

  • z/OS V1.9 and V1.10 with PTFs.

FICON Express8 (CHPID type FCP) for support of SCSI devices on the z10 EC and z10 BC servers requires at a minimum:

  • z/VM V5.3.
  • z/VSE V4.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9, SLES 10, and SLES 11.
    • Red Hat RHEL 4 and RHEL 5.

FICON Express4 (CHPID type FC), including Channel-To-Channel (CTC), on z10 BC requires at a minimum:

  • z/OS V1.8.
  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1.
  • TPF V4.1 at PUT 16.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

FICON Express4 (CHPID type FCP) for support of SCSI disks on z10 BC requires at a minimum:

  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

HiperSockets Layer 2 support on the z10 BC requires at a minimum:

  • z/VM V5.2 for guest exploitation.
  • Linux on System z distributions:
    • Novell SUSE SLES 10 SP2
    • Red Hat RHEL 5.2

HiperSockets Multiple Write Facility on the z10 BC requires at a minimum:

  • z/OS V1.9 with PTFs.
  • z/OS V1.10

OSA-Express3 GbE LX (#3362) and GbE SX (#3363 and #3373) on z10 BC require at a minimum:

Supporting CHPID types OSD with exploitation of four ports per feature on #3362 and #3363 and exploitation of two ports per feature on #3373:

  • z/OS V1.8 or z/OS V1.9 with PTFs.
  • z/OS.e V1.8 with PTFs.
  • z/OS V1.10.
  • z/VM V5.2 with PTFs.
  • z/VSE V4.1 with PTFs.
  • z/TPF 1.1 PUT 4 with APARs.
  • Linux on System z distributions - for four ports per feature on #3362 and #3363:
    • Novell SUSE SLES 10 SP2.
    • Red Hat RHEL 5.2.
  • Linux on System z distributions - for two ports per feature on #3373:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

Supporting CHPID types OSD with exploitation of two ports per feature on #3362 and #3363 and exploitation of one port per feature on #3373.

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • TPF V4.1 at PUT 13 with PTF.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

Supporting CHPID type OSN in support of OSA-Express for NCP:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01) with PTFs
  • z/OS V1.8, z/OS.e V1.8, or V1.9 with PTFs
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • z/TPF 1.1 PUT 4 with APARs.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP2 and SLES 10.
    • Red Hat RHEL 4.3 and RHEL 5.

OSA-Express3 1000BASE-T (#3367 and #3369) on z10 BC requires at a minimum:

For CHPID type OSC supporting TN3270E and non-SNA DFT:

Note: One port per PCI-E adapter is available for use. CHPID type OSC does not recognize the second port on a PCI-E adapter.

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS V1.8
  • z/OS.e V1.8.
  • z/VM V5.2
  • z/VSE V3.1

For CHPID type OSD and exploitation of four ports per feature (#3367) and two ports per feature (#3369):

  • z/OS V1.8 or z/OS V1.9 with PTFs
  • z/OS.e V1.8 with PTFs
  • z/OS V1.10
  • z/VM V5.2 with PTFs
  • z/VSE V4.1 with PTFs
  • z/TPF 1.1 PUT 4 with APARs
  • Linux on System z distributions - for four ports per feature (#3367):
    • Novell SUSE SLES 10 SP2.
    • Red Hat RHEL 5.2.
  • Linux on System z distributions - for two ports per feature (#3369):
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

For CHPID type OSD and use of one port per PCI-E adapter, two ports per feature:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • TPF V4.1 at PUT 13 with PTF.
  • z/TPF V1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

For CHPID type OSE and support of 4 or 2 ports per feature:

  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1.

For CHPID type OSN:

Note: CHPID type OSN does not use ports. All communication is LPAR-to-LPAR.

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs
  • TPF V4.1.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP2 and SLES 10.
    • Red Hat RHEL 4.3 and RHEL 5.

OSA-Express3 10 GbE SR (#3371) requires at a minimum:

Supporting CHPID type OSD and two ports per feature:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • TPF V4.1 at PUT 13 with PTF.
  • z/TPF V1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

OSA-Express QDIO data connection isolation on System z10 and System z9 (CHPID type OSD) requires at a minimum:

  • z/VM V5.3 with PTFs

OSA-Express3 10 GbE LR (#3370) and 10 GbE SR (#3371) on z10 BC require at a minimum:

Supporting CHPID type OSD and two ports per feature:

  • z/OS V1.7 with the IBM Lifecycle Extension for z/OS V1.7 (5637-A01)
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • TPF V4.1 at PUT 13 with PTFs.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

OSA-Express2 GbE LX (#3364) and GbE SX (#3365) on z10 BC require at a minimum:

For CHPID type OSD:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1.
  • TPF V4.1 at PUT 13 with PTF.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

For CHPID type OSN in support of OSA-Express for NCP:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7 and PTFs
  • z/OS V1.8 with PTFs.
  • z/OS.e V1.8 with PTFs.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • TPF V4.1.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP2 and SLES 10.
    • Red Hat RHEL 4.3 and RHEL 5.

OSA-Express2 1000BASE-T Ethernet (#3366) on z10 BC requires at a minimum:

For CHPID type OSC:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1.

For CHPID type OSD:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1.
  • TPF V4.1 at PUT 13 with PTF.
  • z/TPF 1.1.
  • Linux on System z distributions:
    • Novell SUSE SLES 9 and SLES 10.
    • Red Hat RHEL 4 and RHEL 5.

For CHPID type OSE:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1.

For CHPID type OSN in support of OSA-Express for NCP:

  • z/OS V1.7 with IBM Lifecycle Extension for z/OS V1.7 and PTFs
  • z/OS V1.8.
  • z/OS.e V1.8.
  • z/VM V5.2.
  • z/VSE V3.1 with PTFs.
  • TPF V4.1.
  • z/TPF 1.1
  • Linux on System z distributions:
    • Novell SUSE SLES 9 SP2 and SLES 10.
    • Red Hat RHEL 4.3 and RHEL 5.

 
Publications

The following publications are available in the Library section of Resource Link (TM):

           Title                                     Order Number
----------------------------------------------       ------------
z10 BC System Overview                                  SA22-1085
z10 BC Installation Manual - Physical Planning (IMPP)   GC28-6875
PR/SM (TM) Planning Guide                               SB10-7153
 

The following publications are shipped with the product and available in the Library section of Resource Link:

      Title                                          Order Number
-------------------                                  ------------
z10 BC Installation Manual                           GC28-6874
z10 BC Service Guide                                 GC28-6878
z10 BC Safety Inspection Guide                       GC28-6877
System Safety Notices                                G229-9054
 

The following publications are available in the Library section of Resource Link:

          Title                                        Order Number
------------------------------------------------------ ------------
Application Programming Interfaces for Java            API-JAVA
Application Programming Interfaces                     SB10-7030
Capacity on Demand User's Guide                        SC28-6871
Agreement for Licensed Machine Code                    SC28-6872
CHPID Mapping Tool User's Guide                        GC28-6825
Common Information Model (CIM) Management Interface    SB10-7154
Coupling Links I/O Interface Physical Layer            SA23-0395
ESCON**   and FICON CTC Reference                      SB10-7034
ESCON I/O Interface Physical Layer                     SA23-0394
FICON**   I/O Interface Physical Layer                 SA24-7172
Hardware Management Console Operations Guide (V2.10.1) SC28-6873
IOCP User's Guide                                      SB10-7037
Maintenance Information for Fiber Optic Links          SY27-2597
z10 BC Parts Catalog                                   GC28-6876
Planning for Fiber Optic Links                         GA23-0367
SCSI IPL - Machine Loader Messages                     SC28-6839
Service Guide for HMCs and SEs                         GC28-6861
Service Guide for Trusted Key Entry Workstations       GC28-6862
Standalone IOCP User's Guide                           SB10-7152
Support Element Operations Guide (Version 2.10.1)      SC28-6879
TKE PCIX Workstation User's Guide                      SA23-2211
System z Functional Matrix                             ZSW0-1335
OSA-Express Customer's Guide                           SA22-7935
OSA-ICC User's Guide                                   SA22-7990
 

Publications for System z10 Business Class(TM) can be obtained at Resource Link by accessing the following website:

www.ibm.com/servers/resourcelink

Using the instructions on the Resource Link panels, obtain a user ID and password. Resource Link has been designed for easy access and navigation.

The following Redbooks publications are available now:

  • Server Time Protocol: Planning Guide, SG24-7280
  • Server Time Protocol: Implementation Guide, SG24-7281

For other IBM Redbooks publications, refer to:

http://www.redbooks.ibm.com/

 
Features
TOC Link Features - No charge TOC Link Features - Chargeable TOC Link Feature descriptions
TOC Link Feature exchanges


Features - No charge

LANGUAGE: A specify code for language is required and will be provided by the configurator based on the country code of the order. The specify codes listed below must be used when an alternative to the ELINK configurator default is required.

Note: All of the following No Longer Available as of June 30, 2012.

Specify Code       Description
------------       ----------------------------
 2924              US English
 2928              France
 2929              German
 2930              Spanish Non-Spain
 2931              Spain
 2932              Italian
 2935              Canadian French
 2978              Portuguese
 2979              Brazilian Portuguese
 2980              UK English
 2983              Norwegian
 2987              Sweden Finland
 2988              Netherlands
 2989              Belgian French
 2993              Denmark
 2997              Swiss French, German
 5560              Luxembourg Orders Placed in Belgium
 5561              Iceland Orders Placed in Denmark
 5562              China Orders Placed in Hong Kong
 

Features - Chargeable

System z10 BC           2098     E10
 
Description                                     Feature
----------------------                          -------
HMC w/Dual EN                                    0091
ISAOPT Enablement                                0251
TKE workstation                                  0841
TKE 7.0 LIC                                      0860
FICON Express8 10KM LX                           3325
FICON Express8 SX                                3326
 
Link                                                      Maximum
Type   Name                        Communication Use      Links
-----  -------------------------   --------------------   -------
ICB-4  Integrated Cluster Bus-4    z10 BC, z10 EC, z9 EC   12
#3393                              z9 BC, z990, z890
 
ISC-3  InterSystem Channel-3       z10 BC, z10 EC, z9 EC,  48
#0217, #0218, #0219                z9 BC, z990, z890
 
IFB    12x IB-SDR or DDR           z10 BC, z10 EC (DDR)    12
                                   z9 BC, z9 EC (SDR)
 
       1x IB-SDR or DDR            z10 BC, z10 EC          12
 
  • The maximum number of external Coupling Links combined cannot exceed 56 per server. There is a maximum of 64 coupling link CHPIDs per server (ICs, ICB-4s, active ISC-3 links, and IFBs).
  • The maximum number of IFBs and ICB-4s combined cannot exceed 12 links per server.
  • For each MBA fanout installed for ICB-4s, the number of possible HCA2-O fanouts for coupling is reduced by one.
  • An ISC-3 feature on a z10 BC can be connected to a z10 EC, z9, or zSeries server in peer mode (CHPID type CFP) operating at 2 Gbps, exclusively. Compatibility mode is not supported.
                  - - - - - Per Server - - - - -
                  Minimum  Maximum    Maximum     Increments   Purchase
  Feature Name    features features connections   per feature increments
----------------- -------- -------- -----------   ----------- ----------
16-port ESCON      0 (1)    32       480           16          4
#2323, #2324                         channels      channels    channels
                                                   1 reserved
                                                   as a spare
 
FICON Express4     0 (1)    32       128           4           4
#3321, #3322, #3324                  channels      channels    channels
 
FICON Express4-2C  0 (1)    32       64            2           2
4KM LX                               channels      channels    channels
#3323
 
FICON Express4-2C  0 (1)    32       64            2           2
SX                                   channels      channels    channels
#3318
 
FICON Express2(6)  0 (1)    28       112           4           4
#3319, #3320                         channels      channels    channels
 
FICON Express(6)   0 (1)    20       40            2           2
#2319, #2320                         channels      channels    channels
 
ICB-4 link (3)     0 (1)     6       12 links(2)      N/A      1 link
#3393
 
ISC-3              0 (1)    12       48 links (2)  4 links     1 link
#0217, #0218, #0219
 
12x IB-DDR (3)     0 (1)     6       12 links (2)  2 links     2 links
IFB #0163
 
1x IB-DDR  (3)     0 (1)     6       12 links (2)  2 links     2 links
IFB #0168
 
OSA-Express3       0        24       96 ports      4 ports     4 ports
#3362, #3363
 
OSA-Express3       0        24       48 ports      2 ports     2 ports
10 GbE LR/SR
#3370, #3371
 
OSA-Express3-2P    0        24       48 ports      2 ports     2 ports
GbE SX
#3373
 
OSA-Express3       0        24       96 ports      4 ports     4 ports
1000BASE-T
#3367
 
OSA-Express3-2P    0        24       48 ports      2 ports     2 ports
1000BASE-T
#3369
 
OSA-Express2       0        24       48 ports      2 ports     2 ports
#3364, #3365, #3366
 
Crypto Express2    0        8        16            2           2 PCI-X(5)
#0863  (4)                           PCI-X         PCI-X       adapters
                                     adapters      adapters
 
Crypto Express2    0        8        8             1           1 PCI-X
-1P                                  PCI-X         PCI-X       adapter
#0870                                adapters      adapter
 

Note: (6) Can be carried forward on an upgrade; cannot be ordered.

  1. Minimum of one I/O feature (ESCON or FICON) or one Coupling Link (ICB-4, ISC-3, IFB) required.
  2. The maximum number of Coupling Links combined (ICB-4s, active ISC-3 links, and IFBs) cannot exceed 64 per server.
  3. Maximum number of coupling CHPIDs is 64 per server (ICP, CBP, CFP, and CIB).
  4. ICB-4s and 12x IB-DDRs are not included in the maximum feature count for I/O slots but are included in the CHPID count.
  5. An initial order of Crypto Express2 is 4 PCI-X adapters (two features). If you order a Crypto Express2-1P, then an initial order is 2 PCI-X adapters (two features). Each PCI-X adapter can be configured as either a coprocessor or an accelerator.

Feature descriptions

(#0084) Hardware Management Console with dual Ethernet

(No Longer Available as of November 9, 2010)

The HMC is a workstation designed to provide a single point of control and single system image for managing local or remote hardware elements. Connectivity is supplied using an Ethernet Local Area Network (LAN) devoted exclusively to accessing the supported local and remote servers. The HMC is designed to support, exclusively, the HMC application. The HMC is supplied with two Ethernet ports capable of operating at 10, 100, or 1000 Mbps. Included is one mouse, one keyboard, a selectable flat-panel display, and a DVD-RAM drive to install Licensed Internal Code (LIC).

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: None.
  • Corequisites: None
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The HMC is for the exclusive use of the HMC application. Customer applications cannot reside on the HMC. The ESCON Director and Sysplex Timer(R) applications cannot reside on the HMC. TCP/IP is the only supported communication protocol. The HMC supports z10 BCs. It can also be used to support z9 EC, z9 BC, z990, z890, z900, and z800 servers.
  • Field Installable: Yes. Parts removed as a result of feature conversions become the property of IBM.
  • Cable Order: Cables are shipped with the HMC. The Ethernet cables are Category 5 Unshielded Twisted Pair (UTP) with an RJ-45 connector on each end.
(#0089) Ethernet switch

(No Longer Available as of June 30, 2012)

An Ethernet switch is used to manage the Ethernet connection between Support Elements (SEs) and Hardware Management Consoles (HMCs). With the Virtual Local Area Network (VLAN) capability offered on z10 BC, an Ethernet switch is no longer required. This optional feature is available for use when you have more than one HMC in the same ring. The switch is a 16-port Ethernet standalone, unmanaged switch, capable of 10 and 100 Mbps.

  • Minimum: None
  • Maximum: Ten (10).
  • Prerequisites: None
  • Corequisites: None
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None
  • Field Installable: Yes. Parts removed as a result of feature conversions become the property of IBM.
  • Cable Order: Cables are a customer responsibility.
(#0116) 1 CPE Capacity Unit

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of CPE Capacity Units purchased in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0117) 100 CPE Capacity Unit

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of CPE Capacity Units purchased in a given pre-paid Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: # 0116
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0118) 10000 CPE Capacity Unit

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of CPE Capacity Units purchased in a given pre-paid Capacity for Planned Event record divided by 10,000.

  • Minimum: None.
  • Maximum: 250
  • Prerequisites: # 0117
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
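The three CPE Capacity Unit pricing features above (#0116, #0117, #0118) together encode a single total - three days multiplied by the number of CPE Capacity Units in the record - as base-10,000, base-100, and unit feature quantities. A minimal illustrative sketch of that decomposition (the function name is hypothetical, not an IBM configurator interface; the arithmetic follows the feature descriptions above):

```python
def cpe_feature_quantities(cpe_capacity_units: int, days: int = 3) -> dict:
    """Decompose (days x CPE Capacity Units) into pricing-feature counts.

    Mirrors the feature descriptions above: #0118 carries units of 10,000,
    #0117 carries units of 100, and #0116 carries the remainder under 100.
    """
    total = days * cpe_capacity_units
    q10000, rest = divmod(total, 10_000)   # #0118: 10000 CPE Capacity Unit
    q100, q1 = divmod(rest, 100)           # #0117: 100, #0116: 1
    return {"0118": q10000, "0117": q100, "0116": q1}

# Example: a record with 4,217 CPE Capacity Units over a three-day planned event
print(cpe_feature_quantities(4217))  # {'0118': 1, '0117': 26, '0116': 51}
```

The same quotient/remainder pattern applies to the IFL, ICF, zAAP, zIIP, and SAP variants of these features described below.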
(#0119) 1 CPE Capacity Unit-IFL

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary Integrated Facility for Linux (IFL) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0120) 100 CPE Capacity Unit-IFL

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary Integrated Facility for Linux (IFL) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0119
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0121) 1 CPE Capacity Unit-ICF

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary Internal Coupling Facility (ICF) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0122) 100 CPE Capacity Unit-ICF

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary Internal Coupling Facility (ICF) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0121
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0123) 1 CPE Capacity Unit-zAAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary System z Application Assist Processor (zAAP) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0124) 100 CPE Capacity Unit-zAAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary System z Application Assist Processor (zAAP) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0123
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0125) 1 CPE Capacity Unit-zIIP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary System z Integrated Information Processor (zIIP) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0126) 100 CPE Capacity Unit-zIIP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary System z Integrated Information Processor (zIIP) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0125
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0127) 1 CPE Capacity Unit-SAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of three days multiplied by the number of temporary System Assist Processor (SAP) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 99
  • Prerequisites: None.
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0128) 100 CPE Capacity Unit-SAP

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of three days multiplied by the number of temporary System Assist Processor (SAP) features in a given Capacity for Planned Event record divided by 100.

  • Minimum: None.
  • Maximum: 1
  • Prerequisites: # 0127
  • Corequisites: # 6833
  • Compatibility Conflicts: None known.
  • Customer Setup: NA.
  • Limitations: None.
  • Field Installable: Yes.
(#0168) HCA2-O LR fanout card for Long Reach 1x InfiniBand:

(No Longer Available as of June 30, 2012)

Long reach 1x InfiniBand coupling links utilize the Host Channel Adapter2 Optical Long Reach fanout card (HCA2-O LR). This fanout is designed to support single data rate (SDR) at a 2.5 Gbps link data rate (1x IB-SDR) or double data rate (DDR) at 5 Gbps (1x IB-DDR). The speed is auto-negotiated and is determined by the capability of the Dense Wavelength Division Multiplexer (DWDM) to which the link is attached. The DWDM vendor must be qualified by System z. An unrepeated distance of 10 km (6.2 miles) is supported. Greater distances are supported when attached to a System z qualified optical networking solution.

Note: A link data rate of 2.5 Gbps or 5 Gbps does not represent the actual performance of the link.

The HCA2-O LR fanout card has two ports and resides in the processor nest on the front of the book in the CPC cage. The two ports exit the fanout card using LC Duplex connectors (same connector used with ISC-3) and support 9 micron single mode fiber optic cables. These fiber optic cables and connectors are industry standard and are a customer responsibility.

Long Reach 1x InfiniBand coupling links are designed to satisfy extended distance requirements, and to facilitate a migration from ISC-3 coupling links to InfiniBand coupling links.

  • Minimum: None. Order increment is two ports/links - one HCA2-O LR fanout card.
  • Maximum: Six (6) features and 12 ports/links.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The maximum number of IFB links (whether HCA2-O #0163 or HCA2-O LR #0168) in combination with ICB-4 links cannot exceed 12 ports/links per server.
(#0217, #0218, #0219) InterSystem Channel-3 (ISC-3)

(No Longer Available as of June 30, 2012)

The InterSystem Channel-3 (ISC-3) feature is a member of the family of Coupling Link options. An ISC-3 feature can have up to four links per feature. The ISC-3 feature is used by coupled servers to pass information back and forth over 2 Gigabits per second (Gbps) links in a Parallel Sysplex environment. The z10 BC ISC-3 feature is compatible with ISC-3 features on System z9 and zSeries servers. While ICB-4 is used for short distances between servers (7 meters - 23 feet), ISC-3 supports an unrepeated distance of up to 10 kilometers (6.2 miles) between servers when operating at 2 Gbps. Extended distance for ISC-3 is available through RPQ. ISC-3 (CHPID type CFP - peer) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The ISC-3 feature is composed of a Mother card (ISC-M #0217) and two Daughter cards (ISC-D #0218). Each daughter card has two ports or links, for a total of four links per feature. Each link is activated using Licensed Internal Code, Configuration Control (LICCC) with ISC links #0219. The ISC-D cannot be ordered. When the quantity of ISC links (#0219) is selected, the appropriate number of ISC-Ms and ISC-Ds is selected by the configuration tool. Additional ISC-Ms may be ordered up to the maximum of ISC-Ds required or twelve (12), whichever is the smaller number. The link is defined in peer (CHPID type CFP) mode only. Compatibility mode is not supported.

Each link utilizes a Long Wavelength (LX) laser as the optical transceiver, and supports use of a 9 micron single mode fiber optic cable terminated with an industry standard small form factor LC Duplex connector. The ISC-3 feature accommodates reuse (at reduced distances) of 50 micron multimode fiber optic cables when the link data rate does not exceed 1 Gbps. A pair of Mode Conditioning Patch cables are then required, one for each end of the link.

  • Minimum: None. Links are ordered in increments of one. It is recommended that initial orders include two links. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4 or ISC-3) must be present.
  • Maximum: 12 features, 48 links (4 links per feature).
  • Prerequisites: None.
  • Compatibility Conflicts: None.
  • Customer Setup: No.
  • Limitations:
    • The maximum number of Coupling Links combined (ICB-4s, active ISC-3 links, and IFBs) cannot exceed 64 per server.
    • The unrepeated distance between 2 Gbps ISC-3 links is limited to 10 kilometers (6.2 miles). If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required.
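The ISC-M/ISC-D selection described above is straightforward packing arithmetic: each ISC-D daughter card (#0218) carries two links, and each ISC-M mother card (#0217) carries two daughter cards (four links per feature). An illustrative Python sketch of that selection rule (the function name is hypothetical; this is not the IBM configuration tool):

```python
import math

def isc3_cards_for_links(links: int) -> tuple[int, int]:
    """Return (ISC-M count, ISC-D count) for a requested number of ISC links.

    Each ISC-D (#0218) carries 2 links; each ISC-M (#0217) carries 2 ISC-Ds,
    for 4 links per feature, per the feature description above.
    """
    if not 0 <= links <= 48:
        raise ValueError("z10 BC supports 0 to 48 ISC-3 links")
    isc_d = math.ceil(links / 2)   # daughter cards needed
    isc_m = math.ceil(isc_d / 2)   # mother cards needed
    return isc_m, isc_d

print(isc3_cards_for_links(6))  # (2, 3): 2 ISC-Ms and 3 ISC-Ds for 6 links
```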
(#0229) ICB-4 cable for z10 BC to System z

(No Longer Available as of June 30, 2012)

The Integrated Cluster Bus-4 (ICB-4) cable is a unique 10 meter (33 feet) copper cable to be used with ICB-4 links (#3393) only when the target servers are System z9.

ICB-4 cables will be automatically ordered to match the quantity of ICB-4 links (#3393) on order. The quantity of ICB-4 cables can be reduced, but cannot exceed the quantity of ICB-4 links on order.

Note: When ordering ICB cables, planning for the required number of cables should consider the total number of servers and ICB features to be ordered and enabled in calculating the number of cables to be ordered. As an example, if two servers with four features are being ordered and enabled, the total number of cables required is two. Proper planning will prevent over-ordering the number of cables.

  • Limitations: While the ICB-4 cable is 10 meters in length, 3 meters (10 feet) is used for internal routing and strain relief - 7 meters (23 feet) is available for server-to-server connection.
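The cable-planning note above reduces to simple pairing arithmetic: each ICB-4 cable connects one ICB-4 link on each of two servers, so the number of cables required is half the total number of enabled links. A hypothetical helper sketching that rule (illustrative only, not an IBM ordering tool):

```python
def icb4_cables_needed(links_per_server: dict) -> int:
    """Each ICB-4 cable joins exactly two links (one on each server),
    so cables required = total enabled links / 2.

    Example from the note above: two servers with four features/links
    total require two cables.
    """
    total_links = sum(links_per_server.values())
    if total_links % 2:
        raise ValueError("ICB-4 links must pair up across servers")
    return total_links // 2

print(icb4_cables_needed({"serverA": 2, "serverB": 2}))  # 2
```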
(#0230) ICB-4 z10 BC cable

(No Longer Available as of June 30, 2012)

The Integrated Cluster Bus-4 (ICB-4) z10 BC cable is a unique 10 meter (33 feet) copper cable to be used with ICB-4 links (#3393) when the target servers are System z10.

ICB-4 cables will be automatically ordered to match the quantity of ICB-4 links (#3393) on order. The quantity of ICB-4 cables can be reduced, but cannot exceed the quantity of ICB-4 links on order.

Note: When ordering ICB cables, planning for the required number of cables should consider the total number of System z10 servers and ICB features to be ordered and enabled in calculating the number of cables to be ordered. As an example, if two servers with four features are being ordered and enabled, the total number of cables required is two. Proper planning will prevent over-ordering the number of cables.

  • Limitations: While the ICB-4 cable is 10 meters in length, 3 meters (10 feet) is used for internal routing and strain relief - 7 meters (23 feet) is available for server-to-server connection.
(#0251) ISAOPT enablement for machine types 2097 (z10 EC) and 2098 (z10 BC)

(No Longer Available as of October 12, 2010)

This feature cannot be ordered directly. When IBM zEnterprise BladeCenter Extension (zBX) Model 001 is ordered or upgraded, the configurator tool selects a quantity of this feature equal to the quantity of blades selected for the attached IBM zEnterprise BladeCenter Extension (zBX) system at the time of the configuration.

  • Minimum: None.
  • Maximum: Fifty-six (56).
  • Prerequisites: None (see Limitations).
  • Corequisite: None (see Limitations).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: This feature is designed to work with FC #0610, IBM Smart Analytics Optimizer blade, on machine type 2458-001.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0839) TKE workstation

(No Longer Available as of December 31, 2009)

This is an optional feature. The Trusted Key Entry (TKE) workstation is a combination of hardware and software, network-connected to the server, and designed to provide a security-rich, flexible method for master and operational key entry as well as local and remote management of the cryptographic coprocessor features. Crypto Express2 default configuration on the z10 BC is a coprocessor. This optional feature provides basic key management -- key identification, exchange, separation, update, backup, as well as security administration. The TKE workstation has one Ethernet port and supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 and 100 Mbps.

The feature shipment includes a system unit, mouse, keyboard, 17-inch (431.8 mm) flat panel display, DVD-RAM drive to install Licensed Internal Code (LIC), and a PCI-X Cryptographic Coprocessor. The workstation has one Ethernet port and a serial port for attaching a Smart Card Reader.

If Trusted Key Entry is required on z10 BC, then a TKE workstation must be used. TKE workstations can also be used to control the z9 EC, z9 BC, z990, and z890 servers.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: CP Assist for Cryptographic Function (#3863) and Crypto Express2 feature (#0863).
  • Corequisite: TKE 5.3 LIC (#0854) loaded on TKE workstation prior to shipment.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: LAN cabling is a customer responsibility. A Category 5 Unshielded Twisted Pair (UTP) cable terminated with RJ-45 connector is required.
(#0840) TKE Workstation

(No Longer Available as of November 9, 2010)

This is a chargeable optional feature. The Trusted Key Entry (TKE) workstation is a combination of hardware and software, network-connected to the server, and designed to provide a security-rich, flexible method for master and operational key entry as well as local and remote management of the cryptographic coprocessor features. Crypto Express2 or Crypto Express3 default configuration on the z10 EC and z10 BC is a coprocessor. This optional feature provides basic key management such as key identification, exchange, separation, update, and backup, as well as security administration. The TKE workstation has one Ethernet port and supports connectivity to an Ethernet Local Area Network (LAN) operating at 10 and 100 Mbps.

The feature shipment includes a system unit, mouse, keyboard, flat panel display, DVD-RAM drive to install Licensed Internal Code (LIC), and a PCI-X Cryptographic Coprocessor. The workstation has one Ethernet port and a USB port for attaching a Smart Card Reader.

If Trusted Key Entry is required on z10 EC and z10 BC, then a TKE workstation must be used. TKE workstations can also be used to control the z9 BC, z9 EC, z10 BC and z10 EC servers.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: CP Assist for Cryptographic Function (#3863) and any of the following: Crypto Express2 feature (#0863), Crypto Express3 feature (#0864), Crypto Express2-1P (#0870), Crypto Express3-1P (#0871).
  • Corequisite: TKE 6.0 LIC (#0858) loaded on TKE workstation prior to shipment.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: LAN cabling is a customer responsibility. A Category 5 Unshielded Twisted Pair (UTP) cable terminated with RJ-45 connector is required.
(#0854) TKE 5.3 LIC

(No Longer Available as of December 31, 2009)

The Trusted Key Entry (TKE) 5.3 level of Licensed Internal Code (LIC) is installed in a TKE workstation (#0839). TKE 5.3 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation is ordered. The TKE 5.3 LIC includes support for the Smart Card Reader.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: TKE workstation (#0859, #0839).
  • Corequisites:
    • For z10 BC, CP Assist for Cryptographic Function (CPACF) (#3863) and Crypto Express2 (#0863) or Crypto Express2-1P (#0870).
    • For z9 EC, z9 BC, CPACF (#3863), and Crypto Express2 (#0863).
    • For z990 and z890 (2084, 2086), CPACF (#3863) and PCIXCC (#0868) or Crypto Express2 (#0863).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0858) TKE 6.0 Licensed Internal Code (LIC)

(No Longer Available as of June 30, 2013)

The Trusted Key Entry (TKE) 6.0 level of Licensed Internal Code (LIC) is installed in a TKE workstation (#0839 and #0840). TKE 6.0 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation is ordered. The TKE 6.0 LIC includes support for the Smart Card Reader.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: TKE workstation (#0839, #0840).
  • Corequisites: For z9 BC and z10 BC, CP Assist for Cryptographic Function (CPACF) (#3863), including any of the following: Crypto Express2 (#0863), Crypto Express2-1P (#0870), Crypto Express3 (#0864), Crypto Express3-1P (#0871). For z9 EC and z10 EC, CP Assist for Cryptographic Function (CPACF) (#3863), including any of the following: Crypto Express2 (#0863), Crypto Express3 (#0864).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0863) Crypto Express2

(No Longer Available as of December 31, 2009)

The Crypto Express2 feature is designed to satisfy high-end server security requirements. It contains two PCI-X adapters which are configured independently, either as a coprocessor supporting secure key transactions or as an accelerator for Secure Sockets Layer (SSL) acceleration.

The Crypto Express2 feature is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification, and supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as a coprocessor).

  • Minimum: None or two. An initial order is two features.
  • Maximum: 8 features (16 PCI-X adapters, two PCI-X adapters per feature).
  • Prerequisites: CP Assist for Cryptographic Function (CPACF) (#3863).
  • Corequisites: TKE workstation (#0859, #0839) if you require security-rich, flexible key entry or remote key management if not using Integrated Cryptographic Service Facility (ICSF) panels.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: None. Internal connection. No external cables.
(#0864) Crypto Express3

(No Longer Available as of June 30, 2012)

The Crypto Express3 feature is designed to satisfy high-end server security requirements. It contains two PCI-E adapters which are configured independently, either as a coprocessor supporting secure key transactions or as an accelerator for Secure Sockets Layer (SSL) acceleration.

The Crypto Express3 feature is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification, and supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as a coprocessor).

  • Minimum: None or two. An initial order is two features.
  • Maximum: 8 features (16 PCI-E adapters, two PCI-E adapters per feature).
  • Prerequisites: CP Assist for Cryptographic Function (CPACF) (#3863).
  • Corequisites: TKE workstation (#0859, #0839, #0840) if you require security-rich, flexible key entry or remote key management if not using Integrated Cryptographic Service Facility (ICSF) panels.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: None. Internal connection. No external cables.
(#0867) TKE 7.1 LIC

(No Longer Available as of June 30, 2013)

The Trusted Key Entry (TKE) 7.1 level of Licensed Internal Code (LIC) is installed in a TKE workstation (#0841). TKE 7.1 LIC is a no-charge enablement feature which is loaded prior to shipment when a TKE workstation is ordered. The TKE 7.1 LIC includes support for the Smart Card Reader (#0885).

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: TKE workstation (#0841); CP Assist for Cryptographic Function (#3863); Crypto Express3 (#0864).
  • Compatibility Conflicts: TKE workstations with TKE 7.1 LIC can be used to control z196, z114, z10 EC, and z10 BC servers.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
(#0870) Crypto Express2-1P

(No Longer Available as of December 31, 2009)

The Crypto Express2-1P feature is designed to satisfy high-end server security requirements, yet is a single port lower-cost entry. It contains one PCI-X adapter which is configured either as a coprocessor supporting secure key transactions or as an accelerator for Secure Sockets Layer (SSL) acceleration.

The Crypto Express2-1P feature is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification, and supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as a coprocessor).

  • Minimum: None or two. An initial order is two features.
  • Maximum: 8 features (8 PCI-X adapters, one PCI-X adapter per feature).
  • Prerequisites: CP Assist for Cryptographic Function (CPACF) (#3863).
  • Corequisites: TKE workstation (#0859, #0839) if you require security-rich, flexible key entry or remote key management if not using Integrated Cryptographic Service Facility (ICSF) panels.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: None. Internal connection. No external cables.
(#0871) Crypto Express3-1P

The Crypto Express3-1P feature is designed to satisfy high-end server security requirements, yet is a single port lower-cost entry. It contains one PCI-E adapter which is configured either as a coprocessor supporting secure key transactions or as an accelerator for Secure Sockets Layer (SSL) acceleration.

The Crypto Express3-1P feature is designed to conform to the Federal Information Processing Standard (FIPS) 140-2 Level 4 Certification, and supports User Defined Extension (UDX) services to implement cryptographic functions and algorithms (when defined as a coprocessor).

  • Minimum: None or two. An initial order is two features.
  • Maximum: 8 features (8 PCI-E adapters, one PCI-E adapter per feature).
  • Prerequisites: CP Assist for Cryptographic Function (CPACF) (#3863).
  • Corequisites: TKE workstation (#0859, #0840, #0839) if you require security-rich, flexible key entry or remote key management if not using Integrated Cryptographic Service Facility (ICSF) panels.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes. Parts removed as a result of feature change become the property of IBM.
  • Cable Order: None. Internal connection. No external cables.
(#0884) TKE additional smart cards

(No Longer Available as of June 30, 2012)

These are Java**-based smart cards which provide a highly efficient cryptographic and data management application built-in to read-only memory for storage of keys, certificates, passwords, applications, and data. The TKE blank smart cards are compliant with FIPS 140-2 Level 2.

  • Minimum: None. Order increment is one. When one is ordered, a quantity of 10 smart cards is shipped.
  • Maximum: 99 (990 blank Smart Cards).
  • Prerequisites: CP Assist for Cryptographic Function (#3863), Crypto Express2 (#0863), or Crypto Express2-1P (#0870).
  • Corequisites: TKE workstation with 5.3 level of LIC (#0854) for secure key parts entry and cryptographic hardware management or ISPF panels for clear key entry and cryptographic hardware management, and TKE Smart Card Reader (#0885).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
  • Cable Order: Not applicable.
(#0885) TKE Smart Card Reader

(No Longer Available as of June 30, 2012)

The TKE Smart Card Reader feature supports the use of smart cards, which resemble a credit card in size and shape, but contain an embedded microprocessor, and associated memory for data storage. Access to and the use of confidential data on the smart cards is protected by a user-defined Personal Identification Number (PIN).

The TKE LIC allows key parts to be stored on diskettes or paper, or optionally on smart cards; a TKE authority key to be stored on a diskette, or optionally on a smart card; and logon to the Cryptographic Coprocessor using a passphrase, or optionally a logon key pair. One feature includes two Smart Card Readers, two cables to connect them to the TKE workstation, and 20 smart cards.

  • Minimum: None. Order increment is one. Included are two Smart Card Readers and 20 smart cards.
  • Maximum: Ten.
  • Prerequisites: CP Assist for Cryptographic Function (#3863), Crypto Express2 feature (#0863), or Crypto Express2-1P (#0870).
  • Corequisites: TKE workstation with 5.3 level of LIC (#0854) for secure key parts entry and Crypto hardware management or ISPF panels for clear key entry and Crypto hardware management.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
  • Cable Order: None. External cables to connect two Smart Card Readers to the TKE workstation are shipped with the feature.
(#1750) Licensed Internal Code (LIC) ship using Net Indicator

(No Longer Available as of June 30, 2013)

This indicator flag is added to orders that are Licensed Internal Code (LIC) only and delivered by Web tools such as Customer Initiated Upgrade (CIU). There are no parts. The flag is generated by the system and not orderable.

(#1991) Pre-planned memory

Pre-planned memory features are used to build the physical infrastructure for Plan Ahead memory. Each feature equates to 4 GB of physical memory.

  • Minimum number of features: None.
  • Maximum number of features: 61.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#1992) Pre-planned memory activation

(No Longer Available as of June 30, 2013)

Pre-planned memory activation features are required to activate the physical memory installed using feature #1991 into usable, logical memory. One feature #1992 is needed for each feature #1991.

  • Minimum number of features: None.
  • Maximum number of features: 61.
  • Prerequisites: Pre-planned memory (#1991)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
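The pairing rule above (one #1992 activation per #1991 physical feature, each representing 4 GB) reduces to simple arithmetic. The following sketch is illustrative only; the feature codes are from this document, but the function and its name are not part of any IBM tooling.

```python
FEATURE_GB = 4       # each #1991 / #1992 feature equates to 4 GB
MAX_FEATURES = 61    # maximum count for both #1991 and #1992

def plan_ahead_memory(installed_1991: int, activated_1992: int) -> dict:
    """Return physical vs. activated Plan Ahead memory in GB.

    One #1992 is needed for each #1991 being activated, so the
    activation count can never exceed the installed feature count.
    """
    if not 0 <= installed_1991 <= MAX_FEATURES:
        raise ValueError("invalid #1991 count")
    if not 0 <= activated_1992 <= installed_1991:
        raise ValueError("#1992 count cannot exceed #1991 count")
    return {
        "physical_gb": installed_1991 * FEATURE_GB,
        "activated_gb": activated_1992 * FEATURE_GB,
    }
```

For example, ten #1991 features with four #1992 activations yield 40 GB of physical infrastructure, of which 16 GB is usable logical memory.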
(#2323) 16-port ESCON

(No Longer Available as of June 30, 2012)

The Enterprise Systems Connection (ESCON) channel supports the ESCON architecture and provides the capability to directly attach to ESCON-supported Input/Output (I/O) devices (storage, disk, printers, control units) in a switched point-to-point topology at unrepeated distances of up to 3 kilometers (1.86 miles) at a link data rate of 17 megabytes (MB) per second. The ESCON channel utilizes 62.5 micron multimode fiber optic cabling terminated with an MT-RJ connector. The high density ESCON feature has 16 ports or channels, 15 of which can be activated for customer use. One channel is always reserved as a spare, in the event of a failure of one of the other channels.

Feature 2323 cannot be ordered. The configuration tool selects the quantity of features based upon the order quantity of ESCON channels (#2324), distributing the channels across features for high availability. After the first pair, ESCON features are installed in increments of one.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 32 features. A maximum of 480 active channels, 15 channels per feature.
  • Prerequisites: None.
  • Corequisites: ESCON channel (#2324).
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: The unrepeated distance between ESCON channels is limited to 3 kilometers (1.86 miles) using 62.5 micron multimode fiber optic cables. If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 62.5 micron multimode fiber optic cable terminated with an MT-RJ connector is required. In the event that the target or downstream device does not support an MT-RJ connector, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.
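The sizing rules above (channels ordered in increments of four via #2324, 15 active channels per 16-port feature, features installed as a pair first and then singly) imply a minimum feature count that can be sketched as follows. The configuration tool's actual distribution algorithm for high availability is not published; this is only the arithmetic stated in the text, with an illustrative function name.

```python
import math

CHANNELS_PER_FEATURE = 15   # 16 ports, one always reserved as a spare
MAX_CHANNELS = 480
ORDER_INCREMENT = 4

def escon_features_needed(channels: int) -> int:
    """Minimum number of 16-port ESCON features (#2323) for an order."""
    if channels == 0:
        return 0
    if channels % ORDER_INCREMENT != 0 or channels > MAX_CHANNELS:
        raise ValueError("channels are ordered in increments of 4, up to 480")
    features = math.ceil(channels / CHANNELS_PER_FEATURE)
    return max(features, 2)   # ESCON features are installed as a pair first
```

At the maximum of 480 active channels this yields the stated limit of 32 features.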
(#2324) ESCON channel port

(No Longer Available as of June 30, 2012)

ESCON channels are available on a channel (port) basis in increments of four. The channel quantity is selected and Licensed Internal Code, Configuration Control (LICCC) is shipped to activate the desired quantity of channels on the 16-port ESCON feature (#2323). Each channel utilizes a Light Emitting Diode (LED) as the optical transceiver, and supports use of a 62.5 micron multimode fiber optic cable terminated with a small form factor, industry-standard MT-RJ connector.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 480 channels.
  • Prerequisites: None.
  • Corequisites: If a 62.5 micron multimode fiber optic cable terminated with an ESCON Duplex connector is being reused to connect this feature to a downstream device, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.
  • Compatibility Conflicts: The 16-port ESCON feature has a small form factor optical transceiver that supports an MT-RJ connector only. A multimode fiber optic cable with an ESCON Duplex connector is not supported with this feature.
  • Customer Setup: No.
  • Limitations: The unrepeated distance between ESCON channels is limited to 3 kilometers (1.86 miles) using 62.5 micron multimode fiber optic cables. If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 62.5 micron multimode fiber optic cable terminated with an MT-RJ connector is required. In the event that the target or downstream device does not support an MT-RJ connector, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.
(#3211) Internal Battery (IBF)

(No Longer Available as of June 30, 2012)

The Internal Battery Feature (IBF) provides battery backup power. When selected, the actual number of IBFs is determined based on the power requirements and model. The batteries are installed in pairs.

  • Minimum number of features: None.
  • Maximum number of features: Two (one pair is the maximum).
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
(#3321) FICON Express4 10KM LX

(No Longer Available as of October 27, 2009)

The FICON Express4 10KM LX (long wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a Storage Area Network (SAN). The FICON Express4 10KM LX feature supports an unrepeated distance of 10 kilometers (6.2 miles). Each of the four independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 10KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 10KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 10KM LX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 10KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 32 features, up to 128 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: If a 50/125 or 62.5/125 micrometer multimode fiber optic cable is being reused with the FICON Express4 10KM LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors/receptacles in the enterprise. When using MCP cables, the speed is limited to 1 Gbps.

    Note: The speed must be set to 1 Gbps in a switch. The channel and control unit do not have the capability to be manually set to a speed.

  • Compatibility Conflicts: The FICON Express4 10KM LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Setup: No.
  • Limitations:
    • The FICON Express4 10KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 10KM LX channels is limited to 10 kilometers (6.2 miles).
    • IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3322) FICON Express4 SX

(No Longer Available as of October 27, 2009)

The FICON Express4 SX (short wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network. Each of the four independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 SX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 SX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 SX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 SX feature utilizes a short wavelength (SX) laser as the optical transceiver, and supports use of a 50/125 micrometer multimode fiber optic cable or a 62.5/125-micrometer multimode fiber optic cable terminated with an LC Duplex connector.

Note: IBM does not support a mix of 50 and 62.5 micron fiber optic cabling in the same link. SX may also be referred to as SW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 32 features, up to 128 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The FICON Express4 SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations:
    • The FICON Express4 SX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 SX channels using multimode fiber optic cabling is as follows:
                     Fiber core        Fiber bandwidth
     Link data rate  in microns (u)    @ 850 nm (SX laser)   Unrepeated distance
     --------------  --------------    -------------------   -------------------
     4 Gbps             50 u             2000 MHz-km          270 meters (886 feet)
     4 Gbps             50 u              500 MHz-km          150 meters (492 feet)
     4 Gbps             62.5 u            200 MHz-km           70 meters (230 feet)
     4 Gbps             62.5 u            160 MHz-km           55 meters (180 feet)
     2 Gbps             50 u             2000 MHz-km          500 meters (1,640 feet)
     2 Gbps             50 u              500 MHz-km          300 meters (984 feet)
     2 Gbps             62.5 u            200 MHz-km          150 meters (492 feet)
     2 Gbps             62.5 u            160 MHz-km          120 meters (394 feet)
     1 Gbps             50 u             2000 MHz-km          860 meters (2,822 feet)
     1 Gbps             50 u              500 MHz-km          500 meters (1,640 feet)
     1 Gbps             62.5 u            200 MHz-km          300 meters (984 feet)
     1 Gbps             62.5 u            160 MHz-km          250 meters (820 feet)

  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50/125 micrometer multimode fiber optic cable, or a 62.5/125 micrometer multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
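The unrepeated-distance table above can be expressed as a small lookup keyed by link data rate and fiber modal bandwidth. This is an illustrative sketch only; the data values come from the table, but the names and structure are not part of any IBM tool.

```python
# (link rate in Gbps, modal bandwidth in MHz-km @ 850 nm) -> meters
UNREPEATED_DISTANCE_M = {
    (4, 2000): 270, (4, 500): 150, (4, 200): 70,  (4, 160): 55,
    (2, 2000): 500, (2, 500): 300, (2, 200): 150, (2, 160): 120,
    (1, 2000): 860, (1, 500): 500, (1, 200): 300, (1, 160): 250,
}

def max_unrepeated_distance_m(gbps: int, modal_bw_mhz_km: int) -> int:
    """Maximum unrepeated distance in meters for a FICON Express4 SX link."""
    try:
        return UNREPEATED_DISTANCE_M[(gbps, modal_bw_mhz_km)]
    except KeyError:
        raise ValueError("unsupported link rate / fiber bandwidth combination")
```

For example, 50-micron fiber with 2000 MHz-km modal bandwidth running at 4 Gbps supports up to 270 meters unrepeated.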
(#3318) FICON Express4-2C SX

(No Longer Available as of June 30, 2012)

The FICON Express4-2C SX (short wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network. Each of the two independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4-2C SX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4-2C SX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4-2C SX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4-2C SX feature utilizes a short wavelength (SX) laser as the optical transceiver, and supports use of a 50/125 micrometer multimode fiber optic cable or a 62.5/125-micrometer multimode fiber optic cable terminated with an LC Duplex connector.

Note: IBM does not support a mix of 50 and 62.5 micron fiber optic cabling in the same link. SX may also be referred to as SW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 32 features, up to 64 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The FICON Express4-2C SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations:
    • The FICON Express4-2C SX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4-2C SX channels using multimode fiber optic cabling is as follows:
                     Fiber core        Fiber bandwidth
     Link data rate  in microns (u)    @ 850 nm (SX laser)   Unrepeated distance
     --------------  --------------    -------------------   -------------------
     4 Gbps             50 u             2000 MHz-km          270 meters (886 feet)
     4 Gbps             50 u              500 MHz-km          150 meters (492 feet)
     4 Gbps             62.5 u            200 MHz-km           70 meters (230 feet)
     4 Gbps             62.5 u            160 MHz-km           55 meters (180 feet)
     2 Gbps             50 u             2000 MHz-km          500 meters (1,640 feet)
     2 Gbps             50 u              500 MHz-km          300 meters (984 feet)
     2 Gbps             62.5 u            200 MHz-km          150 meters (492 feet)
     2 Gbps             62.5 u            160 MHz-km          120 meters (394 feet)
     1 Gbps             50 u             2000 MHz-km          860 meters (2,822 feet)
     1 Gbps             50 u              500 MHz-km          500 meters (1,640 feet)
     1 Gbps             62.5 u            200 MHz-km          300 meters (984 feet)
     1 Gbps             62.5 u            160 MHz-km          250 meters (820 feet)

  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50/125 micrometer multimode fiber optic cable, or a 62.5/125 micrometer multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3323) FICON Express4 2C 4 KM LX

(No Longer Available as of June 30, 2012)

The FICON Express4 2C 4 KM LX (long wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a Storage Area Network (SAN). The FICON Express4 4KM feature supports an unrepeated distance of 4 kilometers (2.5 miles). Each of the two independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 2C 4 KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 2C 4 KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 2C 4 KM LX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 2C 4 KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 32 features, up to 64 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: If a 50/125 or 62.5/125 micrometer multimode fiber optic cable is being reused with the FICON Express4 2C 4 KM LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors/receptacles in the enterprise. When using MCP cables, the speed is limited to 1 Gbps.

    Note: The speed must be set to 1 Gbps in a switch. The channel and control unit do not have the capability to be manually set to a speed.

  • Compatibility Conflicts: The FICON Express4 2C 4 KM LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX). LX may also be referred to as LW by vendors.
  • Customer Setup: No.
  • Limitations:
    • The FICON Express4 2C 4 KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 2C 4 KM LX channels is limited to 4 kilometers (2.5 miles). If greater distances are desired, the FICON Express4 10KM LX feature (#3321) should be ordered.
    • IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.

Open Systems Adapter (OSA) family of LAN adapters

All of the OSA features support the Queued Direct Input/Output (QDIO) architecture, allowing an OSA feature to directly communicate with the server's communications program through the use of data queues in memory. QDIO is designed to eliminate the use of channel programs and Channel Control Words (CCWs), which can help reduce host interrupts and accelerate TCP/IP packet transmission.

There are multiple Channel Path Identifier (CHPID) types that may be supported by an OSA port, independently. Refer to each of the features for the CHPID types supported.

  • CHPID type OSC - OSA-Integrated Console Controller (OSA-ICC) supporting TN3270E and non-SNA DFT 3270 emulation.
  • CHPID type OSD - Queued Direct Input/Output (QDIO), supporting Transmission Control Protocol/Internet Protocol (TCP/IP) when in Layer 3 mode. Use TN3270E or Enterprise Extender for SNA traffic. When in Layer 2 mode the port is protocol-independent.
  • CHPID type OSE - Non-QDIO, supporting TCP/IP and SNA/APPN/HPR.
  • CHPID type OSN - OSA-Express for NCP supporting LPAR-to-LPAR communication to access IBM Communication Controller for Linux on System z.
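The CHPID types above amount to a small mode table. As a purely hypothetical illustration (these names and descriptions paraphrase the list above and are not from any IBM API):

```python
# OSA CHPID type -> one-line description, paraphrased from the feature text
OSA_CHPID_TYPES = {
    "OSC": "OSA-ICC: TN3270E and non-SNA DFT 3270 emulation",
    "OSD": "QDIO: TCP/IP in Layer 3 mode; protocol-independent in Layer 2",
    "OSE": "Non-QDIO: TCP/IP and SNA/APPN/HPR",
    "OSN": "OSA for NCP: LPAR-to-LPAR access to Communication Controller for Linux",
}

def describe_chpid(chpid_type: str) -> str:
    """Return a one-line description of an OSA CHPID type."""
    return OSA_CHPID_TYPES.get(chpid_type.upper(), "unknown CHPID type")
```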
(#3324) FICON Express4 4KM LX

(No Longer Available as of October 27, 2009)

The FICON Express4 4KM LX (long wavelength) feature conforms to the Fibre Connection (FICON) architecture and the Fibre Channel (FC) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a Storage Area Network (SAN). The FICON Express4 4KM feature supports an unrepeated distance of 4 kilometers (2.5 miles). Each of the four independent ports/channels is capable of 1 gigabit per second (Gbps), 2 Gbps, or 4 Gbps depending upon the capability of the attached switch or device. The link speed is auto-negotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express4 4KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express4 4KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

There are two modes of operation supported by each FICON Express4 4KM LX channel, independently, for connectivity to servers, switches, directors, disks, tapes, and printers:

  1. Native FICON and FICON Channel-to-Channel (CTC) (CHPID type FC) in the z/OS, z/VM, z/VSE, z/TPF, TPF, and Linux on System z environments
  2. Fibre Channel Protocol (CHPID type FCP) which supports attachment to SCSI devices using Fibre Channel switches or directors in z/VM, z/VSE, and Linux on System z environments

The FICON Express4 4KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 32 features, up to 128 channels; can be any combination of FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: If a 50/125 or 62.5/125 micrometer multimode fiber optic cable is being reused with the FICON Express4 4KM LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors/receptacles in the enterprise. When using MCP cables, the speed is limited to 1 Gbps.

    Note: The speed must be set to 1 Gbps in a switch. The channel and control unit do not have the capability to be manually set to a speed.

  • Compatibility Conflicts: The FICON Express4 4KM LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX). LX may also be referred to as LW by vendors.
  • Customer Setup: No.
  • Limitations:
    • The FICON Express4 4KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The unrepeated distance between FICON Express4 4KM LX channels is limited to 4 kilometers (2.5 miles). If greater distances are desired, the FICON Express4 10KM LX feature (#3321) should be ordered.
    • IBM supports interoperability of 10 km transceivers with 4 km transceivers provided the unrepeated distance between a 10 km transceiver and a 4 km transceiver does not exceed 4 km (2.5 miles).
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3325) FICON Express8 10KM LX

(No Longer Available as of June 30, 2012)

The FICON Express8 10KM LX (long wavelength) feature conforms to the Fibre connection (FICON) architecture, the High Performance FICON for System z (zHPF) architecture, and the Fibre Channel Protocol (FCP) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network (SAN).

Each of the four independent ports/channels is capable of 2 gigabits per second (Gbps), 4 Gbps, or 8 Gbps depending upon the capability of the attached switch or device. The link speed is autonegotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels.

FICON Express8 10KM LX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express8 10KM LX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

Each FICON Express8 10KM LX channel can be defined independently, for connectivity to servers, switches, directors, disks, tapes, and printers as:

  1. Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) - (CHPID type FC); native FICON and zHPF protocols are supported simultaneously.
  2. Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices directly or through Fibre Channel switches or directors.

The FICON Express8 10KM LX feature utilizes a long wavelength (LX) laser as the optical transceiver and supports use of a 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector. LX may also be referred to as LW by vendors.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB, IFB or ISC-3) must be present in a server.
  • Maximum: 32 features; can be any combination of FICON Express8, FICON Express4, FICON Express2, and FICON Express features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known. Ensure the attaching/downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Setup: No.
  • Limitations:
    • The FICON Express8 10KM LX feature does not support FICON Bridge (CHPID type FCV).
    • The FICON Express8 10KM LX feature does not support autonegotiation to 1 Gbps.
    • The FICON Express8 10 KM LX feature is designed to support distances up to 10 kilometers (6.2 miles) over 9 micron single mode fiber optic cabling without performance degradation. To avoid performance degradation at extended distance, FICON switches or directors (for buffer credit provision) or Dense Wavelength Division Multiplexers (for buffer credit simulation) may be required.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9/125 micrometer single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
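
The autonegotiated link speed described above can be modeled as picking the highest data rate common to both ends of the link. A minimal sketch (the function name and speed sets are illustrative, not part of the FICON architecture):

```python
def negotiate_link_speed(channel_speeds, device_speeds):
    """Return the highest data rate (in Gbps) supported by both ends.

    Illustrative model only: real FICON autonegotiation happens in the
    link hardware and is transparent to users and applications.
    """
    common = set(channel_speeds) & set(device_speeds)
    if not common:
        raise ValueError("no common link speed; the link cannot come up")
    return max(common)

# FICON Express8 ports support 2, 4, or 8 Gbps (no 1 Gbps autonegotiation).
ficon_express8 = {2, 4, 8}
print(negotiate_link_speed(ficon_express8, {1, 2, 4}))  # older 4 Gbps switch -> 4
```

Note that a device capable only of 1 Gbps shares no common speed with FICON Express8, which mirrors the limitation stated above.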
(#3326) FICON Express8 SX

(No Longer Available as of June 30, 2012)

The FICON Express8 SX (short wavelength) feature conforms to the Fibre connection (FICON) architecture, the High Performance FICON for System z (zHPF) architecture, and the Fibre Channel Protocol (FCP) architecture, providing connectivity between a combination of servers, directors, switches, and devices (control units, disk, tape, printers) in a storage area network (SAN).

Each of the four independent ports/channels is capable of 2 gigabits per second (Gbps), 4 Gbps, or 8 Gbps depending upon the capability of the attached switch or device. The link speed is autonegotiated, point-to-point, and is transparent to users and applications. Each of the channels utilizes a small form factor pluggable (SFP) optic and is designed to be individually repaired without affecting the other channels. FICON Express8 SX (CHPID type FC or FCP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

The FICON Express8 SX feature also supports cascading (the connection of two FICON Directors in succession) to minimize the number of cross-site connections and help reduce implementation costs for disaster recovery applications, GDPS, and remote copy.

Each FICON Express8 SX channel can be defined independently for connectivity to servers, switches, directors, disks, tapes, and printers as:

  1. Native FICON, High Performance FICON for System z (zHPF), and FICON Channel-to-Channel (CTC) - (CHPID type FC); native FICON and zHPF protocols are supported simultaneously.
  2. Fibre Channel Protocol (CHPID type FCP), which supports attachment to SCSI devices directly or through Fibre Channel switches or directors.

The FICON Express8 SX feature utilizes a short wavelength (SX) laser as the optical transceiver, and supports use of a 50/125 micrometer multimode fiber optic cable or a 62.5/125 micrometer multimode fiber optic cable terminated with an LC Duplex connector. SX may also be referred to as SW by vendors.

Note: IBM does not support a mix of 50 and 62.5 micron fiber optic cabling in the same link.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB, PSIFB or ISC-3) must be present in a server.
  • Maximum: 32 features; can be any combination of FICON Express8, FICON Express4, FICON Express2, and FICON Express LX and SX features.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known. Ensure the attaching/downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Set-up: No.
  • Limitations:
    • The FICON Express8 SX feature does not support FICON Bridge (CHPID type FCV).
    • The FICON Express8 SX feature does not support autonegotiation to 1 Gbps.
    • FICON Express8 is designed to support distances up to 10 kilometers (6.2 miles) without performance degradation. To avoid performance degradation at extended distance, FICON switches or directors (for buffer credit provision) or Dense Wavelength Division Multiplexers (for buffer credit simulation) may be required.
    • For unrepeated distances for FICON Express8 SX, refer to System z Planning for Fiber Optic Links (GA23-0367), available in the Library section of Resource Link.

      www.ibm.com/servers/resourcelink

  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50/125 micrometer multimode fiber optic cable, or a 62.5/125 micrometer multimode fiber optic cable terminated with an LC Duplex connector, is required for connecting this feature to the selected device.
(#3362) OSA-Express3 Gigabit Ethernet LX

(No Longer Available as of June 30, 2012)

The OSA-Express3 Gigabit Ethernet (GbE) long wavelength (LX) feature has four ports. Two ports reside on a PCI-E adapter and share a channel path identifier (CHPID). There are two PCI-E adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE LX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 96 ports (four ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: If a 50 or 62.5 micron multimode fiber optic cable is being reused with the OSA-Express3 GbE LX feature, a pair of Mode Conditioning Patch cables is required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors and receptacles in the enterprise.
  • Compatibility Conflicts: The OSA-Express3 GbE LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables are required, one for each end of the link.

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3363) OSA-Express3 Gigabit Ethernet SX

(No Longer Available as of June 30, 2012)

The OSA-Express3 Gigabit Ethernet (GbE) short wavelength (SX) feature has four ports. Two ports reside on a PCI-E adapter and share a channel path identifier (CHPID). There are two PCI-E adapters per feature. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3 GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 96 ports (four ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None known.
  • Compatibility Conflicts: The OSA-Express3 GbE SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or a 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3364) OSA-Express2 Gigabit Ethernet LX

(No Longer Available as of June 30, 2009)

The OSA-Express2 Gigabit Ethernet (GbE) long wavelength (LX) feature has two independent ports. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express2 GbE LX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: If a 50 or 62.5 micron multimode fiber optic cable is being reused with the OSA-Express2 GbE LX feature, a pair of Mode Conditioning Patch cables are required, one for each cable end. Select the correct cable based upon the type of fiber and the connectors and receptacles in the enterprise.
  • Compatibility Conflicts: The OSA-Express2 GbE LX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a long wavelength (LX) transceiver. The sending and receiving transceivers must be the same (LX to LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device. If multimode fiber optic cables are being reused, a pair of Mode Conditioning Patch cables are required, one for each end of the link.

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3365) OSA-Express2 GbE SX

(No Longer Available as of June 30, 2009)

The OSA-Express2 Gigabit Ethernet (GbE) short wavelength (SX) feature has two independent ports. Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express2 GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None known.
  • Compatibility Conflicts: The OSA-Express2 GbE SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or a 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3366) OSA-Express2 1000BASE-T Ethernet

(No Longer Available as of December 31, 2009)

The OSA-Express2 1000BASE-T Ethernet feature has two independent ports. Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express2 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: When the OSA-Express2 feature is set to autonegotiate, the target device must also be set to autonegotiate. Both ends must match (autonegotiate on or off).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. The OSA-Express2 1000BASE-T Ethernet feature supports use of an EIA/TIA Category 5 Unshielded Twisted Pair (UTP) cable terminated with an RJ-45 connector with a maximum length of 100 meters (328 feet).

    Note: No cable is required when a port is defined as CHPID type OSN.
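
The supported settings and the both-ends-must-match rule above can be sketched as a small selection model (the function name and resolution order are illustrative; actual negotiation is performed by the Ethernet hardware):

```python
# Supported (speed_mbps, duplex) settings for the 1000BASE-T feature:
# 10 half/full, 100 half/full, 1000 full duplex only.
SUPPORTED = {(10, "half"), (10, "full"), (100, "half"), (100, "full"), (1000, "full")}

def resolve_link(feature_autoneg, partner_autoneg, partner_settings):
    """Illustrative sketch: both ends must match (autonegotiate on or off),
    then the best common (speed, duplex) setting is chosen."""
    if feature_autoneg != partner_autoneg:
        raise ValueError("autonegotiation mismatch: both ends must match")
    common = SUPPORTED & partner_settings
    # Prefer the highest speed, then full duplex; 1 Gbps is full duplex only.
    return max(common, key=lambda s: (s[0], s[1] == "full"))

print(resolve_link(True, True, {(100, "full"), (1000, "full")}))  # (1000, 'full')
```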

(#3367) OSA-Express3 1000BASE-T Ethernet

(No Longer Available as of June 30, 2012)

The OSA-Express3 1000BASE-T Ethernet feature has four ports. Two ports reside on a PCI-E adapter and share a channel path identifier (CHPID). There are two PCI-E adapters per feature. Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).

  • Minimum: None.
  • Maximum: 24 features, 96 ports (four ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: When the OSA-Express3 feature is set to autonegotiate, the target device must also be set to autonegotiate. Both ends must match (autonegotiate on or off).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. The OSA-Express3 1000BASE-T Ethernet feature supports use of an EIA/TIA Category 5 Unshielded Twisted Pair (UTP) cable terminated with an RJ-45 connector with a maximum length of 100 meters (328 feet).

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3369) OSA-Express3-2P 1000BASE-T Ethernet

(No Longer Available as of June 30, 2012)

The OSA-Express3-2P 1000BASE-T Ethernet feature has two ports which reside on a single PCI-E adapter and share one channel path identifier (CHPID). Each port supports attachment to either a 10BASE-T (10 Mbps), 100BASE-TX (100 Mbps), or 1000BASE-T (1000 Mbps or 1 Gbps) Ethernet Local Area Network (LAN). The feature supports auto-negotiation and automatically adjusts to 10, 100, or 1000 Mbps, depending upon the LAN. When the feature is set to autonegotiate, the target device must also be set to autonegotiate. The feature supports the following settings: 10 Mbps half or full duplex, 100 Mbps half or full duplex, 1000 Mbps (1 Gbps) full duplex. OSA-Express3-2P 1000BASE-T Ethernet supports CHPID types OSC, OSD, OSE, and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

When configured at 1 Gbps, the 1000BASE-T Ethernet feature operates in full duplex mode only and supports jumbo frames when in QDIO mode (CHPID type OSD).

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: When the OSA-Express3 feature is set to autonegotiate, the target device must also be set to autonegotiate. Both ends must match (autonegotiate on or off).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. The OSA-Express3-2P 1000BASE-T Ethernet feature supports use of an EIA/TIA Category 5 Unshielded Twisted Pair (UTP) cable terminated with an RJ-45 connector with a maximum length of 100 meters (328 feet).

    Note: No cable is required when a port is defined as CHPID type OSN.

(#3370) OSA-Express3 10 Gigabit Ethernet LR

(No Longer Available as of June 30, 2012)

The OSA-Express3 10 Gigabit Ethernet (GbE) long reach (LR) feature has two ports. Each port resides on a PCI-E adapter and has its own channel path identifier (CHPID). There are two PCI-E adapters per feature. OSA-Express3 10 GbE LR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE LR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express3 10 GbE LR feature supports use of an industry standard small form factor LC Duplex connector. A conversion kit may be required if there are fiber optic cables terminated with SC Duplex connectors. Ensure the attaching or downstream device has a long reach (LR) transceiver. The sending and receiving transceivers must be the same (LR to LR which may also be referred to as LW or LX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 9 micron single mode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
(#3371) OSA-Express3 10 Gigabit Ethernet SR

(No Longer Available as of June 30, 2012)

The OSA-Express3 10 Gigabit Ethernet (GbE) Short Reach (SR) feature has two ports. Each port resides on a PCI-E adapter and has its own channel path identifier (CHPID). There are two PCI-E adapters per feature. OSA-Express3 10 GbE SR is designed to support attachment to a 10 Gigabits per second (Gbps) Ethernet Local Area Network (LAN) or Ethernet switch capable of 10 Gbps. OSA-Express3 10 GbE SR supports CHPID type OSD exclusively. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: The OSA-Express3 10 GbE SR feature supports use of an industry standard small form factor LC Duplex connector. A conversion kit may be required if there are fiber optic cables terminated with SC Duplex connectors. Ensure the attaching or downstream device has a Short Reach (SR) transceiver. The sending and receiving transceivers must be the same (SR-to-SR).
  • Customer Setup: No.
  • Limitations: OSA-Express3 10 GbE SR supports CHPID type OSD exclusively. It does not support any other CHPID type.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.
    • Unrepeated distance:
      • With 50 micron fiber at 2000 MHz-km: 300 meters (984 feet)
      • With 50 micron fiber at 500 MHz-km: 82 meters (269 feet)
      • With 62.5 micron fiber at 200 MHz-km: 33 meters (108 feet)
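
The metric/imperial pairs in the distance list above can be checked with a simple conversion (1 m ≈ 3.28084 ft):

```python
def meters_to_feet(m):
    """Convert meters to feet (1 m = 3.28084 ft)."""
    return m * 3.28084

# The unrepeated distances listed for OSA-Express3 10 GbE SR.
for fiber, mhz_km, meters in [("50 micron", 2000, 300),
                              ("50 micron", 500, 82),
                              ("62.5 micron", 200, 33)]:
    print(f"{fiber} at {mhz_km} MHz-km: {meters} m = {meters_to_feet(meters):.0f} ft")
```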
(#3373) OSA-Express3-2P Gigabit Ethernet SX

(No Longer Available as of June 30, 2012)

The OSA-Express3-2P Gigabit Ethernet (GbE) short wavelength (SX) feature has two ports which reside on a single PCI-E adapter and share one channel path identifier (CHPID). Each port supports attachment to a one Gigabit per second (Gbps) Ethernet Local Area Network (LAN). OSA-Express3-2P GbE SX supports CHPID types OSD and OSN. It can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 24 features, 48 ports (two ports per feature). The maximum quantity of all OSA-Express3 and OSA-Express2 features cannot exceed 24 features per server.
  • Prerequisites: None.
  • Corequisites: None known.
  • Compatibility Conflicts: The OSA-Express3 GbE SX feature supports use of an LC Duplex connector. A conversion kit may be required if there are fiber optic cables with SC Duplex connectors. Ensure the attaching or downstream device has a short wavelength (SX) transceiver. The sending and receiving transceivers must be the same (SX to SX).
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 50 or a 62.5 micron multimode fiber optic cable terminated with an LC Duplex connector is required for connecting this feature to the selected device.

    Note: No cable is required when a port is defined as CHPID type OSN.

Internal Coupling Channel (IC)

This description is for information purposes only. ICs are not identified as a feature. The Internal Coupling channel (IC) is for internal communication between Coupling Facilities defined in Logical Partitions (LPARs) and z/OS images on the same server. ICs do have a Channel Path Identifier (CHPID), which is type ICP, and are assigned using IOCP or HCD. There is no physical hardware. Care should be taken to ensure that the planned number of ICs plus external Coupling Links (ICB-4s, active ISC-3s, and IFBs) does not exceed 64 CHPIDs per server. ICs (CHPID type ICP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

  • Minimum: None.
  • Maximum: 32 ICs.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: Compatibility mode is not supported.
  • Customer Setup: ICs must be defined in the IOCDS using either IOCP or HCD.
  • Limitations: The maximum number of Coupling Link CHPIDs combined (ICs, ICB-4s, active ISC-3 links, and IFBs) cannot exceed 64 per server.
  • Field Installable: Yes.
  • Cable Order: None. There are no external cables.
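
Since ICs are assigned using IOCP or HCD, a definition might look like the following IOCP sketch. The CHPID numbers, CSS assignments, and partition layout are illustrative assumptions, not values from this manual; consult the IOCP User's Guide for the exact keywords for your configuration.

```
* Illustrative only: a pair of Internal Coupling CHPIDs (TYPE=ICP),
* one in CSS 0 (z/OS side) connected to one in CSS 1 (CF side).
CHPID PATH=(CSS(0),E0),SHARED,TYPE=ICP,CPATH=(CSS(1),E1)
CHPID PATH=(CSS(1),E1),SHARED,TYPE=ICP,CPATH=(CSS(0),E0)
```

Each ICP CHPID names its partner via CPATH, which is why ICs are always defined in connected pairs even though no physical hardware exists.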
(#3393) ICB-4 link

(No Longer Available as of June 30, 2012)

The Integrated Cluster Bus-4 (ICB-4) link is a member of the family of Coupling Link options. ICB-4 operates at 2 gigabytes per second (GBps). ICB-4 is used by coupled servers to pass information back and forth over high speed links in a Parallel Sysplex environment when the distance between servers is no greater than 7 meters (23 feet). Cables are required. ICB-4 is a "native" connection used between z10 BC, z10 EC, z9 EC, z9 BC, z990, and z890 servers. ICB-4s (CHPID type CBP) can be defined as a spanned channel and can be shared among LPARs within and across LCSSs.

An ICB-4 link consists of one link that attaches directly to an STI port on an MBA fanout card in a book, does not require connectivity to a card in the I/O drawer, and provides one output port to support ICB-4 to ICB-4 connectivity. One ICB-4 connection is required for each end of the link.

  • Minimum: None. At least one I/O feature (ESCON or FICON) or Coupling Link feature (ICB-4, ISC-3, or InfiniBand) must be present in a server.
  • Maximum: 6 features / 12 ICB-4 links.
  • Prerequisites: Raised floor installations only.
  • Corequisites: An ICB-4 feature is required for each end of the link, whether a z10 BC, z10 EC, z9 EC, z9 BC, z990, or z890.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations:
    • An ICB-4 can only communicate with an ICB-4.
    • The distance between ICB-4s cannot exceed 7 meters (23 feet).
    • The combined total of ICB-4s and IFBs (#0163, #0168) cannot exceed 6 features (12 links) per server.
    • The maximum number of Coupling Links combined (ICB-4s, active ISC-3 links, and InfiniBand) cannot exceed 64 per server.
  • Field Installable: Yes.
  • Cable Order: A cable is required and must be ordered. The connector is unique on z10 BC. A 10 meter (33 feet) ICB-4 cable (#0229 z10 BC to System z or #0230 z10 BC to System z10) is used with the ICB-4 link -- 3 meters (10 feet) is used for internal routing and strain relief and 7 meters (23 feet) is available for server-to-server connection. This cable is unique to ICB-4.
(#3863) CP Assist for Cryptographic Function (CPACF) enablement

(No Longer Available as of June 30, 2013)

CPACF, supporting clear key encryption, is activated using the no-charge enablement feature (#3863). The CP Assist for Cryptographic Function (CPACF) is shared between two Processor Units (PUs). For every Processor Unit defined as a Central Processor (CP) or an Integrated Facility for Linux (IFL), the enablement feature provides Advanced Encryption Standard (AES), Data Encryption Standard (DES), Triple Data Encryption Standard (TDES), and Pseudo Random Number Generation (PRNG). Secure Hash Algorithm (SHA-1), SHA-224, SHA-256, SHA-384, and SHA-512 are shipped enabled on all z10 BC servers and do not require the no-charge enablement feature. For new servers shipped from the factory, CPACF enablement (#3863) is loaded prior to shipment. For other than new shipments, the Licensed Internal Code is shipped on an enablement diskette. The function is enabled using the Support Element (SE).

  • Minimum: None.
  • Maximum: One.
  • Prerequisites: None.
  • Corequisites: Crypto Express2 (#0863), or Crypto Express2-1P (# 0870)
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
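
The SHA variants listed above are standard algorithms; the following short Python illustration runs them in software purely for reference (on the z10 BC, CPACF executes them in hardware). The sample message is arbitrary.

```python
import hashlib

# The SHA variants named in the CPACF description, with their digest widths.
for name in ("sha1", "sha224", "sha256", "sha384", "sha512"):
    digest = hashlib.new(name, b"z10 BC").hexdigest()
    # Each hex character encodes 4 bits of the digest.
    print(f"{name}: {len(digest) * 4}-bit digest")
```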

ETR feature on z10 BC is standard

The External Time Reference (ETR) feature is now standard and supports attachment to the Sysplex Timer Model 2 (9037-002) at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second.

The Sysplex Timer Model 2 is the centralized time source that sets the Time-Of-Day (TOD) clocks in all attached servers to maintain synchronization. The Sysplex Timer Model 2 provides the stepping signal that helps ensure that all TOD clocks in a multi-server environment increment in unison to permit full read or write data sharing with integrity. The Sysplex Timer Model 2 is a key component of an IBM Parallel Sysplex environment and a GDPS availability solution for On Demand Business.

Time synchronization and time accuracy on z10 BC: If you require time synchronization across multiple servers (for example, in a Parallel Sysplex), time accuracy for one or more System z servers, or the same time across heterogeneous platforms (System z, UNIX, AIX, and so on), you can meet these requirements either by installing a Sysplex Timer Model 2 (9037-002) or by implementing Server Time Protocol (STP).

The z10 BC server requires the External Time Reference (ETR) feature to attach to a Sysplex Timer. The ETR feature is standard on the z10 BC and supports attachment at an unrepeated distance of up to three kilometers (1.86 miles) and a link data rate of 8 Megabits per second. The distance from the Sysplex Timer to the server can be extended to 100 km using qualified Dense Wavelength Division Multiplexers (DWDMs). However, the maximum repeated distance between Sysplex Timers is limited to 40 km.

The ETR feature utilizes 62.5 micron multimode fiber optic cabling terminated with an MT-RJ connector. The ETR features do not reside in the I/O drawer and do not require connectivity to the I/O drawer.

  • Compatibility Conflicts: The ETR features have a small form factor optical transceiver that supports an MT-RJ connector only. A multimode fiber optic cable with an ESCON Duplex connector is not supported with this feature.
  • Customer Setup: No.
  • Limitations: The unrepeated distance between an ETR feature and a Sysplex Timer Model 2 is limited to 3 kilometers (1.86 miles). If greater distances are desired, an RPQ request should be submitted.
  • Field Installable: Yes.
  • Cable Order: A customer-supplied cable is required. A 62.5 micron multimode fiber optic cable terminated with an MT-RJ connector is required. Since the Sysplex Timer Model 2 supports use of an ESCON Duplex connector, a 62.5 MM MT-RJ to ESCON Conversion Kit may be needed.

The Server Time Protocol (STP) feature is the follow-on to the Sysplex Timer. It is designed to provide the capability for multiple servers and Coupling Facilities to maintain time synchronization with each other, without requiring a Sysplex Timer.

Server Time Protocol is a server-wide facility implemented in the Licensed Internal Code (LIC) of the z10 BC and presents a single view of time to Processor Resource/Systems Manager (PR/SM). STP uses a message-based protocol in which timekeeping information is passed over externally defined Coupling Links: InterSystem Channel-3 (ISC-3) links configured in peer mode, Integrated Cluster Bus-4 (ICB-4) links, and Parallel Sysplex InfiniBand (PSIFB) links. These can be the same links that are already being used in a Parallel Sysplex for Coupling Facility (CF) message communication.

STP is designed to support a multisite sysplex configuration up to 100 km (62 miles) using qualified DWDMs.

(#6095) 20-inch large flat-panel display

(No Longer Available as of November 9, 2010)

The business black 20-inch flat-panel display offers the benefits of a flat-panel monitor, including improved use of space and reduced energy consumption compared to CRT monitors.

  • Minimum: None.
  • Maximum: Ten.
  • Prerequisites: None.
  • Corequisites: HMC feature (for example, #0084, or #0090)
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None
  • Field Installable: Yes. Parts removed as a result of feature conversion become the property of IBM.
  • Cable Order: None.
(#6096) Flat-panel display

(No Longer Available as of June 30, 2012)

The business black flat-panel display offers the benefits of a flat-panel monitor, including improved use of space and reduced energy consumption compared to CRT monitors.

  • Minimum: None.
  • Maximum: Ten (10).
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None
  • Field Installable: Yes. Parts removed as a result of feature conversion become the property of IBM.
  • Cable Order: None.
(#6501) Power Sequence Controller (PSC)

(No Longer Available as of June 30, 2012)

The Power Sequence Controller provides the ability to turn control units on and off from the Central Processor Complex. The PSC feature consists of one PSC24V card, one PSC Y-cable, and two PSC relay boxes that are mounted near the I/O drawers within the server. The PSC24V card always plugs into the first I/O drawer and displaces one I/O feature.

  • Minimum number of features: None.
  • Maximum number of features: Three.
  • Prerequisites: One I/O drawer per #6501.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
(#6811) Integrated Facility for Linux (IFL)

Processor Unit (PU) characterization option. The IFL is a Processor Unit that is purchased and activated for exclusive use of Linux on System z.

  • Minimum number of features: None.
  • Maximum number of features: 10.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6812) Internal Coupling Facility (ICF)

Processor Unit (PU) characterization option. The ICF is a Processor Unit purchased and activated for exclusive use by the Coupling Facility Control Code (CFCC).

  • Minimum number of features: None.
  • Maximum number of features: 10.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6652) System Assist Processor (SAP), optional

(No Longer Available as of June 30, 2013)

Processor Unit (PU) characterization option. The optional SAP is a Processor Unit that is purchased and activated for use as a SAP. This optional SAP is a chargeable feature.

  • Minimum number of features: None.
  • Maximum number of features: Two (2).
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#6805) Additional CBU Test

(No Longer Available as of June 30, 2013)

An additional test activation that can be purchased with each CBU temporary entitlement record. There can be no more than 15 tests per CBU TER.

  • Minimum: 0.
  • Maximum: 15 per instance of Capacity back up (#6818).
  • Prerequisites: #6818
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6814) System z Application Assist Processor (zAAP)

Processor Unit (PU) characterization option. The zAAP is a specialized Processor Unit that provides a Java execution environment for a z/OS environment. zAAPs are designed to operate asynchronously with the CPs to execute Java programming under control of the IBM Java Virtual Machine (JVM).

The IBM JVM processing cycles are designed to be executed on the configured zAAPs with no anticipated modifications to the Java applications. Execution of the JVM processing cycles on a zAAP is a function of the Software Developer's Kit (SDK) 1.4.1 for zSeries or later, z/OS V1.7 or later, and Processor Resource/Systems Manager (PR/SM).

IBM does not impose software charges on zAAP capacity. Additional IBM software charges will apply when additional CP capacity is used.

Customers are encouraged to contact their specific ISVs and USVs directly to determine if their charges will be affected.

  • Minimum number of features: None.
  • Maximum number of features: 5.
  • Prerequisites: For each zAAP installed there must be a corresponding CP permanently purchased and installed.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
(#6815) System z Integrated Information Processor (zIIP)

Processor Unit (PU) characterization option. The zIIP is a subcapacity Processor Unit purchased and activated to accept eligible work from z/OS. The operating system is designed to manage and direct the work between the general purpose processor (CP) and the zIIP. DB2 UDB for z/OS V8 exploits the zIIP capability for eligible workloads.

The zIIP is designed so that a program can work with z/OS to have eligible portions of its enclave Service Request Block (SRB) work directed to the zIIP. The z/OS operating system, acting on the direction of the program running in SRB mode, controls the distribution of the work between the general purpose processor (CP) and the zIIP. Using a zIIP can help free up capacity on the general purpose processor.

  • Minimum number of features: None.
  • Maximum number of features: 5.
  • Prerequisites: For each zIIP installed there must be a corresponding CP permanently purchased and installed.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None known.
  • Field Installable: Yes.
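The zAAP and zIIP ordering rules above (a maximum of five of each, and a corresponding permanently purchased CP for every zAAP or zIIP) can be expressed as a simple check. This is an illustrative sketch only; the function name is hypothetical, and actual enforcement is done by IBM's configuration tooling:

```python
def check_specialty_engines(cps: int, zaaps: int, ziips: int) -> list[str]:
    """Return rule violations for a hypothetical z10 BC engine order.

    Rules quoted from features #6814 and #6815: at most five zAAPs and
    five zIIPs, and each zAAP/zIIP requires a corresponding CP that is
    permanently purchased and installed.
    """
    violations = []
    if zaaps > 5:
        violations.append("more than 5 zAAPs (#6814 maximum is 5)")
    if ziips > 5:
        violations.append("more than 5 zIIPs (#6815 maximum is 5)")
    if zaaps > cps:
        violations.append("each zAAP requires a corresponding purchased CP")
    if ziips > cps:
        violations.append("each zIIP requires a corresponding purchased CP")
    return violations

# 3 CPs with 3 zAAPs and 2 zIIPs satisfies the quoted rules
print(check_specialty_engines(3, 3, 2))  # -> []
```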
(#6816) Unassigned Integrated Facility for Linux (IFL)

Processor Unit characterization option. An unassigned IFL is a Processor Unit purchased for future use as an IFL (#6811). It is offline and unavailable for use.

  • Minimum number of features: None.
  • Maximum number of features: 10.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#7960 - #7968) Fiber Quick Connect

The Fiber Quick Connect (FQC) features are optional features for factory installation of the IBM Fiber Transport System (FTS) fiber harnesses for connection to ESCON channels with MT-RJ connectors and FICON LX channels with LC Duplex connectors. FQC, when ordered, supports all of the installed ESCON channel features and all of the FICON LX features in all of the installed I/O drawers. FQC cannot be ordered on a partial drawer basis. Fiber Quick Connect is for factory installation only and is available on new servers and on initial upgrades to the z10 BC. FQC is not available as an MES to an existing z10 BC.

Each ESCON direct-attach fiber harness connects to six ESCON channels at one end and one coupler in a Multi-Terminated Push-On Connector (MTP) coupler bracket at the opposite end. Each FICON LX direct-attach fiber harness connects to six FICON LX channels at one end and one coupler in an MTP coupler bracket at the opposite end.

These descriptions are for information purposes only. They cannot be ordered. The configuration tool selects the appropriate features and quantities based upon the server configuration.

(#7960) FQC 1st bracket + mounting hardware

This feature cannot be ordered. When FQC is ordered, the configuration tool selects the required number of MTP mounting brackets and bracket clamps based upon the 16-port ESCON feature quantity and the 2-port or 4-port FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324), 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7961) FQC additional brackets (2nd-5th)

This feature cannot be ordered. When FQC is ordered, the configuration tool selects the required number of MTP 10-position coupler brackets to support the 16-port ESCON feature quantity and the 2-port or 4-port FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324), 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
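Given the fan-out described above (six channels per direct-attach harness, each harness terminating in one MTP coupler, and ten coupler positions per MTP bracket), the harness and bracket counts the configuration tool selects can be approximated as follows. This is a hypothetical sketch; the configuration tool's actual selection rules are not published here:

```python
import math

def fqc_estimate(escon_channels: int, ficon_lx_channels: int) -> dict:
    """Approximate FQC harness and MTP-bracket counts (hypothetical sketch).

    Each direct-attach harness serves six channels and terminates in one
    MTP coupler; each MTP coupler bracket holds ten couplers.
    """
    harnesses = math.ceil(escon_channels / 6) + math.ceil(ficon_lx_channels / 6)
    brackets = math.ceil(harnesses / 10)
    return {"harnesses": harnesses, "brackets": brackets}

# 30 ESCON channels (5 harnesses) plus 12 FICON LX channels (2 harnesses)
print(fqc_estimate(30, 12))  # -> {'harnesses': 7, 'brackets': 1}
```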
(#7985) MT-RJ 3.5 ft (1.07 meters) multimode harnesses (qty 5)

(No Longer Available as of June 30, 2012)

This feature cannot be ordered. The description is for information purposes only. A harness is 3.5 feet (1.07 meters) in length. A quantity of 5 harnesses, supporting 30 ESCON channels, is supplied with this feature. The direct-attach harness supports 62.5 micron multimode fiber optic trunk cables. A fiber harness is for use in an I/O drawer and supports the 16-port ESCON feature with the optical transceiver supporting the industry-standard small form factor MT-RJ connector.

A fiber harness has six MT-RJ connectors on one end to attach to six ESCON channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the 16-port ESCON feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7986) MT-RJ 5 ft (1.5 meters) multimode harnesses (qty 5)

(No Longer Available as of June 30, 2012)

This feature cannot be ordered. The description is for information purposes only. A harness is 5 feet (1.5 meters) in length. A quantity of 5 harnesses, supporting 30 ESCON channels, is supplied with this feature. The direct-attach harness supports 62.5 micron multimode fiber optic trunk cables. A fiber harness is for use in an I/O drawer and supports the 16-port ESCON feature with the optical transceiver supporting the industry-standard small form factor MT-RJ connector.

A fiber harness has six MT-RJ connectors on one end to attach to six ESCON channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the 16-port ESCON feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 16-port ESCON (#2323/#2324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7987) LC Duplex 3.5 ft (1.07 meters) single mode harnesses (qty 2)

(No Longer Available as of June 30, 2012)

This feature cannot be ordered. The description is for information purposes only. A harness is 3.5 feet (1.07 meters) in length. A quantity of 2 harnesses, supporting 12 FICON LX channels, is supplied with this feature. The direct-attach harness supports 9 micron single mode fiber optic trunk cables. A fiber harness is for use in an I/O drawer and supports 2-port or 4-port FICON LX features with optical transceivers supporting the industry-standard small form factor LC Duplex connector.

A fiber harness has six LC Duplex connectors on one end to attach to six FICON LX channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#7988) LC Duplex 5 ft (1.5 meter) single mode harnesses (qty 2)

(No Longer Available as of June 30, 2012)

This feature cannot be ordered. The description is for information purposes only. A harness is 5 feet (1.5 meters) in length. A quantity of 2 harnesses, supporting 12 FICON LX channels, is supplied with this feature. The direct-attach harness supports 9 micron single mode fiber optic trunk cables. A fiber harness is for use in an I/O drawer and supports 2-port or 4-port FICON LX features with optical transceivers supporting the industry-standard small form factor LC Duplex connector.

A fiber harness has six LC Duplex connectors on one end to attach to six FICON LX channels. The opposite end has one MTP connector for plugging into the MTP coupler bracket. When FQC is ordered, the configuration tool selects the required number of harnesses based upon the FICON LX feature quantity.

  • Minimum number of features: None.
  • Prerequisites: 2-port FICON Express LX (#2319), 4-port FICON Express2 LX (#3319), 4-port FICON Express4 10KM LX (#3321), 4-port FICON Express4 4KM LX (#3324)
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Field Installable: No.
(#9975) Height reduction for shipping 2098 (z10 BC)

(No Longer Available as of June 30, 2012)

This feature is required if it is necessary to reduce the shipping height of the z10 BC. It should be selected only when deemed necessary for delivery clearance purposes, and ordered only if absolutely essential: it lengthens the installation time and increases the risk of cabling errors during the install activity.

This optional feature should be ordered if you have doorways with openings less than 1941 mm (76.4 inches) high. This feature accommodates doorway openings as low as 1832 mm (72.1 inches). Top hat and side covers are shipped separately. If Internal Battery features (#3210) are a part of the order, they will be shipped separately. Instructions are included for the reassembly on site by IBM personnel.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
(#9976) Frame height reduction for return of 2086/2096/2098 (z890/z9 BC/z10 BC)

(No Longer Available as of June 30, 2012)

The frame height reduction for 2086/2096/2098 provides the tools and instructions to reduce the height of a 2086/2096/2098 when returned to IBM on an upgrade from a z890 to a z10 BC, from a z9 BC to a z10 BC, or from a z10 BC to a z10 EC.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: None
  • Compatibility Conflicts: None known.
(#6817) One CBU year

(No Longer Available as of June 30, 2013)

Used to set the expiration date of a Capacity back up (CBU) temporary entitlement record.

  • Minimum number of features: None.
  • Maximum number of features: Five (5).
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#6818) Capacity back up (CBU)

(No Longer Available as of June 30, 2013)

This feature code corresponds to the number of different CBU Temporary Entitlement Records (TERs) ordered. Each CBU TER contains configuration data corresponding to the number of years, number of tests, and various engine types.

  • Minimum number of features: None.
  • Maximum number of features: Eight (8) per ordering session.
  • Prerequisites: None.
  • Corequisites: CBU Authorization (#9910).
  • Compatibility Conflicts: None known.
  • Customer Setup: The CBU TER must be installed via the HMC Configuration Manager before it can be activated.
  • Limitations: None.
  • Field Installable: Yes.
(#6819) Five (5) additional CBU tests

(No Longer Available as of October 27, 2009)

Additional test activations that can be purchased with each CBU temporary entitlement record. There is a default of five tests per CBU TER and there can be no more than 15 tests per CBU TER.

  • Minimum: None.
  • Maximum: Three (3) per instance of Capacity back up (#6818).
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6820) Single CBU CP-year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of the number of CBU years multiplied by the number of temporary CP capacity features divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6821) 25 CBU CP-year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of the number of CBU years multiplied by the temporary Central Processor (CP) capacity features divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
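The quotient/remainder wording for #6820 and #6821 amounts to splitting the total CBU CP-years into 25-packs plus singles. A minimal sketch, assuming the total is simply the number of CBU years multiplied by the number of temporary CP capacity features (the function name is illustrative):

```python
def cbu_cp_year_split(cbu_years: int, temp_cp_features: int) -> tuple[int, int]:
    """Split total CBU CP-years into #6821 (25 CBU CP-year) and
    #6820 (Single CBU CP-year) feature quantities."""
    total = cbu_years * temp_cp_features
    qty_6821, qty_6820 = divmod(total, 25)  # quotient and remainder by 25
    return qty_6821, qty_6820

# 3 CBU years x 9 temporary CPs = 27 CP-years -> one 25-pack and two singles
print(cbu_cp_year_split(3, 9))  # -> (1, 2)
```

The same split applies, per engine type, to the IFL, ICF, zAAP, zIIP, and SAP CBU-year feature pairs that follow.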
(#6822) Single CBU IFL Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of the number of CBU years multiplied by the number of temporary Integrated Facility for Linux (IFL) features divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6823) 25 CBU IFL Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of the number of CBU years multiplied by the number of temporary IFL features divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6824) Single CBU ICF-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of the number of CBU years multiplied by the number of temporary ICF features divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6825) 25 CBU ICF-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of the number of CBU years multiplied by the number of temporary Internal Coupling Facility (ICF) features divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6826) Single CBU zAAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of the number of CBU years multiplied by the number of temporary zAAP features divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6827) 25 CBU zAAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of the number of CBU years multiplied by the number of temporary System z Application Assist Processor (zAAP) features divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6828) Single CBU zIIP-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of the number of CBU years multiplied by the number of temporary System z Integrated Information Processor (zIIP) features divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6829) 25 CBU zIIP-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of the number of CBU years multiplied by the number of temporary zIIP features divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6830) Single CBU SAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the remainder of the number of CBU years multiplied by the number of temporary SAP features divided by 25.

  • Minimum: None.
  • Maximum: 24.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6831) 25 CBU SAP-Year

(No Longer Available as of June 30, 2013)

Pricing feature equal to the quotient of the number of CBU years multiplied by the number of temporary System Assist Processor (SAP) features divided by 25.

  • Minimum: None.
  • Maximum: 250.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6832) CBU Replenishment

(No Longer Available as of June 30, 2013)

This feature is used to restore the ability to activate a CBU TER. Each CBU TER comes with a default of one activation. An activation enables the resources required for disaster recovery. After an activation, no subsequent activation or additional testing of this CBU TER can occur until this feature is ordered.

  • Minimum: None.
  • Maximum: One.
  • Prerequisites: Capacity back up (#6818).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: No.
  • Limitations: None.
  • Field Installable: Yes.
(#6833) Capacity for Planned Event (CPE)

(No Longer Available as of June 30, 2013)

This feature code corresponds to the number of different CPE Temporary Entitlement Records (TERs) ordered.

  • Minimum: None.
  • Maximum: Eight (8) per ordering session.
  • Prerequisites: None.
  • Corequisites: CPE authorization (#9912).
  • Compatibility Conflicts: None known.
  • Customer Setup: The CPE TER must be installed via the HMC Configuration Manager before it can be activated.
  • Limitations: None.
  • Field Installable: Yes.
(#9896) On/Off CoD Authorization

(No Longer Available as of June 30, 2013)

This feature is ordered to enable the ordering of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9898) Permanent Upgrade authorization

(No Longer Available as of June 30, 2013)

This feature is ordered to enable the ordering of Licensed Internal Code Configuration Control (LICCC) enabled, permanent capacity upgrades through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9900) On-Line Capacity on Demand (CoD) Buying

(No Longer Available as of June 30, 2013)

This feature is ordered to enable purchasing either permanent capacity upgrades or temporary capacity upgrades through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9910) CBU Authorization

(No Longer Available as of June 30, 2013)

This feature enables the purchase of Capacity back up (CBU). This feature is generated when Capacity back up (#6818) is ordered, or it can be ordered by itself. This feature along with On-Line Capacity on Demand (#9900) is required to order CBU through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: Capacity back up (#6818).
  • Compatibility Conflicts: None known.
  • Field Installable: Yes.
(#9912) CPE Authorization

(No Longer Available as of June 30, 2013)

This feature is ordered to enable the purchase of Capacity for Planned Event (CPE). This feature is generated when Capacity for Planned Event (#6833) is ordered. This feature along with On-Line Capacity on Demand (#9900) is required to order CPE through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: One.
  • Prerequisites: None.
  • Corequisites: Capacity for Planned Event (#6833).
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9917) 1 MSU-day

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9918) 100 MSU-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 MSU-day (#9917)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9919) 10,000 MSU-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 MSU-day (#9917), 100 MSU-days (#9918)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
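The three MSU-day features decompose a billed On/Off CoD total into 10,000s, 100s, and 1s. A sketch of that decomposition follows; the largest-tier-first split is an assumption for illustration, since Resource Link's actual billing logic is not described here:

```python
def msu_day_feature_split(total_msu_days: int) -> dict:
    """Decompose an On/Off CoD MSU-day total into the pricing features
    #9919 (10,000 MSU-days), #9918 (100 MSU-days), and #9917 (1 MSU-day).
    Greedy largest-tier-first split; an illustrative assumption only."""
    qty_9919, rest = divmod(total_msu_days, 10_000)
    qty_9918, qty_9917 = divmod(rest, 100)
    return {"#9919": qty_9919, "#9918": qty_9918, "#9917": qty_9917}

# 12,345 MSU-days -> 1 x 10,000 + 23 x 100 + 45 x 1
print(msu_day_feature_split(12_345))
```

The engine-day feature pairs that follow (IFL, ICF, zIIP, zAAP, SAP) use the same two-tier 1/100 pattern without a 10,000-day tier.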
(#9920) 1 IFL-day

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9921) 100 IFL-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 IFL-day (#9920)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9922) 1 ICF-day

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9923) 100 ICF-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 ICF-day (#9922)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9924) 1 zIIP-day

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) temporary zIIP resource tokens purchased through Resource Link.

  • Minimum number of features: 1.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9925) 100 zIIP-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) temporary zIIP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 zIIP-day (#9924)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9926) 1 zAAP-day

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) temporary zAAP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9927) 100 zAAP-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) temporary zAAP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 zAAP-day (#9926)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9928) 1 SAP-day

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) temporary SAP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 99.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: None.
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.
(#9929) 100 SAP-days

(No Longer Available as of June 30, 2013)

Pricing feature is used to facilitate the billing of On/Off Capacity on Demand (On/Off CoD) temporary SAP resource tokens purchased through Resource Link.

  • Minimum number of features: None.
  • Maximum number of features: 250.
  • Prerequisites: On-Line Capacity on Demand (#9900).
  • Corequisites: 1 SAP-day (#9928)
  • Compatibility Conflicts: None known.
  • Customer Setup: Not applicable.
  • Limitations: None.
  • Field Installable: Yes.

Feature exchanges

Not available.

Accessories

None.

Customer replacement parts

None.

Machine elements

Not available.

Supplies

None.

Supplemental media

None.

Trademarks

(R), (TM), * Trademark or registered trademark of International Business Machines Corporation.

** Company, product, or service name may be a trademark or service mark of others.

UNIX is a registered trademark in the United States and other countries licensed exclusively through X/Open Company Limited.
 © IBM Corporation 2017.