PCIe LP 4 Gb 2-Port Fibre Channel Adapter (FC EL09; CCIN 5774)

Learn about the specifications and operating system requirements for the feature code (FC) EL09 adapter.

Overview

The PCIe LP 4 Gb 2-Port Fibre Channel adapter is a 64-bit, short form factor, x4 PCIe adapter with an LC-type external fiber connector that provides single initiator capability over an optical fiber link or loop. The adapter automatically negotiates the highest data rate that both it and the attached device or switch support: 1 Gbps, 2 Gbps, or 4 Gbps. The distance between the adapter and an attaching device or switch can be up to 500 meters at the 1 Gbps data rate, up to 300 meters at the 2 Gbps data rate, and up to 150 meters at the 4 Gbps data rate. When used with IBM® Fibre Channel storage switches that support longwave optics, the adapter can reach distances of up to 10 kilometers at the 1 Gbps, 2 Gbps, or 4 Gbps data rates.

The adapter can be used to attach devices either directly or through Fibre Channel switches. If you are attaching a device or switch that has an SC-type fiber connector, you must use an LC-SC 50 micron fiber converter cable (FC 2456) or an LC-SC 62.5 micron fiber converter cable (FC 2459).
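
The Cables entry under Specifications lists the same distance limits by fiber grade. As an informal illustration only, the following Python sketch collects those published limits into a small lookup; the dictionary and the max_distance_m helper are hypothetical names, not part of any IBM tool.

    # Distance limits published in this document, keyed by fiber grade and
    # link rate. Illustrative helper only; not an IBM tool.
    MAX_DISTANCE_M = {
        # (fiber grade, link rate in Gbps): maximum distance in meters
        ("50/125", 1.0625): 500,
        ("50/125", 2.125): 300,
        ("50/125", 4.25): 150,
        ("62.5/125", 1.0625): 300,
        ("62.5/125", 2.125): 150,
        ("62.5/125", 4.25): 70,
    }

    def max_distance_m(fiber: str, rate_gbps: float) -> int:
        """Return the documented maximum link distance for a fiber grade and rate."""
        try:
            return MAX_DISTANCE_M[(fiber, rate_gbps)]
        except KeyError:
            raise ValueError(f"No documented limit for {fiber} fiber at {rate_gbps} Gbps")

    print(max_distance_m("62.5/125", 4.25))   # 70

For example, a 4 Gbps link over 62.5/125 micron fiber is limited to 70 meters; longer runs need 50/125 micron fiber, a lower link rate, or longwave optics in a supporting switch.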

The adapter has the following features:
  • Compliant with the PCIe Base and Card Electromechanical (CEM) 1.0a specifications:
    • x1 and x4 lane link interface at 2.5 Gbps (auto-negotiated with the system)
    • Supports VC0 (1 Virtual Channel) and TC0 (1 Traffic Class)
    • Configuration and I/O memory read/write, completion, and message transactions
    • Support for 64-bit addressing
    • ECC error protection
    • Link CRC on all PCIe packets and message information
    • Large payload size: 2048 bytes for read and write
    • Large read request size: 4096 bytes
  • Compatible with 1, 2, and 4 Gb Fibre Channel interface:
    • Auto-negotiation between 1 Gb, 2 Gb, and 4 Gb link attachments
    • Support for all Fibre Channel topologies: point-to-point, arbitrated loop, and fabric
    • Support for Fibre Channel class 2 and 3
    • Maximum Fibre Channel throughput achieved by using full duplex hardware support
  • End-to-end data path parity and CRC protection, including internal data path RAMs
  • Architectural support for multiple upper layer protocols
  • Internal high-speed SRAM memory
  • ECC protection of local memory, including single-bit correction and double-bit protection
  • Embedded shortwave optical connection with diagnostics capability
  • Onboard Context Management by firmware (per port):
    • Up to 510 FC Port Logins
    • Up to 2047 concurrent Exchanges
    • I/O multiplexing down to the FC Frame level
  • Data buffers capable of supporting 64+ buffer-to-buffer (BB) credits per port for shortwave applications
  • Link management and recovery handled by firmware
  • Onboard diagnostic capability accessible by optional connection
  • Parts and construction compliant with the European Union Directive of Restriction of Hazardous Substances (RoHS)
  • Performance up to 4.25 Gbps full duplex

The following figure shows the adapter.

Figure 1. EL09 adapter

Specifications

  • Adapter FRU number: 000E0807, 000E0904 (the 000E0904 FRU is designed to comply with the RoHS requirement)
  • Wrap plug FRU number: 12R9314
  • I/O bus architecture: PCIe Base and Card Electromechanical (CEM) 1.0a; x4 PCIe bus interface
  • Slot requirement: One available PCIe x4, x8, or x16 slot
  • Voltage: 3.3 V
  • Form factor: Short, low-profile
  • FC compatibility: 1, 2, and 4 Gigabit
  • Cables:
    • 50/125 micron fiber (500 MHz*km bandwidth cable)
      • 1.0625 Gbps: 0.5 – 500 m
      • 2.125 Gbps: 0.5 – 300 m
      • 4.25 Gbps: 0.5 – 150 m
    • 62.5/125 micron fiber (200 MHz*km bandwidth cable)
      • 1.0625 Gbps: 0.5 – 300 m
      • 2.125 Gbps: 0.5 – 150 m
      • 4.25 Gbps: 0.5 – 70 m
  • Maximum number: For details about the maximum number of adapters that are supported, and about slot priorities and placement rules, see PCIe adapter placement rules and slot priorities and select the system you are working on.

Operating system or partition requirements

If you are installing a new feature, ensure that you have the software that is required to support the new feature and determine whether any prerequisites must be met for this feature and for attaching devices. To check for the prerequisites, see the IBM Prerequisite website.

The adapter is supported on the following versions of the operating systems:
  • Linux
    • Red Hat Enterprise Linux Version 7, or later, with current maintenance updates available from Red Hat.
    • Red Hat Enterprise Linux Version 6.5, or later, with current maintenance updates available from Red Hat.
    • SUSE Linux Enterprise Server 11, Service Pack 3, or later, with current maintenance updates available from SUSE.
    • For support details, see the Linux Alert website.
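
On Linux, Fibre Channel host adapters that register with the kernel's FC transport class expose per-port attributes under /sys/class/fc_host. As a minimal sketch only (it assumes the standard fc_host attribute names, such as speed, port_state, and port_name, are exposed for this adapter's driver), the following Python snippet prints the negotiated link speed and port state for each FC host port:

    import glob
    import os

    def read_attr(host_dir: str, attr: str) -> str:
        """Read one fc_host sysfs attribute; return 'unknown' if it is absent."""
        try:
            with open(os.path.join(host_dir, attr)) as f:
                return f.read().strip()
        except OSError:
            return "unknown"

    # List each Fibre Channel host port with its negotiated speed, port state,
    # and worldwide port name, as reported by the Linux FC transport class.
    for host_dir in sorted(glob.glob("/sys/class/fc_host/host*")):
        host = os.path.basename(host_dir)
        print(f"{host}: speed={read_attr(host_dir, 'speed')}, "
              f"state={read_attr(host_dir, 'port_state')}, "
              f"wwpn={read_attr(host_dir, 'port_name')}")

On a healthy link, the reported speed should correspond to one of the rates that the adapter auto-negotiates (1, 2, or 4 Gbps).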

Adapter LED states

Green and yellow LEDs can be seen through openings in the mounting bracket of the adapter. The green LED indicates firmware operation, and the yellow LED signifies port activity. Table 1 summarizes the normal LED states. The yellow LED flashes in groups of 1, 2, or 3 fast flashes, with a 1 Hz pause (LED off) between groups. Observe the LED sequence for several seconds to ensure that you correctly identify the state.

Table 1. Normal LED states
Green LED | Yellow LED     | State
On        | 1 fast flash   | 1 Gbps link rate - normal, link active
On        | 2 fast flashes | 2 Gbps link rate - normal, link active
On        | 3 fast flashes | 4 Gbps link rate - normal, link active

Power-On Self Test (POST) conditions and results are summarized in Table 2. Use these conditions to identify abnormal states or problems, and follow the action that is listed for each condition.

Table 2. POST conditions and results
Green LED  | Yellow LED | State                                        | Action to be taken
Off        | Off        | Wake-up failure (dead board)                 | Perform AIX®, IBM i, or Linux operating system diagnostics procedure.
Off        | On         | POST failure (dead board)                    | Perform AIX, IBM i, or Linux operating system diagnostics procedure.
Off        | Slow flash | Wake-up failure monitor                      | Perform AIX, IBM i, or Linux operating system diagnostics procedure.
Off        | Fast flash | POST failure                                 | Perform AIX, IBM i, or Linux operating system diagnostics procedure.
Off        | Flashing   | POST processing in progress                  | None
On         | Off        | Failure while functioning                    | Perform AIX, IBM i, or Linux operating system diagnostics procedure.
On         | On         | Failure while functioning                    | Perform AIX, IBM i, or Linux operating system diagnostics procedure.
Slow flash | Slow flash | Offline for download                         | None
Slow flash | Fast flash | Restricted offline mode, waiting for restart | None
Slow flash | Flashing   | Restricted offline mode, test active         | None
Fast flash | Off        | Debug monitor in restricted mode             | None
Fast flash | On         | Not defined                                  | None
Fast flash | Slow flash | Debug monitor in test fixture mode           | None
Fast flash | Fast flash | Debug monitor in remote debug mode           | None
Fast flash | Flashing   | Not defined                                  | None
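
If it is convenient to keep both tables in one place while troubleshooting, the following Python sketch is an informal transcription of Tables 1 and 2 into a lookup: given the observed green and yellow LED behavior, it returns the state and action that this document lists. The dictionary and the decode_leds helper are illustrative names only, not an IBM tool.

    # Informal transcription of Tables 1 and 2: map the observed (green, yellow)
    # LED behavior to the documented state and action. Illustrative helper only.
    DIAG = "Perform AIX, IBM i, or Linux operating system diagnostics procedure."
    LED_STATES = {
        ("on", "1 fast flash"): ("1 Gbps link rate - normal, link active", "None"),
        ("on", "2 fast flashes"): ("2 Gbps link rate - normal, link active", "None"),
        ("on", "3 fast flashes"): ("4 Gbps link rate - normal, link active", "None"),
        ("off", "off"): ("Wake-up failure (dead board)", DIAG),
        ("off", "on"): ("POST failure (dead board)", DIAG),
        ("off", "slow flash"): ("Wake-up failure monitor", DIAG),
        ("off", "fast flash"): ("POST failure", DIAG),
        ("off", "flashing"): ("POST processing in progress", "None"),
        ("on", "off"): ("Failure while functioning", DIAG),
        ("on", "on"): ("Failure while functioning", DIAG),
        ("slow flash", "slow flash"): ("Offline for download", "None"),
        ("slow flash", "fast flash"): ("Restricted offline mode, waiting for restart", "None"),
        ("slow flash", "flashing"): ("Restricted offline mode, test active", "None"),
        ("fast flash", "off"): ("Debug monitor in restricted mode", "None"),
        ("fast flash", "on"): ("Not defined", "None"),
        ("fast flash", "slow flash"): ("Debug monitor in test fixture mode", "None"),
        ("fast flash", "fast flash"): ("Debug monitor in remote debug mode", "None"),
        ("fast flash", "flashing"): ("Not defined", "None"),
    }

    def decode_leds(green: str, yellow: str) -> tuple:
        """Return (state, action) for an observed LED combination."""
        return LED_STATES.get(
            (green.lower(), yellow.lower()),
            ("Unknown combination", "Observe the LED sequence for several seconds and retry."))

    print(decode_leds("On", "3 fast flashes"))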

Device ID jumper

The two device ID jumpers, labeled P0_JX and P1_JX, are set by default on pins 1 and 2, as shown in Figure 2. Do not change the jumper settings for a standard installation.

Figure 2. Device ID jumper

Replacing hot swap HBAs

Fibre Channel host bus adapters (HBAs) connected to a fiber array storage technology (FAStT) or DS4000® storage subsystem have a child device that is called a disk array router (dar). You must unconfigure the disk array router before you can hot swap an HBA that is connected to a FAStT or DS4000 storage subsystem. For instructions, see Replacing hot swap HBAs in the IBM System Storage® DS4000 Storage Manager Version 9, Installation and Support Guide for AIX, HP-UX, Solaris, and Linux on Power Systems™ Servers, order number GC26-7848.




Last updated: Thu, June 27, 2019