Remote Direct Memory Access over Converged Ethernet

Remote Direct Memory Access (RDMA) enables a host to make a subset of its memory directly available to a remote host. After RDMA connectivity is established between two hosts, either host can write to the memory of the remote host with no involvement from the remote host processors or operating system. RDMA enables efficient communication between the hosts because all the low-level functions are managed by RDMA network interface cards (RNICs) that are connected to each host, rather than by the software stack as is normally done for TCP/IP communications.
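
To make the one-sided nature of an RDMA write concrete, the following sketch uses the Linux libibverbs API. This is not the z/OS interface (on z/OS the equivalent processing is handled internally by VTAM and the TCP/IP stack under SMC-R); it assumes a reliable connected queue pair qp has already been created and connected, and that the remote host registered a buffer and exchanged its virtual address and rkey out of band. All names and values are illustrative.

    /* Minimal sketch of a one-sided RDMA write with libibverbs.
     * Assumes qp is already connected and remote_addr/rkey were
     * exchanged out of band. */
    #include <stdint.h>
    #include <infiniband/verbs.h>

    int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                           struct ibv_cq *cq,
                           uint64_t remote_addr, uint32_t rkey)
    {
        static char buf[4096] = "written directly into remote memory";

        /* Register the local buffer so the RNIC can read it directly. */
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, sizeof(buf),
                                       IBV_ACCESS_LOCAL_WRITE);
        if (!mr)
            return -1;

        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = sizeof(buf),
            .lkey   = mr->lkey,
        };

        /* IBV_WR_RDMA_WRITE places the data into remote memory with
         * no involvement from the remote CPU or operating system. */
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,
            .send_flags = IBV_SEND_SIGNALED,
            .wr.rdma    = {
                .remote_addr = remote_addr,
                .rkey        = rkey,
            },
        };

        struct ibv_send_wr *bad_wr = NULL;
        if (ibv_post_send(qp, &wr, &bad_wr))
            return -1;

        /* Poll the local completion queue. */
        struct ibv_wc wc;
        int n;
        while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
            ;
        ibv_dereg_mr(mr);
        return (n == 1 && wc.status == IBV_WC_SUCCESS) ? 0 : -1;
    }

Note that the completion confirms only that the local RNIC finished the transfer; no software runs on the remote host, which is the property that makes RDMA writes inexpensive compared to TCP/IP processing.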

RDMA was traditionally confined to high-performance computing (HPC) environments, where the cost of maintaining RDMA-capable network fabrics such as InfiniBand was justified given the emphasis on performance over cost. Now that RDMA is available on Ethernet fabrics through standards such as RDMA over Converged Ethernet (RoCE), the cost of adopting RDMA is lower because it can be enabled on the existing Ethernet fabrics that are used for IP network communications. Standard Ethernet management techniques are used to configure the RNIC adapters.

z/OS® Communications Server provides support for sockets over RDMA by using SMC-R protocols. VTAM® device drivers use Peripheral Component Interconnect Express® (PCIe) operations to manage IBM® 10GbE RoCE Express features that are defined to z/OS. Up to 16 10GbE RoCE Express PFID values can be defined to a z/OS TCP/IP stack.
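
As an illustration, RoCE Express PFIDs are associated with a stack through the SMCR parameter of the GLOBALCONFIG statement in the TCP/IP profile. The PFID and port values in this excerpt are placeholders, not a recommended configuration:

    ; Sample GLOBALCONFIG excerpt from PROFILE.TCPIP
    ; (PFID and PORTNUM values are illustrative only)
    GLOBALCONFIG SMCR
       PFID 0018 PORTNUM 1     ; first 10GbE RoCE Express feature
       PFID 0019 PORTNUM 2     ; second feature, second physical port

Each PFID named here must match a PCIe function that is defined to the system in the hardware I/O configuration; the stack rejects PFIDs beyond the 16-value limit described above.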