Installation prerequisites for DB2 pureScale Feature (Linux)
Ensure that you have created your DB2 pureScale Feature installation plan. Your installation plan helps ensure that your system meets the prerequisites and that you have performed the preinstallation tasks. This topic details the software prerequisites (including operating system, GPFS™, and Tivoli® SA MP requirements), the storage hardware requirements, the network prerequisites, and the hardware and firmware prerequisites.
Software prerequisites
In DB2 Cancun Release 10.5.0.4 and later fix packs, the DB2 pureScale Feature supports Linux virtual machines.
| Linux distribution | Kernel version level | Required packages | OpenFabrics Enterprise Distribution (OFED) package |
|---|---|---|---|
| Red Hat Enterprise Linux (RHEL) 5.9 | 2.6.18-348.el5 | libstdc++ (both 32-bit and 64-bit libraries) | To install OFED on RHEL 5.9 and higher, run a group installation of "OpenFabrics Enterprise Distribution". |
| Red Hat Enterprise Linux (RHEL) 6.1 through 6.5 (RHEL 6.6 is not supported) | 2.6.32-131.0.15.el6 | For InfiniBand network type (both 32-bit and 64-bit libraries unless specified): libstdc++-4.4.5-6.el6.x86_64 | For InfiniBand network type, run a group installation of the "InfiniBand Support" package. For RoCE network type, subscribe to the Red Hat High Performance Network, then run a group installation of the "InfiniBand Support" package. This automatically installs the "RHEL server High Performance Networking" package, which is mandatory for RDMA over Ethernet support on a RoCE network. |
| SUSE Linux Enterprise Server (SLES) 10 Service Pack (SP) 4 | 2.6.16.60-0.85.1-smp | libstdc++ (both 32-bit and 64-bit libraries) | For SLES 10 SP4, you must install the OFED packages from the maintenance repository, together with the following additional packages that OFED depends on: ofed. For more information about installing OFED, see Configuring the network settings of hosts for a DB2 pureScale environment on an InfiniBand network (Linux). |
| SUSE Linux Enterprise Server (SLES) 11 Service Pack (SP) 2 | 3.0.13-0.27 (SP2) | libstdc++ (both 32-bit and 64-bit libraries) | The minimum level required for the OFED packages is 1.5.2. For SLES 11 SP2 and later service packs, you must install the OFED packages from the maintenance repository, together with additional packages that OFED depends on. For more information about installing OFED on SLES 11, see Configuring the network settings of hosts for a DB2 pureScale environment on an InfiniBand network (Linux). |
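To confirm that a host meets the kernel and package levels in this table before you start the installation, you can compare the output of uname and rpm against the levels listed for your distribution. The following is a minimal sketch; it assumes only the standard rpm query tools, which are available on both RHEL and SLES:

# Report the running kernel level (compare with the "Kernel version level" column)
uname -r
# List the installed libstdc++ packages with their architecture, to confirm
# that both the 32-bit and 64-bit libraries are present
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' libstdc++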
- On Red Hat Linux:
  - For single communication adapter ports at CFs on an InfiniBand network, the minimum supported level is RHEL 5.9.
  - For multiple communication adapter ports on an InfiniBand network, and for single or multiple communication adapter ports at CFs on a RoCE network, the minimum supported level is RHEL 6.1.
- i686 (32-bit) packages might not be installed by default when installing on an x86_64 server. Make sure that all of the 32-bit dependencies are explicitly installed, for example: libstdc++-4.4.5-6.el6.i686, pam-1.1.1-8.el6.i686, pam_krb5-2.3.11-6.el6.i686, pam-devel-1.1.1-8.el6.i686, pam_pkcs11-0.6.2-11.1.el6.i686, pam_ldap-185-8.el6.i686 (on RHEL 5.9, the extension is .i386). Alternatively, run the following yum command after creating a source from the local DVD or after registering with RHN (see also the sketch that follows these notes):
  yum install *.i686
- On RHEL 6.5, the librdmacm and ibacm packages must be at, or higher than, librdmacm-1.0.18.1-1.el6.x86_64.rpm and ibacm-1.0.9-0.git49af5a8.el6.x86_64.rpm, respectively. The levels of these packages installed as part of the "InfiniBand Support" group package might be lower than the required levels.
- On SLES 10 Service Pack 4, the minimum supported kernel version level is the default kernel (2.6.16.60-0.85.1-smp).
- In some installations, if the Intel TCO WatchDog Timer Driver modules are loaded by default, they should be blacklisted so that they do not start automatically or conflict with RSCT. To blacklist the modules, edit the following files (a combined sketch follows these notes):
  - To verify whether the modules are loaded, run:
    lsmod | grep -i iTCO_wdt; lsmod | grep -i iTCO_vendor_support
  - Edit the configuration files:
    - On RHEL 5.9 and RHEL 6.1, edit the file /etc/modprobe.d/blacklist.conf and add:
      # RSCT hatsd
      blacklist iTCO_wdt
      blacklist iTCO_vendor_support
    - On SLES, edit the file /etc/modprobe.d/blacklist and add:
      blacklist iTCO_wdt
      blacklist iTCO_vendor_support
- GPFS:
  - On Version 10.5 Fix Pack 8 and later fix packs, if you have IBM General Parallel File System (GPFS) already installed, it must be GPFS 4.1.1.4 efix 14.
  - On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have GPFS already installed, it must be GPFS 3.5.0.17. The installation of the DB2 pureScale Feature performs the update to the required level automatically.
  - On Version 10.5 Fix Pack 3 and earlier fix packs, if you have GPFS already installed, it must be GPFS 3.5.0.7.
- Tivoli SA MP:
  - On DB2 Cancun Release 10.5.0.4 and later fix packs, if you have IBM Tivoli System Automation for Multiplatforms (Tivoli SA MP) already installed, it must be Tivoli SA MP 3.2.2.8. The installation of the DB2 pureScale Feature upgrades existing Tivoli SA MP installations to this version level.
  - On Version 10.5 Fix Pack 3 and earlier fix packs, if you have Tivoli SA MP already installed, it must be Tivoli SA MP Version 3.2.2.5.
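Several of the preceding notes can be checked or applied directly from a shell. The following is a minimal sketch for RHEL 6 that combines the 32-bit dependency installation, the RHEL 6.5 package-level check, and the watchdog module blacklisting; the package names and file paths are the ones given in the notes, and the exact versions on your system may differ:

# Install the 32-bit dependencies explicitly (on RHEL 5.9, the packages use the .i386 extension)
yum install libstdc++-4.4.5-6.el6.i686 pam-1.1.1-8.el6.i686 pam_krb5-2.3.11-6.el6.i686 pam-devel-1.1.1-8.el6.i686 pam_pkcs11-0.6.2-11.1.el6.i686 pam_ldap-185-8.el6.i686
# On RHEL 6.5, confirm that librdmacm and ibacm are at or above the required levels
rpm -q librdmacm ibacm
# Check whether the Intel TCO WatchDog Timer Driver modules are loaded
lsmod | grep -i iTCO_wdt; lsmod | grep -i iTCO_vendor_support
# If they are loaded, blacklist them (RHEL path shown; on SLES, append to /etc/modprobe.d/blacklist instead)
echo "# RSCT hatsd" >> /etc/modprobe.d/blacklist.conf
echo "blacklist iTCO_wdt" >> /etc/modprobe.d/blacklist.conf
echo "blacklist iTCO_vendor_support" >> /etc/modprobe.d/blacklist.conf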
Storage hardware requirements
| | Recommended free disk space | Minimum required free disk space |
|---|---|---|
| Disk to extract installation | 3 GB | 3 GB |
| Installation path | 6 GB | 6 GB |
| /tmp directory | 5 GB | 2 GB |
| /var directory | 5 GB | 2 GB |
| /usr directory | 2 GB | 512 MB |
| Instance home directory | 5 GB | 1.5 GB¹ |
- The disk space that is required for the instance home directory is calculated at run time and varies. Approximately 1 to 1.5 GB is normally required.
- Instance shared files: 10 GB¹
- Data: dependent on your specific application needs
- Logs: dependent on the expected number of transactions and the applications' logging requirements
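To check the free space that is currently available for the paths in this table, you can run the df command against them. A minimal sketch; the instance home path shown is only an example, so substitute the home directory of your instance owner:

# Free space, in GB, for the directories listed in the table
df -BG /tmp /var /usr
# Free space for the instance home directory (example path for an instance owner named db2sdin1)
df -BG /home/db2sdin1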
Network prerequisites
On a TCP/IP protocol over Ethernet (TCP/IP) network, a DB2 pureScale environment requires only one high speed network for the DB2 cluster interconnect. Running your DB2 pureScale environment on a TCP/IP network can provide a faster setup for testing the technology. However, for the most demanding write-intensive data sharing workloads, an RDMA protocol over Converged Ethernet (RoCE) network can offer better performance.
InfiniBand (IB) networks and RoCE networks using RDMA protocol require two networks: one (public) Ethernet network and one (private) high speed communication network for communication between members and CFs. The high speed communication network must be an IB network, a RoCE network, or a TCP/IP network. A mixture of these high speed communication networks is not supported.
The rest of this network prerequisites section applies to using RDMA protocol.
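Before you configure an RDMA-based interconnect, it can be useful to confirm that each host actually detects its InfiniBand or RoCE adapters. A minimal sketch, assuming the OFED diagnostic utilities described in the software prerequisites are installed:

# List RDMA-capable devices, their firmware, and their port state
ibv_devinfo
# For InfiniBand HCAs, show the port state and link details
ibstat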
| Communication adapter type | Switch | IBM Validated Switch | Cabling |
|---|---|---|---|
| InfiniBand (IB) | QDR IB | Mellanox part number MIS5030Q-1SFC; Mellanox 6036SX (IBM part number: 0724016 or 0724022) | QSFP cables |
| 10 Gigabit Ethernet (10GE) | 10GE | | Small Form-factor Pluggable Plus (SFP+) cables |
- DB2 pureScale environments with Linux systems and InfiniBand communication adapters require FabricIT EFM switch-based fabric management software. For communication adapter port support on CF servers, the minimum required fabric manager software image that must be installed on the switch is image-PPC_M405EX-EFM_1.1.2500.img. The switch might not support a direct upgrade path to the minimum version, in which case multiple upgrades are required. For instructions on upgrading the fabric manager software on a specific Mellanox switch, see the Mellanox website: http://www.mellanox.com/content/pages.php?pg=ib_fabricit_efm_management&menu_section=55. Enabling the subnet manager (SM) on the switch is mandatory for InfiniBand networks. To create a DB2 pureScale environment with multiple switches, you must have multiple communication adapter ports on CF servers and configure switch failover on the switches. To support switch failover, see the Mellanox website for instructions on setting up the subnet manager for a high availability domain.
- Cable considerations:
  - On InfiniBand networks: The QSFP 4 x 4 QDR cables are used to connect hosts to the switch, and for inter-switch links. If you use two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined by taking half of the total communication adapter ports connected from CFs and members to the switches. For example, in a two-switch DB2 pureScale environment where the primary and secondary CF each have four communication adapter ports, and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4)/2). A sketch of this calculation follows these notes.
  - On a RoCE network, the maximum number of ISLs can be further limited by the number of ports supported by the Link Aggregation Control Protocol (LACP), which is one of the setup steps required for switch failover. As this value can differ among switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch with Blade OS 6.3.2.0 has a limit of 8 ports in each LACP trunk between the two switches, effectively capping the maximum number of ISLs at four (4 ports on each switch).
- In general, any 10GE switch that supports global pause flow control, as specified by IEEE 802.3x, is also supported. However, the exact setup instructions might differ from what is documented in the switch section, which is based on the IBM validated switches. Refer to the switch user manual for details.
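The inter-switch link calculation in the cable considerations is simple enough to script. A minimal sketch of the arithmetic, using the port counts from the example above (adjust the variables to your own topology):

# Maximum ISLs = (total CF adapter ports + total member adapter ports) / 2
cf_ports=8        # two CFs, each with four communication adapter ports
member_ports=4    # four members, each with one communication adapter port
echo $(( (cf_ports + member_ports) / 2 ))    # prints 6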
| Communication adapter type | Switch | Cabling |
|---|---|---|
| InfiniBand (IB) | Voltaire 40 Gb InfiniBand Switch¹, for example part number 46M6005 | QSFP cables² |
| 10 Gigabit Ethernet (10GE)³ | BNT Virtual Fabric 10 Gb Switch Module for IBM BladeCenter, for example part number 46C7191 | |
- To create a DB2 pureScale environment with multiple switches, set up multiple communication adapter ports for the CF hosts.
- Cable considerations:
  - On InfiniBand networks: The QSFP 4 x 4 QDR cables are used to connect hosts to the switch, and for inter-switch links. If you use two switches, two or more inter-switch links are required. The maximum number of inter-switch links required can be determined by taking half of the total communication adapter ports connected from CFs and members to the switches. For example, in a two-switch DB2 pureScale environment where the primary and secondary CF each have four communication adapter ports, and there are four members, the maximum number of inter-switch links required is 6 (6 = (2 * 4 + 4)/2).
  - On a 10GE network, the maximum number of ISLs can be further limited by the number of ports supported by the Link Aggregation Control Protocol (LACP), which is one of the setup steps required for switch failover. As this value can differ among switch vendors, refer to the switch manual for any such limitation. For example, the Blade Network Technologies G8124 24-port switch with Blade OS 6.3.2.0 has a limit of 8 ports in each LACP trunk between the two switches, effectively capping the maximum number of ISLs at four (4 ports on each switch).
- For more information about using DB2 pureScale Feature with application cluster transparency in BladeCenter, see this developerWorks® article: http://www.ibm.com/developerworks/data/library/techarticle/dm-1110purescalebladecenter/.
Hardware and firmware prerequisites
In DB2 Cancun Release 10.5.0.4 and later fix packs, the DB2 pureScale Feature is supported on any rack mounted server or blade server that uses one of the following network adapters:
- Mellanox ConnectX-2 generation card supporting RDMA over converged Ethernet (RoCE) or InfiniBand
- Mellanox ConnectX-3 generation card supporting RDMA over converged Ethernet (RoCE) or InfiniBand
- Mellanox ConnectX-2 generation card supporting RDMA over converged Ethernet (RoCE)
- Mellanox ConnectX-3 generation card supporting RDMA over converged Ethernet (RoCE)
- Mellanox ConnectX-2 Dual Port 10GbE Adapter for IBM System x (81Y9990)
- Mellanox ConnectX-2 Dual-port QSFP QDR IB Adapter for IBM System x (95Y3750)
- Mellanox ConnectX-3 FDR VPI IB/E Adapter for IBM System x (00D9550)
- Mellanox ConnectX-3 10 GbE Adapter for IBM System x (00D9690)
| Server | 10 Gigabit Ethernet (10GE) adapter | Minimum 10GE network adapter firmware version | InfiniBand (IB) Host Channel Adapter (HCA) | Minimum IB HCA firmware version |
|---|---|---|---|---|
| BladeCenter HS22 System x blades | Mellanox 2-port 10 Gb Ethernet Expansion Card with RoCE, for example part number 90Y3570 | 2.9.1000 | 2-port 40 Gb InfiniBand Card (CFFh), for example part number 46M6001 | 2.9.1000 |
| BladeCenter HS23 System x blades | Mellanox 2-port 10 Gb Ethernet Expansion Card (CFFh) with RoCE, part number 90Y3570 | 2.9.1000 | 2-port 40 Gb InfiniBand Expansion Card (CFFh), part number 46M6001 | 2.9.1000 |
| KVM Virtual Machine | Mellanox ConnectX-2 EN 10 Gb Ethernet Adapters with RoCE | 2.9.1200 | Not supported | N/A |
| IBM Flex System x240 Compute Node; IBM Flex System x440 Compute Node | IBM Flex System® EN4132 2-port 10Gb RoCE Adapter | 2.10.2324 + uEFI Fix 4.0.320 | Not supported | N/A |
- Install the latest supported firmware for your System x server from http://www.ibm.com/support/us/en/.
- KVM-hosted environments for a DB2 pureScale Feature are supported on rack-mounted servers only.
- Availability of specific hardware or firmware can vary over time and region. Check availability with your supplier.
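To compare the firmware on an installed adapter against the minimum levels in the preceding table, you can query the adapter from the operating system. A minimal sketch; the interface name eth0 is only an example, so substitute the device that is connected to your cluster interconnect:

# Firmware version of an Ethernet (RoCE) adapter: check the firmware-version field
ethtool -i eth0
# Firmware version of an InfiniBand HCA: check the fw_ver field
ibv_devinfo | grep -i fw_ver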