Cardinal Research Cluster takes flight at University of Louisville

With a high-performance IBM System x iDataPlex solution

Published on 28-Mar-2012

Validated on 01 Oct 2013

"I wanted a high-performance computing cluster that would be used to the max, and that’s exactly what the university got with IBM." - Priscilla Hancock, Ph.D., vice president and chief information officer, University of Louisville

University of Louisville


Deployment country:
United States

Technical Computing, Collaborative Innovation, Empowering People, General Parallel File System (GPFS), High Availability

IBM Business Partner:
Sumavi, Inc.

The University of Louisville in Kentucky is a leading center for advanced research, using the power of its computing cluster to help develop new cancer drugs, find effective treatments for spinal cord injuries and develop new solar cell materials.

Business need:
The University of Louisville needed a flexible and powerful high-performance computing solution to give researchers the tools they need to conduct advanced studies in areas such as new types of cancer medicines.

Solution:
The school implemented 204 IBM System x® iDataPlex® dx360 class servers with Intel® Xeon® processors, IBM System Networking switches, Mellanox switches and four IBM System Storage® DS3500 Express devices.

Benefits:
The IBM solution doubles the capacity of the existing supercomputer cluster, delivers peak speeds of 40 teraflops and enables advanced research and future innovation.

Case Study

The University of Louisville in Kentucky is a leading center for advanced research, using the power of its computing cluster to help develop new cancer drugs, find effective treatments for spinal cord injuries and develop new solar cell materials.

Researchers in fields such as bioinformatics, engineering and computational chemistry, however, did not always have a centralized high-performance computing cluster to facilitate their work, says Priscilla Hancock, Ph.D., vice president and chief information officer at the University of Louisville.

“When I arrived there wasn’t any central computing research capability,” she explains. “What I sometimes say is that before our Cardinal Research Cluster, all we offered was email.”

Not having a supercomputing resource meant academic departments had to build or find their own computing nodes, explains Mike Dyre, director of research computing at the University of Louisville.

“A researcher who didn’t have funding basically had nothing but a desktop PC, or perhaps something in the department to use for doing research,” says Dyre. “We had a very uneven implementation to do research, and that’s why we established a centralized computer that’s fairly large and accessible to everyone.”

A supercomputer built on IBM solutions, in two phases

The university initially tapped federal grants to implement the school’s first centralized and secure high-performance computing center in 2009, says Hancock. Incorporating 312 IBM System x iDataPlex dx340 nodes, the first phase of what became known as the Cardinal Research Cluster delivered a much-needed dose of high-performance computing. And it didn’t take long to reach 100 percent utilization. “I wanted a high-performance computing cluster that would be used to the max, and that’s exactly what the university got with IBM,” says Hancock.

When additional funding came through for an expansion of the solution, a primary goal was to achieve greater efficiency and cost effectiveness through a simplified networking and storage strategy. In both phases, researchers examined solutions from IBM and others. “When they voted on the proposals, IBM was the unanimous choice not just once, but twice,” says Hancock. “Not only were all of the researchers unanimous in their decision, all of them were happy. That was pretty amazing.”

The second phase expansion of the Cardinal Research Cluster included 204 IBM System x iDataPlex dx360 class servers featuring Intel Xeon processors paired with IBM System Networking switches, Mellanox switches and four IBM System Storage DS3500 Express devices. IBM Business Partner Sumavi, Inc. assisted with systems integration along with IBM STG Lab Services and Training.

“The thing that drew us and the faculty to IBM was the variety of computing solutions they had,” says Dyre. “Other vendors wanted to sell us a cluster, but IBM was able to offer a complete package that also integrated the network and the storage as one entity.”

The resulting solution delivers 5,052 processing cores with two to four gigabytes of memory per core. Additional hardware includes IBM System x3650 class servers, which serve as head nodes for the System Storage DS3500 devices and their 500 terabytes of shared storage capacity, as well as 14 general-purpose computing on graphics processing units (GPGPU) nodes that give the cluster additional parallel processing capability.
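The figures above allow a quick back-of-the-envelope calculation of the cluster's aggregate memory. The exact per-node memory mix is not given in the case study, so the sketch below simply takes the stated two-to-four-gigabytes-per-core range at face value:

```python
# Back-of-the-envelope aggregate figures for the Cardinal Research Cluster,
# using only the numbers quoted in the case study.
cores = 5052                       # stated processing core count
mem_per_core_gb = (2, 4)           # stated range: 2-4 GB of memory per core
shared_storage_tb = 500            # stated DS3500 shared storage capacity

# Aggregate memory across the cluster, in terabytes (using 1 TB = 1024 GB)
mem_low_tb = cores * mem_per_core_gb[0] / 1024
mem_high_tb = cores * mem_per_core_gb[1] / 1024

print(f"Aggregate memory: {mem_low_tb:.1f}-{mem_high_tb:.1f} TB")
print(f"Shared storage:   {shared_storage_tb} TB")
```

In other words, the cluster holds on the order of 10 to 20 terabytes of memory across its cores, alongside the 500 terabytes of shared storage.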

On the software side, the Cardinal Research Cluster uses Red Hat Linux and a range of specialized applications such as the MATLAB numerical computing environment and the BLAST algorithm for comparing primary biological sequence information. Storage across the data center is orchestrated with IBM General Parallel File System (GPFS™).

Powerful performance for diverse computing needs

One of the most significant benefits of the solution is its ability to handle both massively parallel processing jobs, which involve rapid calculation, and high-throughput jobs, where a large amount of addressable memory is most critical, says Hancock.

In benchmark tests, the cluster has achieved peak speeds of 40 teraflops—40 trillion floating point operations per second, says Dyre. The new switches from IBM and Mellanox, meanwhile, have virtually eliminated any previous networking bottlenecks in the system.
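The 40-teraflop figure can be sanity-checked against the cluster's core count. The processor clock speed and per-core throughput are not stated in the case study, so the values below are illustrative assumptions for Intel Xeon processors of that era (two-wide SSE vector units with separate add and multiply pipelines, giving four double-precision floating-point operations per cycle):

```python
# Sanity check: theoretical peak = cores x clock rate x flops per cycle.
# Clock rate and flops/cycle are NOT given in the case study; these are
# illustrative assumptions for Intel Xeon processors of the early 2010s.
cores = 5052              # stated in the case study
clock_hz = 2.0e9          # assumed 2.0 GHz clock rate
flops_per_cycle = 4       # assumed: 2-wide SSE, separate add and multiply units

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"Theoretical peak: {peak_tflops:.1f} TFLOPS")
```

Under those assumptions the theoretical peak comes out at roughly 40 teraflops, consistent with the benchmark figure quoted above.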

This has in turn freed researchers to submit increasingly complex jobs. “We’ve more than doubled our capacity as far as the number of cores and we have more than tripled the storage capacity,” says Dyre. “And what’s great is that it’s being fully utilized. When we talk about people submitting thousands of jobs, they literally submit 10,000 jobs at once and so the cluster runs all the time. We're not having any trouble finding any people to use it.”

IBM gave the University of Louisville a Shared University Research (SUR) Award to help further its research efforts. This award includes the donation of extra computing systems and gives the university access to IBM engineers who work closely with the university’s IT staff to get maximum performance from the supercomputer.

A partnership built on shared goals

Hancock says working with IBM has allowed her to provide university researchers with everything they need to conduct important work such as discovering new types of cancer drugs and treatments. And even after the implementation was complete, IBM continues to be available.

“IBM gave us a very fair offer, they stayed with it from start to finish, and they’re still here,” says Hancock. “IBM wants to make sure it works for us. And even if something goes wrong, they have stayed at the table until everyone is satisfied.”

Dyre couldn’t agree more: “I think we're in a pretty good spot right now with IBM. We're happy with the product we have, we're happy with the service we're getting, and we’re happy with the reps we have. What more can you ask for?”

Products and services used

IBM products and services that were used in this case study.

Hardware: IBM System x iDataPlex dx360, IBM System x3650 M3, IBM System Storage DS3500 Express, IBM System Networking switches

Software: IBM General Parallel File System (GPFS)

Operating system: Red Hat Linux

Services: IBM STG Lab Services (System x, Storage, Other)

Legal Information

© Copyright IBM Corporation 2012. IBM Systems and Technology Group, Route 100, Somers, New York 10589. Produced in the United States of America, March 2012.

IBM, the IBM logo, iDataPlex and System x are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information”. Intel, the Intel logo, Xeon and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries. Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

This document is current as of the initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates. The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions. It is the user’s responsibility to evaluate and verify the operation of any other products or programs with IBM products and programs.

THE INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.