The University of Washington gains advantage

With a high-performance, flexible, scalable IBM server and storage solution

Published on 20-Feb-2013

"IBM has a great track record of commitment to its products. IBM has the depth of engineering understanding and flexibility that made it possible for us to deploy the HPC solution we wanted." - Chance Reschke, IT research consultant, University of Washington

Customer:
The University of Washington

Industry:
Education

Deployment country:
United States

Solution:
Technical Computing, General Parallel File System (GPFS)

Overview

Founded in 1861 in Seattle, the University of Washington is one of the largest public universities on the West Coast of the United States. Called UW for short, the school is a leading research institution and offers studies in a broad array of academic fields. As at many higher learning institutions, access to a centralized HPC cluster had been on the UW faculty wish list for years.

Business need:
The University of Washington (UW) needed a cost-effective, flexible, scalable, high-performance computing (HPC) solution for advanced research and to use as a recruiting tool for new faculty members.

Solution:
The UW HPC solution uses IBM® BladeCenter® HS22 and HS23 servers and IBM System x® 3650 class servers using intelligent Intel® Xeon® technology, and IBM System Storage® DS5300.

Benefits:
The solution gives UW a compelling recruitment tool and delivers powerful HPC resources to academic departments through a centrally supported infrastructure that has achieved 85 percent utilization.

Case Study

The demands of more research-heavy departments, pressure on data center infrastructure from loosely managed systems, and the need to improve faculty recruitment and retention helped UW determine that it was time to implement a competitive HPC cluster, says Chance Reschke, IT research consultant at UW.

Addressing diverse computing requirements

While science, technology, engineering and math fields have historically had the most demand for HPC solutions, Reschke and his colleagues felt it was important that the HPC offering be available to all academic departments.

“On one extreme, we have people like our Department of Ethnomusicology, which has a wealth of original, irreplaceable field-recording data that needs long-term preservation, but no real computation,” says Reschke. “On the other hand, our nuclear physicists were interested in modeling the process by which the atomic nucleus is constructed, an effort that requires immense computational power, data storage and network bandwidth.”

Reschke faced two main challenges. How do you design a university HPC system that can accommodate so many diverse needs? And how can it be built in a way that is accessible, affordable, flexible and scalable? Working with IBM, Reschke implemented a solution constructed from components known at the university as Hyak and lolo. This centrally supported HPC infrastructure was purpose-built for the faculty it serves.

“We didn't just arbitrarily build a system,” says Reschke. “We built a system that we felt reflected users’ actual needs.”

Driving demand with innovative condo model

Hyak, the name of UW’s HPC cluster, is a Chinook Jargon word meaning “hurry” or “fast.” Lolo, the name of the storage service, means “to carry” or “the whole load.” The combined solution was implemented using a process pioneered by UW called the condominium model.

“What we did is provide an infrastructure that doesn’t change much—the condo building—and invited faculty to populate that infrastructure with CPUs,” explains Reschke. “With our design, it’s much easier to take out just the CPUs after three years and replace them with new equipment while avoiding expensive reengineering overhead.”

The solution is being implemented in three phases to further smooth out costs and allow for new CPU upgrades approximately every three years. Reschke says the cluster currently performs at about 45 teraflops but is expected to exceed eight petaflops by the end of the third phase.

Building success with IBM solutions

Hyak is built around a BladeCenter H Chassis using BladeCenter HS22 and HS23 servers featuring intelligent Intel Xeon processors. In lolo, the solution uses System x3650 class servers and predominantly System Storage DS5300 devices.

IBM Tivoli® Storage Manager is layered on top of IBM General Parallel File System (GPFS™) as the solution’s archive file system, which currently stands at about 750 terabytes but is expected to exceed five petabytes within the next three years. The overall HPC solution runs Red Hat Enterprise Linux as the base operating system to support an array of custom applications used by academics, and Extreme Cloud Administration Toolkit (xCAT) for cluster management.

Because participating faculty have access to each other’s unused capacity, Hyak has achieved a remarkable 85 percent average utilization. Adds Reschke: “Utilization is key. The best-run laboratory-scale systems rarely exceed 60 percent utilization, and typically far less than that.”

Reschke says the decision to use BladeCenter as the cluster’s foundation was critical because it enabled UW to invest confidently in an infrastructure that was built to last. The willingness of IBM to work with UW on unsupported products—in this case the Myrinet-10G interconnects used for the overall cluster—was another key factor.

“IBM has a great track record of commitment to its products,” says Reschke. “IBM has the depth of engineering understanding and flexibility that made it possible for us to deploy the HPC solution we wanted.”

Delivering benefits across campus

The power of the solution is already paying dividends. UW has been able to attract and retain some of the world’s foremost authorities in fields such as nuclear physics and chemical engineering because the cluster provides these top researchers with the computing resources they require.

The UW chemical engineering department, for example, is using Hyak to unravel complex bioinformatics questions such as how methane and carbon dioxide cycle in and out of Earth’s atmosphere. Using custom software and 560 cores on Hyak, tens of millions of RNA sequences were analyzed in 20 days. Using only one core, the same task would have taken 30 years.
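The speedup quoted above is consistent with simple core-hour arithmetic. A quick sanity check, assuming the ideal linear scaling the comparison implies (the workload division and timings are taken directly from the figures in this article):

```python
# Sanity-check the scaling claim: 560 cores for 20 days vs. a single core.
# Assumes ideal linear scaling, as the article's comparison implies.
cores = 560
days_on_cluster = 20

core_days = cores * days_on_cluster      # total work expressed in core-days
years_on_one_core = core_days / 365.25   # the same work run on a single core

print(f"{core_days} core-days ≈ {years_on_one_core:.1f} years on one core")
# → 11200 core-days ≈ 30.7 years on one core
```

The result, roughly 30.7 years, matches the article's "30 years" figure, so the comparison reflects straightforward parallel scaling rather than any change in the underlying algorithm.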

With the success of Hyak and lolo spreading across campus through word of mouth, and the educational outreach efforts led by Reschke, even departments not traditionally associated with HPC are taking a closer look at the solution’s capabilities. A member of the communications department, for example, plans to use Hyak and lolo for in-depth analysis of social networks.

“By having these capable systems on campus, we attract users who had been struggling to find a place to satisfy their computational requirements,” says Reschke. “We're broadening the base and bringing in a much more diverse population of users than we ever would have before. Having a campus HPC resource has been valuable in building this community.”

And by working with IBM, the academic community at UW will continue to have the solutions they need. “With IBM, we have an engineering partner, not just a business partner,” says Reschke. “I think that’s unique among all of the vendors of high-performance computing solutions.”

For more information

Please contact your IBM representative or IBM Business Partner, or visit us at: ibm.com/systems/x/solutions

For more information about the University of Washington, visit: washington.edu

Products and services used

IBM products and services that were used in this case study.

Hardware:
BladeCenter H Chassis, BladeCenter HS22, BladeCenter HS23, System Storage DS5300, System x3650 M3

Software:
Tivoli Storage Manager, General Parallel File System, Linux

Operating system:
Linux

Legal Information

© Copyright IBM Corporation 2013

IBM Corporation
Systems and Technology Group
Route 100
Somers, New York 10589

Produced in the United States of America
February 2013

IBM, the IBM logo, ibm.com, BladeCenter, GPFS, System Storage, System x, and Tivoli are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at ibm.com/legal/copytrade.shtml

Intel, the Intel logo, Xeon and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.

This document is current as of the initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates. The client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.

THE INFORMATION IN THIS DOCUMENT IS PROVIDED “AS IS” WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.