The high performance computing visualization lab at Utah State University looks deceptively normal. The wall of flat-panel screens at the front, the dozen or so iMac workstations and the simple classroom setup don’t look so very different from the many student computer labs located across campus.
Looks, however, can be deceiving.
In the corner, two slightly larger workstations hum away, and their vents are putting out enough warm air to serve as a space heater. The two computers are Microway Opteron 24-core Tesla WhisperStations, and they’re working through a problem. Similarly, all the other workstations in the lab are specially set up with 10-gigabit connectivity to HPC’s true power: a room of servers down the hall.
The server room is a cool 60 degrees Fahrenheit, and it houses row after row of black boxes with green flashing lights. HPC clusters inhabit just two of these flashing rows, but this little lab is part of a research revolution.
“High performance computing is making some centuries-old problems solvable,” said Robert Spall, chair of USU’s high performance computing steering committee. “The things we are able to do now were thought to be impossible 30 years ago. Astrophysicists, for example, are striving to shed light on dark matter and dark energy that make up 95 percent of our universe. The research is impossible without sophisticated high performance computing power.”
At USU, HPC has opened the door to higher levels of genetic modeling, economic forecasting, fluid dynamics computation, climate modeling and forecasting, and other research problems dealing with massive amounts of data calculations and variables.
The idea behind it is the same as the old aphorism, “Many hands make light work.”
To better understand HPC’s power, it helps to imagine an automobile assembly line. At each assembly station a job is performed, some more complex than others, but all jobs essential to the end product. Many of those tasks can be performed simultaneously and independently. It usually doesn’t matter in what order the jobs are completed, just that they arrive at the final assembly point together to build the entire automobile.
In the same way, high performance computers are connected to form clusters. A cluster breaks a large model or equation into smaller tasks assigned to many different nodes. The smaller tasks are solved independently on the various nodes, and the partial results are combined at the end to arrive at a final solution.
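The divide-and-combine idea described above can be sketched in a few lines of Python. This is only an illustration of the principle, not HPC@USU’s actual software: the eight workers here stand in for the eight cores of a cluster node, and the task (summing a large range of numbers) is a deliberately simple stand-in for a real scientific computation.

```python
# Minimal sketch of parallel divide-and-combine, using Python's
# standard multiprocessing module. Worker count and the summing task
# are illustrative assumptions, not HPC@USU's real workload.
from multiprocessing import Pool


def partial_sum(bounds):
    """Each 'node' sums its own slice of the range independently."""
    start, end = bounds
    return sum(range(start, end))


def parallel_sum(n, workers=8):
    """Split 0..n-1 into chunks, solve each in parallel, then combine."""
    chunk = n // workers
    slices = [(i * chunk, (i + 1) * chunk if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, slices))


if __name__ == "__main__":
    # Same answer as sum(range(1_000_000)), but computed in eight pieces.
    print(parallel_sum(1_000_000))
```

Just as on the assembly line, the order in which the workers finish does not matter; only the final combination step depends on all of them.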
Utah State University developed high performance computing capacity in 2005 when a team of USU faculty researchers, led by PI and founding HPC@USU director Thomas Hauser, received an NSF Major Research Instrumentation grant. USU’s Vice President for Research Office helped support its growth and transitioned it to a research support center, available to any faculty or student researcher free of charge.
Currently, HPC@USU supports two clusters with 64 nodes each. On the Wasatch cluster (HPC@USU supercomputers are named after mountain ranges), each node contains two quad-core processors. That means each node can solve eight different problems, or eight different parts of the same problem, at once.
“Most desktop computers today have three or possibly four cores, so while they can perform some tasks in parallel, they don’t match the HPC cluster nodes for raw computational power,” said Jonathan Huppi, system administrator for HPC@USU.
“Many aspects of HPC emphasize the collaborative nature of computing research. You do not just sit with a computer in a darkened room and perform computational analysis.”
All USU faculty and student researchers may use these computers, easily and free of charge. By requesting an account on the HPC@USU website (www.hpc.usu.edu), users can quickly begin their research projects and computations on the nodes.
“The process is so simple that we rarely see most users,” said Barbara Sidwell, training, outreach and business development coordinator for HPC@USU. “The work is almost always accomplished online.”
If the heart of HPC@USU’s function is in the virtual world, what is the value of housing it on USU’s campus? “Cloud computing” has become a media buzzword, and several resources, including Amazon’s Elastic Compute Cloud and national laboratories, offer some HPC services.
Most publicly available clouds are optimized for services that don’t require a fully dedicated server, or don’t need to be available all the time. In contrast, jobs on a supercomputer run continuously until they are completed.
“On-campus supercomputing resources eliminate the wait lines and high costs of running computations on external computers,” said Huppi. “Researchers at universities or laboratories with large supercomputers sometimes have to wait weeks or months for computation time, and commercial services can be prohibitively expensive.”
Some researchers new to USU have been able to revisit otherwise abandoned research questions that were unsolvable because they couldn’t get access to a supercomputer at other institutions, said Huppi.
In that way, HPC@USU has become an asset for USU in terms of faculty recruitment. Offering inexpensive, quick, reliable access to these computers is an attractive draw for new faculty members, opening doors to new research problems from all disciplines. Those researchers also become better candidates for external grants, having access to tools necessary to solve complex research problems.
At the same time, supercomputing experience develops outstanding careers for graduate and undergraduate students involved in HPC. Graduate students from HPC@USU now work and study at, among others, the University of Alberta in Edmonton, Canada; Idaho National Laboratory; the University of Utah; the Department of Defense; the U.S. Geological Survey; and Sandia National Laboratories.
“Researchers from all disciplines who have computational analysis skills are in demand on an international scale, as are the system administrators and application specialists who help develop and maintain the hardware and software solutions to enable the research,” said Spall.
HPC@USU’s researchers are solving new and exciting problems, yet new results raise additional questions to be explored. Climate modeling research, for example, can currently use HPC resources to achieve an impressive 3 km scale. Soon, though, existing HPC technologies will not be powerful enough to run new models. Instead, the next generation of sophisticated computational resources will be required.
The next version of HPC resources may still look like black boxes, but looks can be deceiving.
What’s in the HPC@USU Queue
A sampling of the research solutions being aided by USU high performance computing resources:
Alexander Boldyrev (Chemistry and Biochemistry)
Boldyrev’s group is developing the theory of chemical bonding in pure and mixed atomic clusters, in an effort to design novel cluster-based materials, catalysts and nanodevices.
Tyler Brough (Economics and Finance)
Brough is working to create a dynamic structural model of optimal liquidity trading. He uses data of all transitions and trades from a single firm to create recommendations on how to balance patient and quick trades.
Noelle Cockett and Jill Maddox (University of Melbourne, Australia) (Animal, Dairy and Veterinary Sciences)
Cockett and Maddox are part of a consortium working to sequence the sheep genome. They are using HPC to integrate small genetic variations onto linkage maps.
Clay Isom (Animal, Dairy and Veterinary Sciences)
Isom uses HPC to understand early pig embryo development using new gene sequencing techniques. A pig genome is represented by small genetic profiles stored in a database, and can be compared with other profiles.
Jiming Jin (Natural Resources and Watershed Sciences)
Jin is working with regional weather and climate modeling and analysis, hydrological modeling and the interactions between the land surface and the atmosphere.
Michael Johnson and Steven Barfuss (Utah Water Research Laboratory)
Johnson and Barfuss use HPC@USU to conduct computer modeling and design of water systems, sewer systems, stormwater conveyance systems, hydrologic events and floods.
Timothy Taylor (Biological Engineering) and Robert Spall (Mechanical and Aerospace Engineering)
Taylor and Spall are working with Thermo Fisher Scientific to investigate fluid mixing in single-use bioreactors. Spall, specifically, is using HPC to perform the computational fluid dynamics analysis.
High Performance Games
HPC@USU is a valuable resource for breaking new research ground and creating and attracting new talent. But it can also be about games.
Few things are more exciting than hacking and espionage. USU students used HPC@USU’s Viz Lab resources last November to compete in the International Capture the Flag, a security competition held to test the security skills of the participants. Seventy-two teams with 900 participants from 16 countries competed in the game of hacking, challenge-solving, and state-sponsored warfare. The USU team led the competition in its early stages, but the Plaid Parliament of Pwning, a team from Carnegie Mellon University, ultimately won.