Source: PNNL Press Release, January 17, 2012. Energy storage, power grid development benefiting from 162-Teraflop system.
A new, 162-Teraflop peak supercomputer at the Department of Energy’s Pacific Northwest National Laboratory is helping scientists do more complex, advanced research in areas such as energy storage and future power grid development. It also uses less energy than similar computers because of its unique, water-fed cooling system.
With the ability to compute as fast as about 20,000 typical personal computers combined, the Olympus supercomputer is the first large-scale computer exclusively available to PNNL researchers and their collaborators.
“Taking a cue from Washington state’s Mount Olympus, this computer is enabling PNNL scientists to reach new scientific heights — and at a low cost,” said Kevin Regimbal, director of the new PNNL Institutional Computing program. “PNNL has pooled its resources in a tough economy to build the best possible computational resource that will enable new scientific discoveries.”
Previously, PNNL research staff purchased smaller computer systems for their specific research project needs, but the size and power of those systems were limited by individual project budgets. Now any PNNL research project can use Olympus.
“PNNL is getting more computer power for its investment, since costs are reduced when we purchase components in large volumes,” Regimbal said. The system’s larger size also allows scientists to complete significantly more complex calculations, which help them dig deeper into their research areas, he added.
The initial purchase and installation of Olympus cost $4.4 million. About $3.9 million of that came from internal lab funding for general computing capabilities, while $500,000 came from individual PNNL research projects that invested in specific capabilities needed for their work.
Energy-efficient cooling
Unlike other large-scale computers, Olympus doesn’t use air conditioning to remain cool. Instead, it uses water. The novel system uses a closed loop of water that absorbs the heat generated by Olympus as it crunches data.
The system is expected to use about 70 percent less energy than traditional computer cooling with air conditioning, which could save PNNL as much as $61,000 a year on Olympus’ cooling costs.
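To see what those figures imply, here is a minimal back-of-the-envelope sketch in Python. The 70 percent reduction and the $61,000 maximum annual savings come from this release; the implied baseline cooling cost is derived from them under the assumption that cooling cost scales with cooling energy, and is not a published PNNL figure.

```python
# Back-of-the-envelope check of the cooling claims above.
# From the release: ~70% less cooling energy, up to $61,000/year saved.
# Assumption: cooling cost scales linearly with cooling energy used.

max_savings_per_year = 61_000   # dollars/year, quoted maximum savings
energy_reduction = 0.70         # fraction of cooling energy avoided

# The quoted savings imply a traditional air-conditioned baseline of:
implied_baseline = max_savings_per_year / energy_reduction
implied_water_cooled = implied_baseline - max_savings_per_year

print(f"Implied air-conditioned cooling cost: ~${implied_baseline:,.0f}/year")
print(f"Implied water-cooled cooling cost:    ~${implied_water_cooled:,.0f}/year")
# Prints roughly $87,143/year versus $26,143/year.
```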
Discovery through computation
Olympus is the heart of the new PNNL Institutional Computing program, which aims to advance scientific discovery through computational science. The cluster became fully operational in mid-October 2011 and is already working on many PNNL research projects. Olympus is helping researchers analyze how future power grids could operate and design better batteries for energy storage.
The system will also be used to improve computer models developed at PNNL, such as the NWChem computational chemistry suite and STOMP, which simulates the movement of water and contaminants below ground. And PNNL is encouraging its scientists who may not normally use computation as part of their research to consider incorporating it in their next project with the help of the new system.
“High performance computing and simulation will be essential to future scientific discoveries. Olympus allows PNNL to be a player in that future,” said Steven Ashby, PNNL’s deputy director of science & technology. “It also will help us to nurture a culture of computational science that will enable our scientists and engineers to solve some of the most pressing problems facing the nation.”
Olympus Fast Facts:
- Theoretical peak processing speed of 162 Teraflops, meaning Olympus can complete computations as fast as about 20,000 typical personal computers combined (see the consistency sketch after this list).
- 80 Gigabytes per second of disk bandwidth, meaning it can read and write information to a disk about 800 times faster than a typical personal computer.
- 38.7 Terabytes of total memory, equaling the memory of about 10,000 typical personal computers combined.
- 4 Petabytes of total disk space provided by Advanced HPC. The system’s disk space is the same as about 4,000 typical personal computers or about 850,000 standard DVDs (4.7 Gigabytes each) combined.
- 604 compute nodes provided by Atipa, each with dual AMD Interlagos 16-core processors (more than 1,200 processors in total)
- About 3.75 miles of interconnect cable provided by Atipa, including a 648-port QLogic core switch
- Motivair Chilled Door rear-door rack cooling system
- A graphics processing unit (GPU) testbed of 32 nodes, with each node consisting of dual AMD Interlagos 16-core processors running at 2.1 GHz, 64 Gigabytes of memory, 1 Terabyte of local disk space, a Quad Data Rate InfiniBand network and one NVIDIA Tesla M2090 GPU.
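The headline figures in this list are mutually consistent, as the short Python sketch below illustrates. The node, socket and core counts are taken from the fast facts, and the sketch assumes the main compute nodes run at the same 2.1 GHz clock quoted for the GPU-testbed nodes. The per-module floating-point rate (Interlagos pairs cores into modules that share one floating-point unit capable of 8 double-precision flops per cycle) and the "typical personal computer" baselines (about 8 GFLOPS, 100 Megabytes per second of disk bandwidth, 4 Gigabytes of memory and 1 Terabyte of disk) are assumptions introduced here to reproduce the comparisons, not figures from the release.

```python
# Consistency check of the Olympus fast facts.
nodes = 604                 # compute nodes (from the fast facts)
sockets_per_node = 2        # dual-socket Interlagos nodes
cores_per_socket = 16
clock_ghz = 2.1             # assumed same clock as the GPU-testbed nodes

# Assumption: Interlagos (Bulldozer) pairs cores into modules that share
# one floating-point unit retiring 8 double-precision flops per cycle.
modules_per_socket = cores_per_socket // 2
flops_per_cycle_per_module = 8

peak_tflops = (nodes * sockets_per_node * modules_per_socket
               * flops_per_cycle_per_module * clock_ghz) / 1000.0
print(f"Theoretical peak: {peak_tflops:.1f} TFLOPS")  # ~162.4, matching 162

# Assumed "typical personal computer" baselines behind the comparisons:
pc_gflops = 8.1       # 162,000 GFLOPS / 8.1 GFLOPS -> 20,000 PCs
pc_disk_mb_s = 100    # 80,000 MB/s / 100 MB/s      -> 800x
pc_memory_gb = 4      # 38,700 GB / 4 GB            -> ~9,700 (release rounds to 10,000)
pc_disk_tb = 1        # 4,000 TB / 1 TB             -> 4,000 PCs

print(f"PC equivalents (compute):    {162e3 / pc_gflops:,.0f}")
print(f"Disk-bandwidth ratio:        {80e3 / pc_disk_mb_s:,.0f}x")
print(f"PC equivalents (memory):     {38.7e3 / pc_memory_gb:,.0f}")
print(f"PC equivalents (disk space): {4e3 / pc_disk_tb:,.0f}")
```

Under these assumptions the computed peak is about 162.4 Teraflops, in line with the quoted 162, and the personal-computer comparisons fall out of the baselines directly.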