Grids grasp at multiple threads to block blackouts
Analyzing the consequences of power grid component failures requires tapping data that in some ways are as far-flung as the components themselves, a Pacific Northwest National Laboratory (PNNL) researcher says.
Graph analysis, the basis for a PNNL algorithm to analyze contingency violations – generator failures, line overloads and the like – deals with significantly uneven data, says Zhenyu “Henry” Huang, a senior research engineer in the Energy Technology Development Group. “The data can be anywhere in the memory and it’s non-uniform.”
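PNNL's actual selection algorithm isn't spelled out here, but the flavor of graph-based contingency screening can be suggested with a stand-in metric: the sketch below ranks line outages by edge betweenness centrality on a toy network using the networkx library. The topology, the choice of metric and the function names are illustrative assumptions, not the lab's code.

```python
# Illustrative only: rank candidate line-outage contingencies with a
# graph metric (edge betweenness) as a stand-in for a selection step.
import networkx as nx

def rank_line_contingencies(branches, top_k=5):
    """branches: iterable of (from_bus, to_bus) transmission lines."""
    g = nx.Graph()
    g.add_edges_from(branches)
    # Lines that carry many shortest paths are likely to matter if they fail.
    scores = nx.edge_betweenness_centrality(g)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

if __name__ == "__main__":
    # A made-up 6-bus network; bus numbers and topology are invented.
    lines = [(1, 2), (1, 4), (2, 3), (2, 4), (3, 5), (4, 5), (5, 6), (4, 6)]
    print(rank_line_contingencies(lines, top_k=3))
```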
Most of today’s high-performance computers slow down under such conditions, but multithreaded machines, like PNNL’s Cray XMT, are designed to handle calculations involving big sets of scattered data. Its Threadstorm processors simultaneously manage up to 128 threads each to minimize memory bottlenecks.
The XMT’s hybrid architecture also helps. Under the scheme the researchers devised, the Threadstorm nodes would select the contingencies while nodes equipped with AMD Opteron processors would do the actual analysis.
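As a rough sketch of that division of labor, and only under the assumption that screening produces a short list that a separate pool of workers then examines, the following Python stub separates a selection stage from a parallel analysis stage. The scoring thresholds and the placeholder "analysis" function are invented for illustration and are not a power-flow solver.

```python
# Sketch of the split Huang describes: one stage screens contingencies,
# a separate pool of workers analyzes the survivors in parallel.
from multiprocessing import Pool

def select_contingencies(all_outages):
    # Stand-in for the graph-based screening (the Threadstorm side).
    return [c for c in all_outages if c["score"] > 0.5]

def analyze_contingency(outage):
    # Stand-in for detailed contingency analysis (the Opteron side).
    return outage["line"], "violation" if outage["score"] > 0.8 else "ok"

if __name__ == "__main__":
    candidates = [{"line": (1, 2), "score": 0.9},
                  {"line": (2, 3), "score": 0.6},
                  {"line": (4, 5), "score": 0.2}]
    selected = select_contingencies(candidates)
    with Pool() as pool:                      # parallel analysis stage
        print(pool.map(analyze_contingency, selected))
```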
In a test on a typical interconnection-wide grid, the graph analysis algorithm ran faster on just 64 of the XMT’s processors than on a standard shared-memory computer, Huang says. The researchers expect better performance with a larger number of processors.
They’ve also compared their data with actual power grid analysis results and found an agreement of about 70 percent – that is, the graph analysis algorithm captures 70 percent of the important contingencies. Huang says agreement should increase as the team improves the algorithm.
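One way such an agreement figure can be computed is as a capture rate: the share of contingencies flagged as important by the full analysis that the screening step also selected. The sets in the snippet below are invented to reproduce a 70 percent figure; they are not the team's data.

```python
# How a "70 percent agreement" figure can be computed: the fraction of
# contingencies flagged by the full analysis that the graph-based
# screening also selected. The sets below are illustrative.
def capture_rate(selected, important):
    """Share of truly important contingencies the screening caught."""
    important = set(important)
    return len(set(selected) & important) / len(important)

important = {"L12", "L23", "L35", "L45", "L56", "L46", "L14", "L25", "L36", "L13"}
selected  = {"L12", "L23", "L35", "L45", "L56", "L46", "L14", "L99", "L77", "L88"}
print(f"capture rate: {capture_rate(selected, important):.0%}")  # -> 70%
```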
The researchers currently run the contingency selection algorithm on the XMT’s Threadstorm nodes and the contingency analysis on a separate parallel cluster computer. The team is improving communication between the XMT and the cluster as a step toward running the entire code on the XMT, with data passing directly between its Threadstorm and Opteron nodes.
The team also is devising ways to display contingency analysis data. “One idea is to visualize the information on a map,” Huang says, “which we believe is the most intuitive way to present information to operators so they can quickly link the numbers to actual locations in the power grid.”
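A minimal sketch of that kind of display, assuming bus coordinates and severity scores are already available, might plot substations at their longitude and latitude and color them by contingency severity. Every name and value in the example below is invented for illustration.

```python
# Sketch of a map-style display: plot buses at their coordinates and
# color them by contingency severity. All data here are made up.
import matplotlib.pyplot as plt

buses = {"Bus A": (-122.3, 47.6), "Bus B": (-121.5, 45.5), "Bus C": (-119.0, 46.2)}
severity = {"Bus A": 0.2, "Bus B": 0.9, "Bus C": 0.5}   # 0 = fine, 1 = severe

lons = [buses[b][0] for b in buses]
lats = [buses[b][1] for b in buses]
colors = [severity[b] for b in buses]

plt.scatter(lons, lats, c=colors, cmap="RdYlGn_r", s=200)
plt.colorbar(label="contingency severity")
for name, (lon, lat) in buses.items():
    plt.annotate(name, (lon, lat), textcoords="offset points", xytext=(5, 5))
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("Contingency severity by location (illustrative)")
plt.show()
```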
In the long run, Huang sees at least two ways transmission operators can meet the demand contingency analysis creates for major computing power.
First, utilities could acquire powerful parallel computers – but not necessarily top-flight supercomputers – that will let them analyze the power system quickly enough to take action, Huang says.
Second, companies could work with supercomputer facilities like PNNL and other DOE laboratories.
“That computer can provide a service to run analyses and push the results back to the control rooms,” Huang adds. Grid operators “don’t have to have computers. They only care about the information.”
About the Author
The author is a former Krell Institute science writer.