The Pacific Northwest National Laboratory’s Electricity Infrastructure Operations Center. (Photo: PNNL.)

Most days, we use electricity without much thought – until, say, a stormy day causes outages. Yet no matter the weather, a group of specialized workers called operators keeps an eye on the electrical grid and ensures that it functions as seamlessly and safely as possible.

Electricity gets to consumers in a series of complex steps. First, energy generated at its source – dams, wind, solar, fossil fuels, nuclear – is stepped up to high voltage at substations so it can travel long distances over transmission lines. Closer to its destination, substations step the voltage back down, and distribution lines carry the electricity to homes, offices and other places of commerce.

The operator’s job is challenging and demanding because power networks inevitably experience faults. Generators trip. The demand for electricity can fluctuate wildly, as it did this past summer when customers tried to stay cool during record heat waves. All the while, consumers expect to get the electricity they’re paying for. In all of these scenarios, operators look for anomalies and adjust for them in real time to ensure that electricity flows and doesn’t go to waste.

Over the past two decades, researchers have built tools based on machine learning and artificial intelligence to help operators manage changes in the grid. At the Pacific Northwest National Laboratory, researchers have designed a tool that monitors the grid for irregular voltage disturbances in real time. Programmers have also developed ways to forecast fluctuations in energy prices and in how much power must be generated.

Yet these tools aren’t widely used. In 2016, the energy research firm zPryme surveyed 200 North American utility companies and found that just under a third of operators with machine learning at their disposal were using it. Only about 20% of respondents in the same survey understood how machine learning could help them do their jobs.

Human factors researchers Corey K. Fallon and Brett A. Jefferson of PNNL want to boost those numbers. They are searching for ways to get operators not only to use these algorithms but also to trust them and work with them to enhance their stations’ performance.

“We want to make sure that these tools take human factor variables into consideration – workload, fatigue, time pressure,” Fallon says. “Otherwise, the tools are not going to be useful. They won’t get adopted.”

These tools have the potential to help operators do their job faster, which is important because time is of the essence. Often, operators must act in 30 minutes or less when something requires a fix. “There isn’t a lot of time to sit around and contemplate,” Fallon says. Machine learning and artificial intelligence algorithms can give operators information almost instantly, but if those computer-generated recommendations don’t make sense or are inaccurate, operators might ignore those suggestions in the future.

Machine learning can also run into bottlenecks, Jefferson notes: an algorithm sometimes encounters scenarios it can’t solve.

Together, Fallon, Jefferson and colleagues have devised techniques for evaluating machine learning tools for operators, with the ultimate goal of improving user acceptance.

Every five minutes, operators run grid software that scans for violations that could disrupt service. The grid is designed to remain operational even if up to two of its components go down. Beyond that, operators must strategically ramp power up or down or divert it in one direction or another.
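
That description maps onto a classic reliability check: enumerate possible outages and flag any combination that leaves the system unable to serve demand. As a rough, purely illustrative Python sketch – the line names, capacities and demand below are made up, and real contingency software models power flow in far more detail:

```python
from itertools import combinations

# Toy system: each transmission line has a capacity limit (MW).
# All names and numbers are illustrative, not from any real grid.
LINES = {"A": 400, "B": 300, "C": 300, "D": 200}
DEMAND = 600  # total load that must be served (MW)

def scan_contingencies(max_outages=2):
    """Flag outage combinations that leave too little capacity to meet demand."""
    flagged = []
    for k in range(1, max_outages + 1):
        for outage in combinations(LINES, k):
            remaining = sum(cap for name, cap in LINES.items() if name not in outage)
            if remaining < DEMAND:
                flagged.append((outage, remaining))
    return flagged

for outage, remaining in scan_contingencies():
    print(f"Losing {outage} leaves {remaining} MW for {DEMAND} MW of demand")
```

In this toy system, every single-line outage is survivable, but some pairs of outages are not – exactly the cases where an operator would need to reroute power or adjust generation.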

One tool, called ACAT (a contingency analysis tool), is built on a neural network and can provide recommendations for operators in those scenarios. Jefferson and his colleagues are interested in whether the recommendations are feasible and whether operators can maintain situational awareness of their other tasks as they work with the software.

From a study of one operator, Jefferson and his team found cases where the tool wasn’t useful. In some situations, the recommendations contradicted operational procedures. Other times, the software referred to substations by number rather than by name, which is what operators were used to. Despite the small sample size, Jefferson has shared these results with other utility operators and engineers. He hopes that continuing to study these human preferences can help improve the tool’s accuracy in future iterations.

Most machine learning algorithms give the end user not only a result but also a confidence score, which tells the user how certain the algorithm is about its decision.

But those scores are imperfect, Fallon says, and algorithms can misclassify events. An algorithm can be highly confident about an outcome, for instance, yet settle on the wrong recommendation. The converse can also be true: the algorithm can lack confidence in an event it has categorized correctly. “When you have confidence scores that are misaligned with actual underlying performance, that can erode trust,” Fallon notes.
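
That kind of misalignment can be measured by grouping events by the algorithm’s stated confidence and comparing it with how often the algorithm was actually right. A minimal Python sketch of such a calibration check, using synthetic numbers rather than any real tool’s output:

```python
import numpy as np

# Synthetic example: stated confidence scores and whether each call was correct.
# The model here is deliberately overconfident to show the mismatch.
rng = np.random.default_rng(0)
confidence = rng.uniform(0.5, 1.0, size=1000)   # what the algorithm claims
correct = rng.random(1000) < confidence * 0.8   # actual accuracy runs lower

# Bin events by stated confidence and compare claim vs. reality in each bin.
edges = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"stated {confidence[mask].mean():.2f} -> actual {correct[mask].mean():.2f}")
```

A well-calibrated tool would print matching pairs; a persistent gap between stated and actual accuracy is the trust-eroding mismatch Fallon describes.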

Fallon and colleagues have developed a method of generating subject-expert-derived confidence scores. In this approach, experts with the same background and training as an operator stand in for the algorithm. Mirroring the learning phase an algorithm would undergo, they spend about 10 hours examining data that the machine learning model has classified correctly and incorrectly, and that information shapes their picture of how the model performs.

Then each expert goes through a scoring phase, sorting training data that the algorithm has not yet classified. The experts assign each event a score that reflects their confidence in the ML’s ability to classify it.

From an initial, unpublished analysis, Fallon says the humans do a better job of predicting the ML’s performance than the algorithm itself does, offering a glimpse of the ML’s blind spots. Developers can use the results to improve the software in future iterations.
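
One way to frame that comparison is to ask how well each set of confidence scores ranks the ML’s correct classifications above its incorrect ones. The Python sketch below is hypothetical – the scores are invented and the ranking measure is a generic one, not the team’s unpublished method:

```python
import numpy as np

def ranking_score(scores, correct):
    """Probability that a randomly chosen correct event outscores an incorrect one
    (0.5 = no better than chance, 1.0 = perfect ranking)."""
    scores = np.asarray(scores, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    pos, neg = scores[correct], scores[~correct]
    wins = (pos[:, None] > neg[None, :]).mean()   # pairwise comparison; fine at toy scale
    ties = (pos[:, None] == neg[None, :]).mean()
    return wins + 0.5 * ties

# Invented events: was the ML right, and how confident were the model and the expert?
ml_correct  = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
model_conf  = [0.9, 0.8, 0.9, 0.7, 0.8, 0.6, 0.9, 0.5, 0.7, 0.8]
expert_conf = [0.9, 0.8, 0.3, 0.8, 0.4, 0.3, 0.9, 0.6, 0.5, 0.7]

print("model's own scores:   ", round(ranking_score(model_conf, ml_correct), 2))
print("expert-derived scores:", round(ranking_score(expert_conf, ml_correct), 2))
```

If the expert-derived scores separate the ML’s successes from its failures more reliably than the model’s own scores do – as in this contrived example – they would flag exactly the blind spots Fallon’s analysis points to.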

The PNNL group will next take its human factors testing to the lab’s Electricity Infrastructure Operations Center – an environment that simulates an operator’s control room. By bringing in experts, Fallon and Jefferson can observe interactions between people and various ML tools in real time.

Says Fallon, “It’s going to take the human factors work that we’re doing here to the next level and allow us to hopefully make a difference in the technology design and integration of these tools.”

Wudan Yan
