Each row shows a segmentation of the left and right hemispheres of the human cerebral cortex, computed by a different method. Top row: the HP-Concord technique (with persistent-homology fine-tuning) produces segmentations that resemble those derived from neuroscience methods. In contrast, the segmentations in the middle and bottom rows, from other data-driven methods (middle: HP-Concord plus Louvain; bottom: thresholded sample covariance matrix plus Louvain), lack fine detail. (Image: after a figure from “Communication-Avoiding Optimization Methods for Distributed Massive-Scale Sparse Inverse Covariance Estimation.” Penporn Koanantakool, Alnur Ali, Ariful Azad, Aydın Buluç, Dmitriy Morozov, Sang-Yun Oh, Leonid Oliker, and Katherine Yelick. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, 2018.)

By the time Alnur Ali began his Department of Energy Computational Science Graduate Fellowship (DOE CSGF) in 2014, he’d already charted an unusual path from mathematics through Microsoft to Ph.D. studies at Carnegie Mellon University.

In high school, Ali’s mathematics teacher suggested he read Surely You’re Joking, Mr. Feynman, the best-selling book co-written by the impish Nobel-winning theoretical physicist who died in 1988. Richard Feynman made major discoveries in quantum electrodynamics and superfluidity but also was conversant in safecracking and bongo drumming.

“That sparked something in me,” Ali recalls. “(Feynman) was very mischievous and very curious, and I related to that on some level. From then on I got a lot more into math.” Also, “one day my dad brought home a computer. At first I just played games on it. Then I started wondering how I could make my own game. That led to trying to figure out how to program code.”

As an undergraduate, he focused on computer science and mathematics at the University of Southern California and took a summer’s worth of physics at the University of Cambridge in England.

But he graduated with “no real way to combine those three areas of studies,” Ali now recalls. He spent 2004 to 2013 as a Microsoft software engineer, setting aside his academic ambitions for industry but picking up new skills in the process.

At first he focused on important problems in information retrieval, like ranking and query rewriting to improve Microsoft’s Bing search engine. But the experience also sowed seeds for a future in machine learning. This growing field develops algorithms that let computers learn patterns from data rather than follow explicit instructions, sometimes via neural networks loosely modeled on the brain. Microsoft exposed him to “a ton of experts (who) showed me what it meant to be a machine-learning researcher early on, what kind of questions to ask and how to think about problems.”

That work combines making such models useful in the real world with understanding, mathematically, how they behave, Ali explains. “I was hooked right away.”

Working at Microsoft offered him opportunities to collaborate with other Seattle researchers, including the statistics faculty at the University of Washington. But the pull back toward academic research was strong. “I always knew I wanted to go back to graduate school,” he says. Ali enrolled at Carnegie Mellon as a doctoral candidate in machine learning in 2013, and his research took him to Stanford University during 2014.

His decision bucks a trend. Software engineers are in high demand, so it’s unusual for students to start a Ph.D. in computer science after years away, says Zico Kolter, an assistant professor of computer science at Carnegie Mellon University.

“Alnur was indeed exceptional both in his drive to come back to grad school and in his ability to find a way to do research while being employed at a large company. My impression is he is drawn to understanding mathematical algorithms at a more fundamental level than one is typically exposed to in industry,” adds Kolter, who served as Ali’s first Ph.D. advisor.

“Machine learning is everywhere these days,” Ali says. “So it’s important to theoretically understand the pros and cons of different methods.” Those include “their predictive accuracy, how long they take to run and how easy they are to use.”

“In part, that boils down to working a bunch of math to explain these tradeoffs. The other part is applying machine learning in new ways that can help people, which sometimes requires developing faster algorithms. As I’ve gone through my Ph.D., I’ve wanted to do more good for society at large.”

Two machine-learning methods Ali studied for his 2015 DOE CSGF practicum at DOE’s Lawrence Berkeley National Laboratory have influenced his approach.

The first, the inverse covariance matrix, captures how variables in a dataset are related to one another: a zero entry indicates that two variables are independent once the others are accounted for, he says. The second, pseudo-likelihood, is a computationally cheaper stand-in for the full likelihood that is better at estimating such relationships when you have a lot of measurements, as in big data.
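A small sketch can make the first idea concrete. In the hypothetical three-variable example below (not from the paper), variables 0 and 2 interact only through variable 1, so the corresponding entry of the inverse covariance (precision) matrix is exactly zero even though the two variables are still marginally correlated:

```python
import numpy as np

# A precision (inverse covariance) matrix for a three-variable "chain":
# variable 0 and variable 2 interact only through variable 1, so their
# precision entry is exactly zero (conditional independence).
precision = np.array([
    [ 2.0, -1.0,  0.0],
    [-1.0,  2.0, -1.0],
    [ 0.0, -1.0,  2.0],
])

# The covariance matrix is the inverse of the precision matrix.
covariance = np.linalg.inv(precision)

# Marginally, variables 0 and 2 are still correlated...
print(covariance[0, 2] > 0)       # True: nonzero marginal covariance

# ...but the zero in the precision matrix records that the relationship
# vanishes once variable 1 is accounted for.
print(precision[0, 2] == 0.0)     # True
```

This distinction, marginal correlation versus conditional dependence, is what makes the inverse covariance matrix attractive for mapping direct relationships among many variables.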

He employed both methods for a 2017 paper presented at the International Conference on Artificial Intelligence and Statistics that focused on so-called high-dimensional, or big data, situations, where the overall number of variables can be much greater than the number of data samples. He and his collaborators, including practicum advisor Sang-Yun Oh, demonstrated how those methods apply to real-world situations such as finance and wind-power data.
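The high-dimensional regime is exactly where the naive approach breaks down. The illustrative snippet below (synthetic data, not from the paper) shows why: with fewer samples than variables, the sample covariance matrix is singular, so it cannot simply be inverted to obtain a precision matrix, which motivates regularized estimators like the pseudo-likelihood methods Ali studied:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50  # fewer samples than variables: the "high-dimensional" regime
X = rng.standard_normal((n, p))

# The sample covariance matrix is p x p but has rank at most n - 1,
# so it is singular and cannot be inverted directly.
S = np.cov(X, rowvar=False)
rank = np.linalg.matrix_rank(S)
print(rank < p)  # True: direct inversion of S fails

# Regularized estimators (graphical lasso, pseudo-likelihood methods
# such as CONCORD) add a sparsity penalty so the estimated inverse is
# well-defined and interpretable even when p greatly exceeds n.
```

The sparsity penalty does double duty: it makes the problem solvable and yields a network of direct relationships that is sparse enough to interpret.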

With Kolter and Ryan Tibshirani – his second doctoral advisor – Ali used those techniques in 2016 to study multiyear spreads of influenza around the United States.

Ali is now working with Oh to characterize functional relationships between brain regions, with a single voxel location within a functional magnetic resonance imaging brain scan as the smallest unit. Oh is now an assistant professor of statistics and applied probability at the University of California, Santa Barbara.

For the ongoing brain study and the 2017 paper, Oh and Ali have relied on Edison, Berkeley Lab’s massive, 2.57-petaflops supercomputer.

“Data-driven scientific discovery is an important trend in many research fields,” Oh says. “Alnur’s research interest and skill set is in the sweet spot of this trend.” Ali, he adds, also is “a kind and considerate person with grounded optimism.”

Ali is grateful for the fellowship’s financial support and for the knowledge and experiences it has provided. It has made him “more interested in doing work on the theoretical side (without) losing sight of the real-world applications. Also, I am more willing to take risks with my research.”

Tibshirani, a Carnegie Mellon associate professor of statistics who also focuses on machine learning, agrees that Ali is “passionate about doing research that has social value. He also has a way of speaking and describing his research that is easy for others outside his specialty to understand. That is something that is mature and hard to teach.”

A longer version of this article appears in the current issue of the print research annual DEIXIS.

Monte Basgall
