Type a person’s name into the Google image search engine and the results are likely to vary wildly. You may find pictures of the person you’re seeking, but you’re also likely to see completely irrelevant images just because their name appears on the same web page.

You might have better luck if your computer could analyze a picture of the person you want, then search through millions of other images — even hours of videotape — to find someone who looks identical or similar. Ideally, the computer could match the faces regardless of whether the subject is in bright or low light, is only partially facing the camera or is near or far.

That’s exactly what two Pacific Northwest National Laboratory (PNNL) researchers have done. Their algorithms analyze millions of video frames, pluck out the faces and quantify them to create searchable databases for facial identification.

“We’re measuring the information content of a face much like Google” analyzes written web material, says Harold Trease, a PNNL computational physicist. “What they do for text searching we’re trying to do for video and image processing.”

A program that picks faces out of streaming or recorded video and identifies them regardless of conditions could be useful in many areas, but for Trease and Rob Farber, a PNNL senior research scientist, it’s just a test case.

“It doesn’t have to be webcams,” Farber adds. “This is ‘a first toe in the water’ work” to prove the concept on massive amounts of unstructured data and high-performance computers. The algorithms could be generalized to work with almost any set of digital images to identify a variety of objects, including hidden roadside bombs and tumors.

Face recognition is especially tricky in conditions where light levels, sizes and angles change constantly. For instance, humans typically have few problems recognizing people regardless of whether they’re close or somewhat distant, but computers aren’t as adept. So facial recognition algorithms must have “scale invariance” — the ability to pick a face out of video regardless of its distance from the camera.

Likewise, a successful algorithm must have a degree of “rotation invariance” — the ability to distinguish faces that aren’t facing the camera head-on. And it must have “translational invariance” — the ability to extract faces or other target objects in a video even if they’re moving within the frame.
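Translational invariance, the simplest of the three, can be illustrated with a toy sliding-window search. This is a hypothetical sketch, not the PNNL authors’ method: it scans a one-dimensional “frame” for a pattern at every possible offset, so the match succeeds no matter where the target has moved. Real detectors do the same over two-dimensional images, typically at multiple scales.

```python
def find_pattern(frame, pattern):
    """Translation-invariant search: slide a window across 'frame'
    and test every offset for an exact match of 'pattern'.
    (Toy 1-D example; real systems scan 2-D frames at many scales.)"""
    n, m = len(frame), len(pattern)
    for i in range(n - m + 1):
        if frame[i:i + m] == pattern:
            return i  # found, regardless of where the pattern sits
    return -1  # not present in this frame

# The same "face" is found whether it appears at the start or mid-frame.
face = [7, 7, 7]
print(find_pattern([0, 0, 7, 7, 7, 0], face))  # offset 2
print(find_pattern([7, 7, 7, 0, 0, 0], face))  # offset 0
```

The brute-force scan makes the invariance explicit: the detector’s answer depends only on whether the pattern appears, not on its position.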

The first part of the algorithm, largely Trease’s work, starts with a raw red-green-blue (RGB) format video frame and transforms it to concentrate on the qualities of hue, saturation and intensity. The intensity parameter is discarded, allowing the algorithm to work regardless of lighting in the image.
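The idea of discarding the intensity channel can be sketched with Python’s standard-library `colorsys` module. This is an illustrative assumption, not the researchers’ actual code, and it uses the closely related HSV color space (where “value” plays the role of intensity) rather than a true HSI transform:

```python
import colorsys

def drop_intensity(pixel):
    """Convert an RGB pixel (0-255 integers) to a (hue, saturation)
    pair, discarding the value/intensity channel so the result is
    insensitive to how brightly the scene is lit."""
    r, g, b = (c / 255.0 for c in pixel)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return (h, s)  # v is intentionally thrown away

# A bright red and a dim red differ only in intensity, so after the
# transform they collapse to the same (hue, saturation) description.
print(drop_intensity((255, 0, 0)))  # (0.0, 1.0)
print(drop_intensity((128, 0, 0)))  # (0.0, 1.0)
```

Because lighting mostly shifts the intensity channel, two frames of the same face under bright and low light yield nearly identical hue-saturation descriptions, which is what lets the downstream matching work regardless of illumination.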


Thomas R. O'Donnell

Thomas R. O'Donnell is senior science writer at the Krell Institute and a frequent contributor to DEIXIS.
