Next, the algorithm sifts out facial “blobs” based on hue. “It turns out that skin color occupies a very narrow band in the hue dimension” regardless of race or ethnicity, Trease says. The algorithm identifies patches of skin pixels, applies edge-detection filters to separate overlapping faces, and computes the two-dimensional geometry of each blob.
Then successive constraints based on Shannon entropy measures are applied to generate a 20-attribute signature characteristic of each face. The entropy measures quantify each image’s information content.
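
A minimal sketch of those two steps might look like the Python below, assuming a placeholder hue band and a generic histogram-based Shannon entropy; the thresholds, attributes and libraries are illustrative, not the researchers’ code.

```python
# Sketch: flag skin-colored pixels by hue, then summarize a patch with
# Shannon entropy. The hue band is an illustrative placeholder, not the
# thresholds used in the actual algorithm.
import numpy as np
from matplotlib.colors import rgb_to_hsv

def skin_mask(rgb_image, hue_band=(0.0, 0.1)):
    """Return a boolean mask of pixels whose hue falls in a narrow band."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)  # hue in [0, 1)
    hue = hsv[..., 0]
    return (hue >= hue_band[0]) & (hue <= hue_band[1])

def shannon_entropy(values, bins=32):
    """Shannon entropy (in bits) of a histogram over the given values."""
    counts, _ = np.histogram(values, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example: the entropy of hue values inside a detected skin blob could be
# one of the 20 attributes in a face's signature.
frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # stand-in frame
mask = skin_mask(frame)
if mask.any():
    hue_entropy = shannon_entropy(rgb_to_hsv(frame / 255.0)[..., 0][mask])
    print(f"hue entropy of blob: {hue_entropy:.2f} bits")
```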

It’s impractical, however, to compare complex 20-dimensional signatures to identify faces or other objects extracted from a host of video frames. So the second part of the algorithm, largely based on Farber’s research, reduces those 20 dimensions to just three.

“We crunch down to low dimensions and are able to do that with high accuracy — we don’t introduce a lot of error between the different points,” Farber says. That makes it possible to use similarity metrics that can distinguish between like faces.
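
For illustration, matching on reduced signatures could be as simple as a nearest-neighbor search in three dimensions; the gallery values and names below are invented.

```python
# Sketch: once each face is reduced to a 3-D point, matching becomes a
# nearest-neighbor search under an ordinary distance metric.
import numpy as np

gallery = {                       # known identities -> 3-D reduced signatures
    "person_a": np.array([0.12, -0.40, 0.88]),
    "person_b": np.array([0.75, 0.10, -0.22]),
    "person_c": np.array([-0.31, 0.64, 0.05]),
}

def best_match(query, gallery):
    """Return the identity whose signature is closest (Euclidean) to the query."""
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - query))

query = np.array([0.10, -0.35, 0.90])     # reduced signature from a new frame
print(best_match(query, gallery))          # -> "person_a"
```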

To do the job, the researchers start with principal component analysis (PCA), a technique for identifying the main directions and trends among a group of data points. “Principal components allows us to find the smallest set of linear combinations that lets us interpret the data,” Farber says.
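
A small sketch of that reduction, using standard PCA computed via singular value decomposition, with random stand-ins for the 20-dimensional entropy signatures:

```python
# Sketch: reduce a batch of 20-dimensional signatures to their first three
# principal components with ordinary PCA (here via SVD).
import numpy as np

rng = np.random.default_rng(0)
signatures = rng.normal(size=(2000, 20))           # one 20-D signature per face

centered = signatures - signatures.mean(axis=0)    # PCA works on centered data
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:3]                                # top three principal directions

reduced = centered @ components.T                  # 2,000 faces, now 3-D each
print(reduced.shape)                               # (2000, 3)
```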

The process forces the 20-dimensional vector through a “bottleneck” of three linear “neurons” to find the three principal components that can accurately reconstruct the original signature. Data passes through the bottleneck to a set of output neurons that reconstruct the data. The process is repeated, reducing the error each time as the bottleneck neurons learn the principal components.
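
That idea can be sketched as a linear “autoencoder” whose three-neuron bottleneck learns the principal subspace by repeatedly shrinking the reconstruction error; the architecture and training loop below are illustrative, not the researchers’ implementation.

```python
# Sketch of the bottleneck: squeeze 20-D signatures through 3 linear neurons
# and learn to reconstruct them by gradient descent on the squared error.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 20))
X = X - X.mean(axis=0)                      # center the signatures

W_enc = rng.normal(scale=0.1, size=(20, 3)) # input -> 3 bottleneck neurons
W_dec = rng.normal(scale=0.1, size=(3, 20)) # bottleneck -> 20-D reconstruction
lr = 0.05

for epoch in range(500):                    # repeated passes reduce the error
    H = X @ W_enc                           # bottleneck activations
    X_hat = H @ W_dec                       # reconstructed signatures
    err = X_hat - X
    # Gradients of the mean squared reconstruction error
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

reduced = X @ W_enc                         # the 3-D representation for matching
print("reconstruction MSE:", np.mean((reduced @ W_dec - X) ** 2))
```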

The approach has proven to be extraordinarily accurate and efficient. In one test using 2,000 pictures with known identities, the algorithm correctly identified all but two faces — although Trease notes that accuracy varies with image quality, resolution and other factors. And because Farber combined the neural network with a massively parallel mapping technique he pioneered in the 1980s, the program achieves high throughput with near-linear scaling — the amount of work the computer can do rises in direct proportion to the number of processors employed.

Just because the algorithms run well on supercomputers doesn’t mean they can’t do as well on smaller machines, Farber notes. They also achieve near-linear scaling on inexpensive commodity hardware like NVIDIA graphics processing units (GPUs). Farber envisions low-power “smart sensors” that capture data, do initial data extraction and then compress the results and transmit them at low bandwidth for processing.
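
The data-parallel pattern behind that scaling can be sketched with an ordinary process pool, since each frame’s signature is computed independently; extract_signature below is a hypothetical stand-in for the real per-frame pipeline.

```python
# Sketch of the data-parallel pattern behind near-linear scaling: each frame's
# signature is extracted independently, so the work maps cleanly across however
# many processes (or nodes, or GPUs) are available.
from multiprocessing import Pool
import numpy as np

def extract_signature(frame):
    """Placeholder: compute a 20-attribute signature for one frame."""
    return np.full(20, frame.mean())

if __name__ == "__main__":
    frames = [np.random.rand(120, 160, 3) for _ in range(64)]
    with Pool() as pool:                      # uses all available cores
        signatures = pool.map(extract_signature, frames)
    print(len(signatures), "signatures")
```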

In fact, the potential applications are so wide he seems a little overwhelmed.

“We have many different directions we’re contemplating,” Farber says. “It depends on what’s going to be the best allocation of our available resources.”


Thomas R. O'Donnell

The author is a former Krell Institute science writer.
