The big face off

December 2009
Filed under: Pacific Northwest

Next, the algorithm sifts out facial “blobs” based on hue. “It turns out that skin color occupies a very narrow band in the hue dimension” regardless of race or ethnicity, Trease says. The algorithm identifies patches of skin pixels, applies edge-detection filters to separate overlapping faces, and computes the two-dimensional geometry of each blob.
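
As a rough illustration (not the laboratory's code), a hue-based skin mask followed by connected-component labeling might look like the Python sketch below; the hue band and the edge heuristic are assumptions, since the article gives no thresholds.

```python
# Illustrative sketch of hue-based skin "blob" extraction; thresholds are assumed.
import numpy as np
from matplotlib.colors import rgb_to_hsv   # RGB in [0, 1] -> HSV in [0, 1]
from scipy.ndimage import label, sobel

def skin_blobs(rgb_frame, hue_band=(0.0, 0.14)):
    """Label candidate skin-colored regions in one video frame."""
    hsv = rgb_to_hsv(rgb_frame.astype(float) / 255.0)   # frame arrives as uint8 RGB
    hue, value = hsv[..., 0], hsv[..., 2]
    # Keep pixels whose hue falls inside the assumed narrow "skin" band.
    mask = (hue >= hue_band[0]) & (hue <= hue_band[1])
    # Edge detection on brightness helps split touching faces: drop pixels
    # sitting on strong edges before labeling connected regions.
    edges = np.hypot(sobel(value, axis=0), sobel(value, axis=1))
    mask &= edges < edges.mean() + edges.std()
    blobs, count = label(mask)           # integer label per connected blob
    return blobs, count
```

From each labeled blob, two-dimensional geometry such as area, centroid and bounding box could then be read off with routines like scipy.ndimage.center_of_mass and scipy.ndimage.find_objects.
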
Then, successive constraints based on Shannon entropy measures are applied to generate a 20-attribute signature characteristic of each face. The entropy measures quantify the images’ information content.
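
The article doesn't spell out which 20 attributes make up the signature, so the Python sketch below only illustrates the kind of Shannon-entropy measure involved; the histogram bin counts and the idea of stacking entropies at several resolutions are assumptions.

```python
# Shannon entropy of a blob's pixel intensities; the real 20-attribute
# signature is not detailed in the article, so toy_signature is hypothetical.
import numpy as np

def shannon_entropy(values, bins=32):
    """Shannon entropy, in bits, of a histogram over the given pixel values."""
    hist, _ = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())

def toy_signature(blob_pixels):
    # One *possible* way to stack entropy measures into a vector.
    return np.array([shannon_entropy(blob_pixels, bins=b)
                     for b in (4, 8, 16, 32, 64)])
```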

It’s impractical, however, to compare complex 20-dimensional signatures to identify faces or other objects extracted from a host of video frames. So the second part of the algorithm, largely based on Farber’s research, reduces those 20 dimensions to just three.

“We crunch down to low dimensions and are able to do that with high accuracy — we don’t introduce a lot of error between the different points,” Farber says. That makes it possible to use similarity metrics that can distinguish between similar-looking faces.
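
The article doesn't name the similarity metric, so the sketch below assumes plain Euclidean distance in the reduced three-dimensional space; nearest_face and gallery_xyz are hypothetical names, not the researchers' API.

```python
import numpy as np

def nearest_face(query_xyz, gallery_xyz):
    """Index of, and distance to, the closest face in a gallery of reduced signatures.

    query_xyz is a length-3 vector; gallery_xyz is an (n_faces, 3) array.
    Euclidean distance is an assumption, not the team's stated metric.
    """
    d = np.linalg.norm(gallery_xyz - query_xyz, axis=1)
    i = int(np.argmin(d))
    return i, float(d[i])
```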

To do the job, the researchers start with principal component analysis (PCA), a technique for identifying the main directions and trends among a group of data points. “Principal components allows us to find the smallest set of linear combinations that lets us interpret the data,” Farber says.
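
A generic PCA reduction of the 20-dimensional signatures, written with NumPy's singular value decomposition, is sketched below; it stands in for, rather than reproduces, the laboratory's implementation.

```python
import numpy as np

def pca_reduce(signatures, k=3):
    """Project an (n_faces, 20) array of signatures onto its top-k principal components."""
    centered = signatures - signatures.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T           # (n_faces, k) reduced coordinates
```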

The process forces the 20-dimensional vector through a “bottleneck” of three linear “neurons” to find the three principal components that can accurately reconstruct the original signature. Data passes through the bottleneck to a set of output neurons that rebuild the original 20-dimensional signature. The cycle repeats, reducing the reconstruction error each time as the bottleneck neurons learn the principal components.
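
That description matches a linear autoencoder. A minimal NumPy sketch of one, trained by plain gradient descent, follows; the learning rate, step count and random initialization are arbitrary choices, not values from Farber's code.

```python
import numpy as np

def bottleneck_reduce(X, k=3, lr=1e-3, steps=5000, seed=0):
    """Linear autoencoder: 20 inputs -> k bottleneck neurons -> 20 outputs.

    The network is trained to reconstruct its own input by gradient descent
    on the mean squared reconstruction error; hyperparameters are illustrative.
    """
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                        # center the signatures
    n, d = Xc.shape
    W_enc = rng.normal(scale=0.1, size=(d, k))     # input -> bottleneck
    W_dec = rng.normal(scale=0.1, size=(k, d))     # bottleneck -> output
    for _ in range(steps):
        Z = Xc @ W_enc                             # bottleneck activations
        E = Z @ W_dec - Xc                         # reconstruction error
        grad_dec = (Z.T @ E) / n                   # gradient of 0.5 * MSE
        grad_enc = (Xc.T @ (E @ W_dec.T)) / n
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    return Xc @ W_enc                              # 3-D coordinates per face
```

Because both the network and the error measure are linear-quadratic, the learned three-dimensional code converges to the span of the top three principal components, which is why this bottleneck scheme recovers PCA rather than an arbitrary projection.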

The approach has proven to be extraordinarily accurate and efficient. In one test using 2,000 pictures with known identities, the algorithm correctly identified all but two faces, although Trease notes that accuracy varies with image quality, resolution and other factors. And because Farber combined the neural network with a massively parallel mapping technique he pioneered in the 1980s, the program achieves high throughput with near-linear scaling: the amount of work the computer does rises in direct proportion to the number of processors employed.

Just because the algorithms run well on supercomputers doesn’t mean they need one, Farber notes. They also achieve near-linear scaling on inexpensive commodity hardware such as NVIDIA graphics processing units (GPUs). Farber envisions low-power “smart sensors” that capture data, do initial data extraction on board, then compress the results and transmit them at low bandwidth for further processing.

In fact, the potential applications are so wide he seems a little overwhelmed.

“We have many different directions we’re contemplating,” Farber says. “It depends on what’s going to be the best allocation of our available resources.”

About the Author

The author is a former Krell Institute science writer.
