Program may mean cutting the tags

December 2009

Image searches typically rely on tags – text humans have attached to the pictures to identify objects or people they depict. The algorithms PNNL scientists Rob Farber and Harold Trease have created could largely eliminate tags because they recognize content automatically in massive amounts of data.

The application could make it as easy to index objects and people in hours of video as it is for search engines to find text on millions of World Wide Web pages.
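
The basic idea behind tag-free indexing can be sketched in a few lines: reduce each image or video frame to a feature vector computed from its pixels, then answer queries by nearest-neighbor search over those vectors rather than over human-written tags. The sketch below uses a simple color histogram and cosine similarity purely for illustration; these choices are assumptions and are not the PNNL algorithms described in the article.

```python
# Illustrative sketch of content-based (tag-free) indexing.
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """Flattened, normalized RGB histogram of an H x W x 3 uint8 frame."""
    hist, _ = np.histogramdd(
        frame.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    vec = hist.ravel().astype(np.float64)
    return vec / (vec.sum() + 1e-12)

def build_index(frames: list[np.ndarray]) -> np.ndarray:
    """Stack one feature vector per frame into an index matrix."""
    return np.stack([color_histogram(f) for f in frames])

def query(index: np.ndarray, example: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k frames most similar to an example image."""
    q = color_histogram(example)
    # Cosine similarity between the query vector and every indexed frame.
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(sims)[::-1][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(100)]
    index = build_index(frames)
    print(query(index, frames[42]))  # frame 42 should rank itself first
```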

So it’s no wonder that Farber and Trease have been in touch with Google and with YouTube, the video-sharing Web site Google acquired in 2006.

The PNNL codes could improve the sites’ image search functions, Trease says, but the companies are too busy to consider the prospect at present. “It’s all they can do to manage the text and the images,” which are growing exponentially.

In the meantime, Farber and Trease are refining their algorithms to better identify objects – more difficult, in some ways, than extracting and identifying faces.

“Faces have a built-in context. Objects don’t,” Trease says. To demonstrate the difference, he’ll sometimes use the algorithms to search for round yellow objects in video archives. The results include everything from golf balls to the sun.
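
A toy version of that round-yellow-object demonstration can be written as a color filter followed by a shape test: keep only yellow pixels, then keep blobs that are roughly circular. The HSV range, area threshold, and circularity cutoff below are assumptions chosen for illustration, not the values or the method used in the PNNL codes.

```python
# Hedged sketch: find roughly circular yellow blobs in a single frame.
import cv2
import numpy as np

def find_round_yellow(frame_bgr: np.ndarray, min_area: float = 50.0,
                      min_circularity: float = 0.7) -> list[tuple[int, int, int]]:
    """Return (x, y, radius) for roughly circular yellow blobs in a BGR frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for "yellow" (hue ~20-35 on OpenCV's 0-179 scale).
    mask = cv2.inRange(hsv, np.array([20, 80, 80]), np.array([35, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    hits = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        perimeter = cv2.arcLength(cnt, True)
        if area < min_area or perimeter == 0:
            continue
        # Circularity is 1.0 for a perfect circle, smaller for other shapes.
        circularity = 4.0 * np.pi * area / (perimeter ** 2)
        if circularity >= min_circularity:
            (x, y), r = cv2.minEnclosingCircle(cnt)
            hits.append((int(x), int(y), int(r)))
    return hits
```

Run against video of a driving range and a sunset, a filter like this returns both golf balls and the sun, which is exactly the ambiguity Trease describes.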

Trease also works on training the algorithms to recognize events in video by focusing on the specific signature each one creates. For instance, computer programs could recognize security risks like cars parking in restricted areas or making U-turns.
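
One way to picture an event "signature" is the U-turn example: once a vehicle has been tracked, the event shows up as a near-180-degree reversal in heading along its trajectory. The sketch below flags that reversal from a list of (x, y) positions; the trajectory format and the 150-degree threshold are illustrative assumptions, not the article's actual detection criteria.

```python
# Minimal sketch of signature-based event detection: flag U-turns from a track.
import numpy as np

def net_heading_change(track: np.ndarray) -> float:
    """Total signed change in heading (degrees) along an N x 2 (x, y) track."""
    deltas = np.diff(track, axis=0)                    # successive displacement vectors
    headings = np.arctan2(deltas[:, 1], deltas[:, 0])  # heading of each segment
    turns = np.diff(headings)
    # Wrap each turn into (-pi, pi] so crossing the +/-180 degree seam
    # is not mistaken for a large turn.
    turns = (turns + np.pi) % (2.0 * np.pi) - np.pi
    return float(np.degrees(turns.sum()))

def is_u_turn(track: np.ndarray, threshold_deg: float = 150.0) -> bool:
    """Flag tracks whose net heading reverses by roughly 180 degrees."""
    return abs(net_heading_change(track)) >= threshold_deg

if __name__ == "__main__":
    t = np.linspace(0.0, np.pi, 50)
    u_turn_track = np.column_stack((np.cos(t), np.sin(t)))  # half circle: a U-turn
    straight_track = np.column_stack((t, np.zeros_like(t)))
    print(is_u_turn(u_turn_track), is_u_turn(straight_track))  # True False
```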


About the Author

The author is a former Krell Institute science writer.
