
Google Talk Video on Android Stabilized with SRI Technology: What Comes Next?

“Mobile video is shaky by definition,” says Norman Winarsky, VP at SRI Ventures, part of Silicon Valley-based SRI International, a nonprofit that performs sponsored R&D for governments, foundations and businesses. “A shaky image affects bandwidth and reduces the experience,” he explains.

But with the technology Google has licensed from SRI, image stabilization will no longer be a concern … at least on Android. Google is implementing the SRI tech in its Google Talk application, to deliver better video on Android 3.0+ devices. And that may be only the beginning of Google’s computer vision plans.


Image Stabilization in Google Talk

Image stabilization technology is over 20 years old, with early applications developed for defense use under DARPA sponsorship. The technology was crucial to autonomous or semi-autonomous vehicles and robotics. Someone driving a tank, for example, would get nauseated within two minutes if it weren’t for stabilization technologies, Winarsky says.

But these days, the tech has made its way into more benign, consumer-facing products, Google Talk among them. Here, the video chat application captures video from a device’s front-facing camera and compresses the data before transmission. In the compression algorithms, the amount of bandwidth used increases with the amount of motion in a scene.

By stabilizing the video, SRI’s software removes the camera shake before compression, so the compressed stream uses fewer bits. Simply put, it’s more efficient: less work, fewer resources.
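To see why a steadier frame compresses better, here is a minimal sketch of the idea — not SRI’s actual algorithm, whose details aren’t public, and with illustrative function names: estimate the global camera translation between two frames (here via phase correlation) and cancel it, so the encoder only spends bits on genuine scene motion.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the global (dy, dx) translation between two
    grayscale frames using phase correlation."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    cross /= np.abs(cross) + 1e-9           # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape                       # map peak to signed shifts
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

def stabilize(prev, curr):
    """Shift `curr` so it lines up with `prev`; what remains in the
    frame difference is true scene motion, which is cheap to encode."""
    dy, dx = estimate_shift(prev, curr)
    return np.roll(curr, (dy, dx), axis=(0, 1))
```

With a camera jittering by a few pixels per frame, the aligned frame differs from the previous one only where objects actually moved — exactly the situation motion-compensated codecs handle efficiently.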

What Could Google Do, Post-Video Stabilization?

Although SRI can’t speak for Google or discuss its future plans in this area, saying only that it “fully hopes to work beyond this app with Google,” Winarsky was happy to talk in more general terms about where computer vision technologies are headed.

Once you have stabilization down, he says, you can then work on tracking the objects that appear in the frame, following their motion and recognizing them. Head tracking, for example, was demonstrated at this year’s Google I/O, where the stabilization technology was used in conjunction with a face-tracking API (application programming interface) that will arrive in a future version of Google’s mobile operating system, Android.
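As a toy illustration of that tracking step — hypothetical code, not SRI’s or Google’s implementation — the sketch below locates a known patch inside a stabilized frame using normalized cross-correlation, one of the simplest object-tracking primitives. Real trackers (and Android’s face-tracking API) are far more sophisticated.

```python
import numpy as np

def track(template, frame):
    """Return the (y, x) position in `frame` that best matches
    `template`, scored by normalized cross-correlation."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    best, best_pos = -1.0, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((t * t).sum() * (p * p).sum()) + 1e-9
            score = (t * p).sum() / denom   # in [-1, 1]; 1 = perfect match
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Run once per frame, this turns a still-image matcher into a crude video tracker — which only works because stabilization has already removed the frame-to-frame jitter.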

Put it all together – stabilization, image tracking and image recognition – and you have “augmented reality” (AR), a term that describes technology that lets your device “see” the world in front of its camera lens and then act on that data in some way.

Google, of course, is already experimenting with AR to some extent through its “Google Goggles” application, which lets you use pictures to search the Web. Google Goggles can currently identify things like landmarks, books, art, wine and logos, and it has recently started recognizing text, too, in order to perform on-the-fly translations between languages.

Facial Recognition in Video?

There are other things that stabilization can help to enable, says Winarsky. Facial recognition, for example. Without stabilization, facial recognition isn’t practical on video; once the image is stabilized, though, the same types of algorithms that currently work on still images could be applied to moving video.

Google already uses facial recognition in its online photo-sharing service Picasa (soon to be rebranded as “Google Photos”), so it’s not a big leap to assume that Google could introduce something like that to its video applications and services someday. Facial recognition in Google Talk? YouTube? Google Goggles? Who knows?

Case in point: earlier this year, Google denied that it had a facial recognition app in development, after CNN published a report to the contrary, one that included an on-record statement from a Google employee confirming the app’s existence. Google also recently rolled out a smart update to search that lets you search using only an image. And guess what? It works for images of people, too.

So why not make the people chatting with you on video, seen through your camera lens, or appearing in online videos “Googleable” objects? There’s only one reason not to: it’s a little creepy. But creepy/awesome is the line Google likes to toe. For the company, it’s not a matter of whether something is possible; it’s only a matter of when the right time is to release it.

Stabilization, on its own, may seem like minor news, but it’s an important first step towards a future where the world itself, and all the people in it, are things you can Google just by looking at them.
