Lens uses sophisticated image recognition and tools like Google Translate to turn what it sees into meaningful data for the user. When you use Google Lens — which presenters referred to during the show as “Lensing it” — you simply point your phone at an object, such as an art print or a movie poster, and Lens will display the most relevant information it can find on Google.
Lens was originally announced at Google I/O earlier this year. At the time, it was described as a tool to help users understand what they were looking at through their phone’s camera. Today’s show gave more specific information on how Lens pulls data from the web to give users the information they need.
Google Pixel owners will get an exclusive first look at Lens later this year, and the Pixel 2 will be the first phone to ship with Lens preloaded. Presumably other phones will get the Lens app sometime after, but Google was tight-lipped about the timeframe.
This article was written by Rachel Kaser and originally appeared on The Next Web.