Google Lens has now been used more than a billion times to help connect digital information with things in the physical world, indexing those things in the same way that Google Search indexes the millions of websites online.
A demo revealed how Google Lens will soon work with restaurant menus, showing you the most popular dishes among users along with photos of the meals. You will also be able to point your phone at the bill to work out how much to tip and how to split it.
Soon you will also be able to point your phone at a recipe to see it come to life with a unique visual experience.
So what exactly is Google Lens?
Google Lens is a smart camera app that can read and actually understand the information within your images. We’re not talking about image metadata, but the places, names and even Wi-Fi passwords depicted in your photos. It can then offer up intelligent ways to act on that information.
What Google Lens can do is perhaps best answered with some examples: take a photo of a router’s password sticker and you’ll automatically connect to that network; snap a picture of an unknown plant to identify it in Google search results; photograph foreign text for a translation; take a photo of a theatre billboard to bring up Google Assistant and book tickets; or take a screenshot of a phone number to quickly bring it up in the dialler and call it.
Integration with Google Photos means even more information is available after the event: for example, Lens can match your image against online data to identify that landmark you photographed earlier.
There are also some clever context-aware editing tools: during the demo, Google showed how the app could digitally remove the links of a chain-link fence to reveal the baseball player standing behind it.