Google brings image recognition to Assistant with 'Lens'
Google has announced Google Lens, a new service that 'understands' what the user is looking at and responds with suggestions and other information. Lens is coming first to Google Assistant and Google Photos.
Google announced Google Lens at its Google I/O 2017 conference. The company's CEO, Sundar Pichai, gave the example of a user pointing their camera at a flower, after which the Assistant identifies what kind of flower it is and offers options to find nearby flower shops. Lens can also understand text; Google's example here was connecting to a router by pointing the camera at a label showing the SSID and network password.
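To make the text example concrete, here is a minimal sketch of the kind of OCR step such a feature implies. It is not Google's pipeline: it uses the open-source Tesseract engine via pytesseract, and the file name and label format ("SSID: …", "Password: …") are assumptions for illustration.

```python
# Illustrative only: not Google's text-recognition pipeline. A minimal OCR
# sketch using the open-source Tesseract engine via pytesseract
# (requires a local Tesseract install).
import re
import pytesseract
from PIL import Image

# "router_label.jpg" is a hypothetical photo of a router's sticker
text = pytesseract.image_to_string(Image.open("router_label.jpg"))

# Assumes the label prints fields like "SSID: MyNetwork" and "Password: hunter2"
ssid = re.search(r"SSID[:\s]+(\S+)", text, re.IGNORECASE)
password = re.search(r"Password[:\s]+(\S+)", text, re.IGNORECASE)

if ssid and password:
    print(f"Network: {ssid.group(1)}, password: {password.group(1)}")
```

A real implementation would go further, of course, feeding the extracted credentials into the OS Wi-Fi API rather than printing them.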
Google Lens can also use the user's location: by pointing the camera at a restaurant, for example, the user can get more information about the place. According to Google, the service is made possible by advances in image recognition; the company claims its image-recognition algorithms now have a lower error rate than humans. In the near future, this should also make it possible to automatically remove objects in front of a photo's subject, such as a fence, producing an image as if the fence had never been there.
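As an illustration of what such image recognition involves, the sketch below classifies a photo with an off-the-shelf pretrained model (torchvision's ResNet-50). This is not Lens itself; the model, the file name, and the ImageNet label set are stand-ins for whatever Google uses internally.

```python
# Illustrative only: generic image classification with a pretrained model,
# showing the kind of recognition step the article describes.
import torch
from torchvision.models import resnet50, ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()               # matching input pipeline
img = Image.open("flower.jpg").convert("RGB")   # hypothetical input photo
batch = preprocess(img).unsqueeze(0)            # add batch dimension

with torch.no_grad():
    logits = model(batch)
probs = torch.softmax(logits[0], dim=0)

# Report the five most likely ImageNet classes
top5 = torch.topk(probs, 5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][idx]}: {p:.1%}")
```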
For now, Lens is coming only to Assistant and Photos, but other services should gain Lens support in the future. Google also announced that Photos now has 500 million active users, who together upload 1.2 billion photos every day, and that there are now more than 2 billion active Android devices.