Google Lens actually shows how AI can make life easier

During the keynote at the Google I/O developer conference, artificial intelligence was once again Google's defining theme and its guiding light for the future. AI is now intertwined with nearly everything Google does, and nowhere are the benefits of CEO Sundar Pichai's AI-first approach more evident than in Google Lens.

The Lens platform combines the company's most cutting-edge advances in computer vision and natural language processing with the power of Google Search. In doing so, Google makes a compelling argument for why its way of developing artificial intelligence will generate more immediately useful software than that of its chief rivals, such as Amazon and Facebook. It also gives AI's detractors an illustrative example of what the technology can do for consumers, rather than just behind-the-scenes systems like data centers and ad networks, or more limited hardware uses like smart speakers.

Lens is effectively Google's engine for seeing, understanding, and augmenting the real world. It lives in the viewfinder of Google-powered camera software like Assistant and, after an announcement at I/O this year, within the native camera of top-tier Android smartphones. For Google, anything a human can recognize is fair game for Lens. That includes objects and environments, people and animals (or even photos of animals), and any snippet of text as it appears on signs, screens, restaurant menus, and books. From there, Google draws on Search's vast knowledge base to surface useful information, such as links to buy a product or Wikipedia descriptions of famous landmarks. The goal is to give users context about their surroundings and all of the objects within them.

Image: Google

The platform, first announced at last year's I/O conference, is now being integrated directly into the Android camera on Google Pixel devices, as well as flagship phones from LG, Motorola, Xiaomi, and others. On top of that, Google announced that Lens now works in real time and can parse text as it appears in the real world. Google Lens can also recognize the style of clothing and furniture to power a recommendation engine the company calls Style Match, which is designed to help Lens users decorate their homes and put together matching outfits.

Lens, which previously existed only inside Google Assistant, is also moving beyond the Assistant, the camera, and the Google Photos app. It now helps power new features in adjacent products such as Google Maps. In a particularly revealing demonstration, Google showed how Lens can drive an augmented reality version of Street View that calls out notable locations and landmarks with visual overlays.

In a live demonstration today at I/O, I had the opportunity to try some of Google Lens' new features on an LG G7 ThinQ. The feature now works in real time, as advertised, and it was able to identify a range of different products, from shirts to books to paintings, with only a few understandable hiccups.

In one instance, for example, Google Lens thought a shoe was a Picasso painting, only because it momentarily got confused about where the objects were. Moving closer to the object I wanted it to recognize, the shoe in this case, solved the problem. Even when the camera was too close for Lens to identify the object, or when it struggled to figure out what it was, I could tap the screen and Google would offer its best guess with a short phrase like, "Is it … art?" or "This looks like a painting."

Image: Google

Most impressive is Google Lens' ability to parse text and lift it out of the real world. The groundwork here was laid by products like Google Translate, which can convert a street sign or restaurant menu written in a foreign language into your native one simply by taking a photo. Now that those advances have been refined and built into Lens, it can do this in real time with items on a dinner menu or even large chunks of text in a book.

In our demonstration, we scanned a page of Italian dishes to pull up photos of those items from Google Image Search, as well as YouTube videos on how to cook them. We could also translate the menu headings from Italian to English simply by selecting that part of the menu, an action that automatically turns the text into a searchable format. From there, you can copy and paste that text anywhere on your phone and even translate it on the fly. This is where Google Lens really shines, fusing the company's strengths across a series of products simultaneously.

Unfortunately, we weren't able to try the Style Match or Street View features that stood out during the presentation, the latter of which is a more experimental feature without a firm date for when it will actually reach consumers. Still, Google Lens is far more powerful just a year into its existence, and Google is making sure it lives on as many devices, including those made by Apple, and in as many layers of those devices as possible. For a company betting its future on AI, there are few examples of that future as compelling for everyday consumers as Lens.
