Google Lens image and text multi-search will soon be available in more languages
Tech giant Google has announced that it is making visual search more natural with multi-search, a tool for searching with images and text simultaneously. The company introduced multi-search earlier this year as a beta in the US and will now expand it to more than 70 languages in the coming months.
With the new Lens translation update, people will see translated text realistically overlaid onto the image beneath it.
The company will start rolling out “multi-search near me” in English in the US this fall. People use Google to translate text in images more than 1 billion times a month, across more than 100 languages.
“We’re taking this capability even further with ‘multi-search near me,’ enabling you to take a picture of an unfamiliar item, such as a dish or plant, then find it at a local place nearby, like a restaurant or gardening shop,” said Prabhakar Raghavan, Senior Vice President, Google Search.
“We’re now able to blend translated text into the background image thanks to a machine learning technology called Generative Adversarial Networks (GANs),” Raghavan said.
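The GAN technique Raghavan mentions can be illustrated with a toy example. The sketch below trains a tiny one-dimensional GAN: a generator learns to produce samples a discriminator cannot distinguish from real data. Every detail here (1-D Gaussian data, linear models, learning rate) is an illustrative assumption for exposition only, not Google's Lens implementation.

```python
import numpy as np

# Toy GAN: generator vs. discriminator on 1-D data.
# Illustrative assumptions throughout -- not Google's actual model.

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow in exp for large-magnitude inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30.0, 30.0)))

# Generator g(z) = gw*z + gb maps noise to a sample.
gw, gb = 1.0, 0.0
# Discriminator d(x) = sigmoid(dv*x + dc) scores how "real" a sample looks.
dv, dc = 0.1, 0.0
lr = 0.05

for step in range(3000):
    x_real = rng.normal(4.0, 1.0)          # real data drawn from N(4, 1)
    z = rng.normal()
    x_fake = gw * z + gb                   # generator's fake sample

    # Discriminator step: ascend log d(real) + log(1 - d(fake)).
    p_real = sigmoid(dv * x_real + dc)
    p_fake = sigmoid(dv * x_fake + dc)
    dv += lr * ((1.0 - p_real) * x_real - p_fake * x_fake)
    dc += lr * ((1.0 - p_real) - p_fake)

    # Generator step: ascend the non-saturating objective log d(fake).
    p_fake = sigmoid(dv * x_fake + dc)
    grad_x = (1.0 - p_fake) * dv           # derivative of log d(x) w.r.t. x
    gw += lr * grad_x * z
    gb += lr * grad_x

# After training, generated samples should cluster near the real mean (~4).
samples = gw * rng.normal(size=1000) + gb
```

In Lens translation, the same adversarial setup operates on images rather than numbers: the generator in-paints the background behind erased source text so the translated overlay blends in naturally.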
“Just as live traffic in navigation made Google Maps dramatically more helpful, we’re making another significant advancement in mapping by bringing helpful insights — like weather and how busy a place is — to life with an immersive view in Google Maps,” the company announced.
[With Inputs from IANS]