We have all had moments when we were trying to look something up but just couldn't come up with the right words to type into the Google Search bar. To address this, Google has introduced a new Multisearch feature in Lens. The ability, which was initially announced last year, lets you search with both images and text. Here's how it works.
Google Lens' Multisearch feature lets you search for something you see by uploading its picture along with an accompanying text query, so you can find an answer even when you can't quite put the question into words.
The company mentions use cases such as fashion and home decor and suggests that the feature works "best" with shopping searches. You can also attach an image of an object and get an answer to a related query; Google's example involves a picture of a rosemary plant paired with the question of how to take care of it.
This feature is the result of advancements in AI, although it isn't based on the Multitask Unified Model (MUM). For those who don't know, MUM is Google's AI model that can understand information across both images and text, allowing for enhanced searches that start from a picture of an object. Google has detailed the model as well and has suggested that the feature will soon roll out to users.
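To give a rough sense of how an image and a short text query can be fused into a single search, here is a minimal sketch that uses the open-source CLIP model from Hugging Face as a stand-in. This is purely illustrative and is not Google's implementation; the model name, the multisearch_embedding helper, and the simple averaging step are all assumptions.

```python
# Conceptual sketch only: combine an image and a text refinement into one
# search vector, using open-source CLIP as a stand-in for the retrieval model.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def multisearch_embedding(image_path: str, query: str) -> torch.Tensor:
    """Blend an image embedding with a text embedding into one search vector."""
    image = Image.open(image_path)
    inputs = processor(text=[query], images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        image_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
        text_emb = model.get_text_features(
            input_ids=inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
        )
    # Normalise each modality, then take a simple average. A production system
    # would likely use a learned fusion rather than this naive mean.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    return (image_emb + text_emb) / 2

# Example: a photo of a dress plus the refinement "in blue".
# vector = multisearch_embedding("dress.jpg", "in blue")
# The resulting vector would then be matched against an index of product embeddings.
```

The idea the sketch captures is that the picture supplies the "what" while the text supplies the refinement, and the two are matched against an index together rather than as separate searches.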