Google has launched multisearch for Lens, which allows you to search with an image and accompanying text to help refine results. For example, Google says that if you see an orange dress you like but would prefer to find a store selling it in green, you can snap a picture of the orange dress and type ‘green’ in the text field to pull up results for similar dresses in green. The tool has a wide range of uses, but overall Google wants to make your results more relevant.
Other examples that Google suggests are:
- Snap a photo of your dining set and add the query “coffee table” to find a matching table.
- Take a picture of your rosemary plant and add the query “care instructions”.
To use the new multisearch feature, open Google Lens on Android or iOS, tap the Lens camera icon, and either search a screenshot or snap a picture. You can then swipe up and tap ‘+ Add to your search’, which lets you add text to refine your search query. The company said the feature is still in beta, so it may not be perfect.
Commenting on the feature, Google said:
“All this is made possible by our latest advancements in artificial intelligence, which is making it easier to understand the world around you in more natural and intuitive ways. We’re also exploring ways in which this feature might be enhanced by MUM – our latest AI model in Search – to improve results for all the questions you could imagine asking.”
As the feature is in beta, it’s currently available only in English in the United States, and multisearch works best for shopping searches. The company didn’t say when the feature will arrive in other regions or languages; hopefully, it won’t take too long.