Google today announced major improvements to how people can search using AI. The company showcased a set of new capabilities that will change the search experience for its users.
Google has added video understanding to Google Lens, allowing users to ask questions about objects in a video. In short, by recording a video through the Lens app and asking a question aloud, users let the AI analyze both the footage and the question together to surface relevant information.
Voice interaction is also now supported directly within Lens: users simply hold down the shutter button while capturing a photo or video and ask their question out loud.
As Google describes it: "Open Lens in the Google app and hold down the shutter button to record while asking your question out loud, like, 'why are they swimming together?' Our systems will make sense of the video and your question together to produce an AI Overview, along with helpful resources from across the web."
Shopping capabilities have also been upgraded. A visual product search through Lens now opens a detailed page with reviews, pricing from retailers, and purchase options. The company says the feature has since rolled out beyond the Google app to over 150 million Android devices.
Google is also using AI to better organize search results. For queries like recipes, full pages of relevant content from across the web will be surfaced in a personalized format. In early testing, people found these AI-organized pages more helpful for open-ended queries.
Links to supporting websites are also increasingly included directly within the text summaries of AI Overviews, pointing users to other perspectives on the web. Some AI Overviews may also soon begin displaying advertisements that highlight relevant products and services.