In a new wave of updates aimed at revolutionizing its search engine, Google has integrated advanced artificial intelligence (AI) features that allow users to interact with visual content using voice commands. Announced in early October 2024, the latest update enables users to ask questions about images or videos they encounter, with responses generated as AI summaries. This marks a continuation of Google’s efforts to enhance search with AI, following the rollout of “AI Overviews” earlier in 2024.
One of the key advancements is the ability to query visual content conversationally. Through Google Lens, users can now ask questions about objects in real time via their camera, or about moving subjects in video clips—such as fish swimming in an aquarium. The feature is currently available to users testing the system in Google Labs.
These updates reflect Google’s broader goal of making search more intuitive and accessible. Rajan Patel, Google’s vice president of search engineering, noted that the company’s objective is to make search “simpler and more effortless,” allowing users to search in various ways—whether through text, voice, or visuals.
The rollout of AI has not been without issues, however. In previous updates, Google’s AI-generated summaries occasionally produced misleading or incorrect information, raising concerns about the accuracy of search results. To address this, Google is adding more external links within AI summaries to improve transparency and reliability—a response to past incidents in which its AI offered erroneous advice, such as suggesting glue as a pizza topping.
This latest phase of Google’s AI evolution underscores the company’s commitment to staying competitive against emerging “answer engines” such as ChatGPT and Perplexity, which are attracting attention for their conversational AI search models.