Google's newly launched AI-powered search feature, AI Overviews, is already drawing criticism after returning a wildly inaccurate answer: it stated that the current year is not 2025.
The error quickly went viral, with users sharing screenshots on social media that showed the AI confidently making a mistake about something as fundamental as the current date. For a company like Google—which has spent years building its reputation on delivering fast and accurate information—the blunder is more than embarrassing. It raises serious questions about the reliability of AI-generated content and whether such features are truly ready for prime time.

AI Overviews are designed to appear at the top of certain search results, offering a short summary compiled from multiple sources. While the idea is to save users time, the implementation relies heavily on large language models, which are prone to a well-known failure mode: hallucinations. These are confident but incorrect responses generated by AI systems from learned patterns rather than verified facts.
In this case, the hallucination wasn’t buried deep in a complex query—it was a basic, factual failure that undermines trust in the entire system.
Google has acknowledged that the feature may not always produce accurate information and has advised users to verify content using the citations provided in the summary. But that disclaimer may not be enough to reassure users who have grown accustomed to Google being a highly reliable search engine. When a platform that millions rely on can’t tell what year it is, the perception of accuracy takes a serious hit.
This incident also illustrates a broader problem facing the tech industry: the rush to integrate AI into everyday services before the technology is truly robust. While AI tools like chatbots and summarizers have shown impressive capabilities, they also make mistakes—sometimes very basic ones. In a search context, where users expect fast and correct answers, these lapses can do real damage.
Moreover, the mistake comes at a time when competition in AI-powered search is heating up, with Microsoft, OpenAI, and other players experimenting with new models and integrations. Google’s position as a trusted leader in search could be threatened if such errors continue to surface.
In the end, this isn’t just about one wrong answer. It’s a warning about overreliance on AI, and a reminder that even the most advanced systems still need human oversight. Until hallucinations can be effectively minimized, users—and tech companies—will need to approach AI summaries with caution.