For years, AI seemed like a series of flashy gimmicks: clumsy chatbots, awkward assistants, and quirky autocomplete features that felt more pesky than helpful. But recently, the conversation has shifted. Leading voices in AI hint at something revolutionary just around the corner: machines smarter than Nobel Prize winners, digital superintelligence reshaping the 2030s, and AI systems performing feats once believed to require true understanding.
I recently came across a line of argument that large language models (LLMs), like ChatGPT and others, have no inner life or conscious experience, yet seem to know what they’re talking about. This paradox has prompted everyone from programmers to neuroscientists to reexamine what we mean by “thinking” and whether AI might be crossing some fundamental cognitive threshold.
From code helpers to quasi-geniuses: My evolution with AI
When most people think of everyday AI, they picture tools like Siri or Zoom’s canned suggestions—handy but rarely profound. For a while, I sympathized with the skeptics who saw AI as just clever wordplay without real intelligence behind it. But after integrating AI tools into my programming work, everything changed.
How convincing does the illusion of understanding have to be before you stop calling it an illusion?
AI excelled in ways I hadn’t expected. It quickly parsed thousands of lines of code, caught subtle bugs, and built features that would once have taken me weeks, finishing them overnight. I was even able to build iOS apps without prior experience, just by collaborating with AI. It felt like working with a “country of geniuses,” echoing predictions from AI leaders about the near future.
What does it mean to really understand?
One striking story involves a friend who used GPT-4o to fix a complicated playground sprinkler system by simply uploading a photo and describing the problem. The AI identified the likely controls in the system, leading to a real fix. Was this just statistical guesswork, or something that looked and felt like understanding?
Neuroscientists like Doris Tsao argue that AI challenges how we define thought itself. Decades of brain research, combined with AI developments, suggest that intelligence might boil down to predictive pattern recognition and the compression of experience: simplifying complex data into manageable, reusable chunks of knowledge.
Understanding—having a grasp of what’s going on—is an underappreciated kind of thinking, because it’s mostly unconscious.
Large language models are trained to predict the next word across huge text datasets, adjusting their internal parameters through a process called gradient descent until they compress the world’s information so well that they can generate responses that appear deeply insightful. Some argue this is the very essence of intelligence: finding the “line of best fit” in the chaos of experience.
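To make that concrete, here is a toy sketch in Python: a character-level bigram model standing in for a real transformer (which it is emphatically not), trained by gradient descent to predict the next character of a short string. Every name and size here is invented for illustration.

```python
# Toy next-token prediction trained by gradient descent.
# Real LLMs use deep transformer networks over tokens; this bigram
# model is a minimal sketch of the same objective: predict what comes
# next, then nudge the parameters to reduce the prediction error.
import numpy as np

text = "the cat sat on the mat. the cat ate."
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
V = len(chars)

# Training pairs: each character is asked to predict the one after it.
xs = np.array([stoi[c] for c in text[:-1]])
ys = np.array([stoi[c] for c in text[1:]])

W = np.zeros((V, V))  # W[i, j] is the logit that char j follows char i
lr = 0.5

for step in range(300):
    logits = W[xs]                              # (N, V) scores
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)   # softmax over next chars
    loss = -np.log(probs[np.arange(len(ys)), ys]).mean()

    # Cross-entropy gradient w.r.t. the logits, scattered back into W.
    grad = probs
    grad[np.arange(len(ys)), ys] -= 1
    grad /= len(ys)
    gW = np.zeros_like(W)
    np.add.at(gW, xs, grad)

    W -= lr * gW                                # one gradient-descent step

print(f"final loss: {loss:.3f}")
after_h = np.exp(W[stoi["h"]])
after_h /= after_h.sum()
best = max(chars, key=lambda c: after_h[stoi[c]])
print(f"most likely char after 'h': {best!r}")  # 'e', as in 'the'
```

Even at this toy scale, the mechanism is the one LLMs scale up: drive the prediction error down, and the parameters end up encoding a compressed statistical model of the training text.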
The brain, AI, and the high-dimensional space of thought
AI’s architecture owes much to how we understand the human brain: a network of neurons firing in complex patterns, with thoughts as coordinates in a high-dimensional space. Pentti Kanerva’s theory of sparse distributed memory describes this mathematically, showing how memories and perceptions cluster and connect.
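Kanerva’s idea is concrete enough to sketch in a few dozen lines. The toy sparse distributed memory below uses made-up sizes (a real SDM lives in hundreds or thousands of dimensions, where random binary vectors are almost always far apart): an address activates every nearby “hard location,” a write smears the pattern across all of them, and a read can recover it even from a corrupted cue.

```python
# A toy sparse distributed memory (SDM), after Pentti Kanerva.
# Sizes are illustrative, not tuned.
import numpy as np

rng = np.random.default_rng(0)
DIM = 256         # dimensionality of binary addresses and data
LOCATIONS = 1000  # number of random "hard locations"
RADIUS = 112      # Hamming radius within which a location activates

hard_addresses = rng.integers(0, 2, size=(LOCATIONS, DIM))
counters = np.zeros((LOCATIONS, DIM), dtype=int)

def activated(address):
    """Indices of hard locations within Hamming distance RADIUS."""
    dists = (hard_addresses != address).sum(axis=1)
    return np.flatnonzero(dists <= RADIUS)

def write(address, data):
    # Each activated location's counters move toward the stored bits.
    counters[activated(address)] += 2 * data - 1  # +1 for 1-bits, -1 for 0-bits

def read(address):
    # Sum counters across activated locations and threshold at zero.
    return (counters[activated(address)].sum(axis=0) > 0).astype(int)

# Store a pattern at its own address, then recall it from a noisy cue.
pattern = rng.integers(0, 2, DIM)
write(pattern, pattern)                 # autoassociative storage

cue = pattern.copy()
flipped = rng.choice(DIM, size=20, replace=False)
cue[flipped] ^= 1                       # corrupt 20 of 256 bits

recalled = read(cue)
print("bits recovered:", int((recalled == pattern).sum()), "/", DIM)
```

The Hamming radius is what makes the memory “distributed”: every pattern is smeared across many locations, so recall degrades gracefully under noise instead of failing outright.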

Today’s AI uses similar principles: words and images become vectors in thousands of dimensions, capturing nuanced meanings and relationships. For example, the model can solve analogies mathematically, like transforming “Paris” minus “France” plus “Italy” to yield “Rome.” These behaviors hint at the AI engaging in a form of “seeing as” that cognitive scientist Douglas Hofstadter calls the essence of thinking.
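That analogy trick is easy to demystify with a few lines of code. The embeddings below are tiny hand-built stand-ins (real models learn hundreds of dimensions from data); the mechanism is just vector subtraction and addition followed by a nearest-neighbor lookup.

```python
# Analogy arithmetic over word vectors, with hypothetical 4-d embeddings.
# Dimensions loosely encode (is-country, is-capital, France-ness, Italy-ness).
import numpy as np

emb = {
    "France": np.array([1.0, 0.0, 1.0, 0.0]),
    "Paris":  np.array([0.0, 1.0, 1.0, 0.0]),
    "Italy":  np.array([1.0, 0.0, 0.0, 1.0]),
    "Rome":   np.array([0.0, 1.0, 0.0, 1.0]),
    "Berlin": np.array([0.0, 1.0, 0.0, 0.0]),
}

def nearest(vec, exclude=()):
    """Word whose embedding has the highest cosine similarity to vec."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in emb if w not in exclude),
               key=lambda w: cos(vec, emb[w]))

query = emb["Paris"] - emb["France"] + emb["Italy"]
print(nearest(query, exclude={"Paris", "France", "Italy"}))  # -> Rome
```

Excluding the query words from the search follows the convention of word2vec-style analogy tests; without it, the nearest neighbor is often one of the inputs themselves.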
While AI models are obviously different from human brains, exciting research reveals both convergences and fundamental gaps. AI doesn’t fully grasp or plan the way we do; it can hallucinate facts and miss common-sense reasoning. Yet it outperforms us on some tasks and even reveals new ways to test cognitive theories.
Where do we go from here? Skepticism, hope, and humility
Despite the hype and rapid advances, there’s reason to be cautious. Progress will face bottlenecks: data scarcity, computing limits, and the challenge of making AI learn as flexibly and efficiently as humans do. Humans learn through embodied experience, emotion, curiosity, and continuous adaptation, none of which AI can currently replicate.
More than a technical hurdle, this is a philosophical and ethical frontier. Some experts warn that understanding how the brain works might unleash transformations beyond our control. Others fear the social implications: the energy cost of AI, its impact on workers, and the risks of mistaking statistical predictions for genuine wisdom.
Yet, the prospect that AI systems do some form of thinking – even if alien and unconscious – forces us to reconsider what’s unique about human minds. Maybe intelligence is less about inner monologues and more about recognizing patterns and making predictions. The ongoing dialogue between neuroscience and AI may finally illuminate one of humanity’s oldest mysteries: What is thought?
While AI still has far to go, the past decade’s breakthroughs suggest we’re witnessing the dawn of a new era, one where machines do more than crunch numbers; they might just be beginning to think in their own strange way.
Key takeaways
- Large language models excel by compressing vast data and making predictive guesses, which can produce outputs that feel like understanding.
- AI architectures share surprising parallels with brain theories, especially in representing concepts within high-dimensional vector spaces.
- True human-like learning involves embodied experience, emotion, and continuous adaptation, challenges that still lie ahead for AI development.
AI’s progress is both humbling and exhilarating. It invites us to question what “thinking” really means and to approach the future with a mix of excitement and caution. As the boundary between human and machine cognition blurs, one thing is clear: we are just beginning to glimpse the complex dance of intelligence.