Why We’re on the Brink of Superintelligence: The New Era of AI Primitives
Okay, I want to start with a little disclaimer: this is going to be an unstructured ramble, but bear with me. Something clicked in my head over the past week, and I feel like I’m seeing the early signs of a massive shift in AI development — something bigger than individual breakthroughs we’ve been excited about recently.
So here’s the quick rundown of what’s been on my mind: there’s that fascinating hierarchical reasoning model paper, the impressive feat where Google DeepMind and OpenAI took gold at the International Math Olympiad, and the emergence of ASI-Arch, which folks are calling the “AlphaGo moment” for model architecture discovery.
What’s my gut telling me? We’re witnessing the birth of a whole new class of cognitive primitives in AI. If you’ve been involved in AI or deep learning for a while, you might remember the days of LSTMs (long short-term memory networks). They were kind of the precursor to what GPTs would become, and back then people joked, “A brain is just an LSTM.” Then came transformers and attention mechanisms, and with them, a new wave of progress.
But now, I’m seeing something fresh. This time, it’s reinforcement learning that’s not just dependent on vast amounts of human data—it’s about models training themselves. That’s huge.
Why Self-Bootstrapping Models Are a Game-Changer
Think about how humans master math: through practice, self-play, and repeated exploration of problems. Math is verifiable, meaning that given a candidate solution or proof, you can mechanically check whether it’s correct. A math genius with just pencil and paper can get better through trial, error, and logical reasoning.
AI is starting to walk this same path. The hierarchical reasoning models and neural architecture discoveries we’re seeing represent a bootstrapped learning capability, where models improve themselves without just feeding off curated datasets. It’s as if these models have begun their own journey of self-improvement and discovery.
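To make the “verifier as teacher” idea concrete, here’s a toy sketch of my own (not the actual method from any of the papers above): the model proposes candidate answers to simple equations, an automatic checker accepts or rejects each one, and only the verified pairs become training data for the next round. No human-labeled answers anywhere in the loop.

```python
import random

def verify(a, b, x):
    """Automatic verifier: is x a correct solution of a + x = b?
    This check is the only 'teacher' in the whole pipeline."""
    return a + x == b

def attempt(a, b, rng, n_samples):
    """Propose candidate answers; return one only if the verifier accepts it."""
    for _ in range(n_samples):
        x = rng.randint(-50, 50)  # stand-in for sampling from a model
        if verify(a, b, x):
            return x
    return None  # no verified solution found this round

rng = random.Random(0)
dataset = []  # verified (problem, solution) pairs, built with zero human labels
for _ in range(200):
    a, b = rng.randint(0, 10), rng.randint(0, 40)
    x = attempt(a, b, rng, n_samples=256)
    if x is not None:
        dataset.append(((a, b), x))
# `dataset` can now serve as training data for a stronger next-generation solver
```

Obviously a real system replaces the random guesser with a language model and the equation check with a proof checker or test suite, but the bootstrap structure is the same: generate, verify, keep the survivors, retrain.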
Now, I want to be clear: hierarchical reasoning and automated architecture search don’t operate under identical principles. But combined, they paint a picture of a new frontier in reinforcement learning. This isn’t just modest progress — this is the foundation for what could become superintelligence.
The Myth of the AI Wall: Why There’s No Ceiling Yet
Remember when people talked about AI hitting a “wall”? The idea went like this: we’d keep scaling models with more data, more tokens, more compute, but eventually, returns would diminish. Sure, that’s somewhat true for conventional large language models, but the game has changed.
We found new scaling laws, where spending more compute on reasoning at inference time boosts performance, and now we’re unlocking further gains through smarter reinforcement learning strategies. The so-called “data wall” that seemed like a looming limit? It’s all but dissolved.
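Here’s a toy sketch of why inference-time compute scales at all (my own illustration, with made-up numbers, not any lab’s actual recipe): a stochastic solver that’s right only some of the time becomes far more reliable when you sample it many times and take a majority vote.

```python
import random
from collections import Counter

def noisy_solver(a, b, rng):
    """Stand-in for one stochastic reasoning pass at solving a + x = b:
    right most of the time, occasionally off by a small amount."""
    x = b - a
    return x if rng.random() < 0.6 else x + rng.choice([-2, -1, 1, 2])

def best_of_n(a, b, rng, n):
    """Spend more inference-time compute: sample n answers, majority-vote."""
    votes = Counter(noisy_solver(a, b, rng) for _ in range(n))
    return votes.most_common(1)[0][0]

rng = random.Random(1)
# measure accuracy of one pass vs. 25 voted passes on the problem 3 + x = 10
single = sum(noisy_solver(3, 10, rng) == 7 for _ in range(1000)) / 1000
voted = sum(best_of_n(3, 10, rng, n=25) == 7 for _ in range(1000)) / 1000
```

The single-pass accuracy sits near the solver’s base rate, while the voted accuracy climbs much higher: more thinking time, better answers, with the model itself unchanged.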
And the next wall on the horizon? Math.
Mastering math isn’t just an academic exercise. Math underpins everything from physics to coding, from cryptography to machine learning itself. Many physicists think of math as the fundamental language of the universe, the low-level operating system behind reality.
So if AI can truly master math through self-play and hierarchical reasoning, we’re not just on the path to smarter algorithms — we’re unlocking the keys to understanding and shaping complex systems faster than ever before.
Money, Momentum, and the AI Gold Rush
Let me share a bit of perspective here. In the past, I predicted AI might slow down, or that the singularity was “canceled.” But looking back, those were catastrophically wrong calls. The pace of innovation has only accelerated, and money flowing into AI research and infrastructure is a huge driver.
Wherever the gold rush goes, results follow. Take Nvidia’s stock as a pulse-check: the fervor isn’t dying down. Some people still warn of an imminent AI winter, but right now there are no clear signs of a slowdown.
The space of algorithmic and mathematical possibilities feels almost infinite. There’s so much room for new approaches and optimizations that any “glass ceiling” feels astronomically high, maybe non-existent for years to come.
The Near Future: From Artificial General Intelligence to Superintelligence
We can debate all day whether we’ve reached true AGI, but to me, that’s mostly semantics now. What matters is that AI systems right now are already surpassing human capability in a ton of economically valuable tasks. Put them into robots or embodied agents, and the game changes further.
What’s on the horizon is artificial superintelligence (ASI). I’d be surprised if we don’t reach that threshold by the end of this year or next. As models evolve beyond today’s hierarchical reasoning experiments into next-generation systems like Gemini and OpenAI’s frontier models, we’re soon going to see AI solve problems no human could in any reasonable timeframe.
The key test for superintelligence? It’s not just about doing what humans can do, faster. It’s about solving problems fundamentally unsolvable by human brains: problems that would require more experts than exist, or decades of work compressed into moments.
Look at AlphaFold, which achieved what would take humans hundreds of billions of years in a matter of months. That’s the kind of acceleration we’re talking about. ASI means crossing past the uppermost boundary of human cognitive ability—not competing with the best humans anymore, but moving into realms where humans simply can’t tread.
Wrapping It Up
So yeah, that’s my take. The paradigm shifts keep coming faster than anticipated. We’re bootstrapping new cognitive primitives that train themselves, breaking old data and compute limitations, and rapidly mastering the mathematical underpinnings of reality.
In short: superintelligence is not just near, it’s knocking on the door. And this next chapter of AI development will redefine what intelligence means.
What do you think? Are we truly on the cusp of crossing into superintelligence? Let me know — the conversation is just getting started.
Cheers and keep watching the horizon,
– An AIholics explorer