It’s becoming clear that AI is no longer just a tech curiosity; it’s firmly rooting itself in how leaders of entire countries operate. I came across some interesting perspectives when Sweden’s prime minister, Ulf Kristersson, admitted he regularly taps AI tools, including ChatGPT and the French service LeChat, if for nothing else than a second opinion.
Kristersson explained that these AI tools help him ask different questions, like “What have others done? Should we think the complete opposite?” which I thought was a very human way to put it—using AI as a sounding board rather than a crystal ball. It reflects a subtle but profound shift in leadership dynamics, where machine-generated insights blend with human judgment.
“I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions.”
However, this candid approach hasn’t come without controversy. Tech experts and commentators have raised some serious concerns over political reliance on AI. One editorial in Sweden accused the prime minister of falling for what they called the “oligarchs’ AI psychosis,” highlighting fears that AI might be wielded as an unquestioned oracle by those in power.
Experts like Simone Fischer-Hübner have warned about the risks of using AI tools such as ChatGPT for sensitive information, emphasizing the need for caution. And Virginia Dignum, a professor specializing in responsible AI, offered a particularly pointed warning: AI does not generate meaningful political ideas but rather mirrors the biases of its creators. In her view, the more we lean on AI for seemingly simple things, the greater the risk of overconfidence in its outputs, a “slippery slope” that could affect governance.
“We didn’t vote for ChatGPT” — a powerful reminder that AI can’t replace human accountability in politics.
Kristersson’s team insists that the prime minister does not feed security-sensitive information into AI and uses these tools as a rough gauge of opinion rather than an advisory board. That distinction struck me as critical because it shows a tentative balancing act: embracing new technologies while still acknowledging their limits.
This debate underscores a broader dilemma we’re facing globally: as AI becomes more embedded in decision-making—whether in politics, business, or daily life—how do we ensure it supports rather than supplants human wisdom? How do we keep those using AI tools accountable and cautious?
Key takeaways from Sweden’s AI experiment in politics
- AI as a sounding board, not a decision-maker: Leaders like Kristersson use AI to cross-check ideas rather than dictate decisions.
- Risks of overreliance: Experts warn AI reflects creator biases and cannot replace nuanced political judgment.
- Security and transparency matter: Using AI with sensitive information remains a major concern and requires clear boundaries.
Watching this unfold, I’m reminded that AI’s promise comes wrapped in responsibility. It’s tempting to treat AI as a magic fix, especially when running governments where the stakes are high. But as Sweden’s experience shows, the tech is best wielded as a tool for reflection—not a shortcut to decisions. And above all, the public’s trust hinges on leaders remembering that they, not algorithms, hold the mandate.