Artificial intelligence is no longer just tech jargon or a far-off sci-fi idea—it’s already tangled up in the way we connect, trust, and communicate. I recently came across some insightful discussions by Yasmin Green, CEO of Google’s Jigsaw, and Gillian Tett, the anthropologist and Provost of King’s College, Cambridge, who unpack how AI is ushering in a new era of trust and social interaction. Their work challenges the notion that we live in a post-trust world and instead suggests trust is shifting shape in fascinating ways.
From eye-level trust to distributed trust — and now to AI
Trust, as these experts reveal, has evolved alongside human society. Back when we lived in small groups, trust was face-to-face—what they call “eye-level trust.” As societies grew, vertical trust emerged: trust in leaders and institutions whom we couldn’t personally know. Then came the internet, shaking things up with something called distributed trust. Platforms like Airbnb and Uber let us trust strangers across the globe—hopping into a rideshare or sleeping in someone else’s home.
This era of distributed trust empowered a sort of peer-to-peer faith that was unprecedented. But AI, especially conversational AI like chatbots, is now creating yet another level of trust—an era where we interact with 24/7, personalized, private digital entities that mediate and shape our social connections.
AI as master, mate, mirror, or moderator
One of the most eye-opening ideas is how AI isn’t just something to trust or distrust in isolation. Instead, AI can play various roles in our social trust ecosystem. Yasmin describes AI’s role as potentially being a master (bossing us around), a mate (a companion), a mirror (reflecting ourselves back to us), or a moderator (facilitating conversations). This means AI could actually boost our ability to trust other humans, or at least navigate conversations and conflicts better.
An example that caught my attention was a project in Bowling Green, Kentucky. The town, set to double in size, used AI-powered virtual town halls to let 8,000 people participate—compared to fewer than 10 in traditional meetings. AI sifted through a million opinions and thousands of policy proposals to help leaders find common ground, with more than half the proposals showing near-universal agreement. This suggests AI could help bridge divides and make large-scale democratic conversations manageable.
What Gen Z teaches us about truth and trust
The discussion also surfaced a generational twist. Unlike older generations who seek truth mainly through expert validation, Gen Z gravitates toward authenticity and social affirmation. They trust individual journalists or influencers more than institutions, valuing voices that feel real and relatable over traditional authority. This creates tension between vertical trust and horizontal, peer-to-peer trust based on personal connection.
While this might alarm some institutions, it also means there’s a more democratic and diverse conversation happening—messy and chaotic as it may be. AI could play a role here, acting as a neutral party that helps us navigate these social complexities.
AI’s promise and peril: fighting misinformation and tribalism
Yes, AI has dangers, including amplifying misinformation and enabling tribal echo chambers. But there are also remarkable opportunities. I came across an MIT study in which a brief back-and-forth with an AI chatbot reduced the intensity of people’s conspiracy beliefs by around 20%. Participants found the AI a safe, neutral space to question and understand their views—something they might not get from humans with agendas.
This offers a glimpse of AI’s potential as a trust builder rather than a trust eroder. Of course, technology’s impact is shaped by human behavior and social structures. Anthropologically, the internet lets us craft identities and choose tribes like never before, intensifying both connection and division. Artificial intelligence, then, is part of a longer current of evolving digital culture, not an isolated change.
AI may become a powerful tool to amplify our better nature—as “augmented intelligence” that bridges divides rather than deepening them.
Key takeaways for navigating the AI trust frontier
- Trust is not dead—it’s evolving. From face-to-face to distributed and now AI-mediated trust, we must understand these changing dynamics.
- AI plays multiple social roles. Thinking of AI as master, mate, mirror, or moderator helps us see how it can support human connection.
- Generational shifts matter. Authenticity and peer connection often outweigh traditional institutional trust—impacting how AI fits into our social fabric.
- AI can counter misinformation. Chatbots that engage with conspiratorial beliefs show promise as neutral interlocutors that can loosen entrenched false beliefs.
- Digital tribalism remains a challenge. AI won’t fix human tribal instincts overnight but may offer new ways to foster dialogue and understanding.
Reflecting on AI and our social future
The rapid rise of AI is shaking up how we trust, connect, and make sense of the world. This isn’t a simple story of technology replacing human bonds or creating dystopia. Instead, it’s a nuanced shift where AI tools potentially empower us with new kinds of social agility—if we use them wisely.
Rather than fearing AI as an alien “other,” maybe it’s time to start thinking of it as augmented intelligence—a complement to our humanity that, when mastered, could amplify the best parts of us: empathy, understanding, and genuine connection. Whether AI ultimately unites or divides us might depend less on the algorithms, and more on how we as people choose to engage with and govern these powerful new tools.
For anyone curious about the future of trust in our AI-infused world, these perspectives offer both caution and hope—a reminder that technology shapes us, but we also shape technology.



