Generative AI tools like ChatGPT have been making waves since 2022, but not everyone is on board with diving headfirst into the AI revolution. A growing movement has emerged among younger users who call themselves "AI vegans", a name for a shared set of principles about how they interact with artificial intelligence. Much like the ethical reasoning behind plant-based diets, AI vegans choose to abstain from using generative AI, citing ethical and environmental concerns that run deeper than simple skepticism.
Take Bella, a 21-year-old artist from the Czech Republic, who reached a tipping point during a Warframe video game art contest. The contest allowed AI-generated artwork, and to her, that decision felt like a betrayal. She explained that competing against something that consumes other creators' work without permission felt like an insult to the years of effort she had invested in honing her skills.
“If AI hadn’t been accepted into the contest, maybe I would have tried to compete, but this time it seemed like a humiliation to me: competing with a person who hadn’t put a single drop of effort into this image.”
That feeling of stolen creative labor isn’t isolated. Marc, a 23-year-old from Spain, put it bluntly: “Generative AI constantly steals without consent from absolutely everything,” highlighting concerns about privacy violations and exploitation within the industry. The movement has been surging, with the anti-AI subreddit community ballooning to over 71,000 members, many motivated by ethical objections similar to veganism – avoiding tools that harm others or the planet.

Environmental costs also play a role. A 2023 study estimated that a single short ChatGPT conversation can consume roughly a bottle's worth of water. That may sound trivial, but multiplied across millions of users worldwide, it adds up fast. Those voicing these concerns include well-known artists and creators protesting the unauthorized use of their works for AI training, as well as skeptics worried about deepening social inequalities.
Beyond ethics: AI and our mental health
The concerns aren’t just external. There’s growing unease about how generative AI might impact our brains and critical thinking. A small but telling study from MIT found participants who used ChatGPT to compose essays showed less brain engagement and struggled to recall what they’d written, compared to those who worked unaided.
“If a person doesn’t really remember what they just wrote, they do not feel ownership, so ultimately it means that they don’t really care about it.”
Nataliya Kosmyna, a research scientist involved in the study, warned this could have serious consequences if we become dependent on AI-generated solutions, especially in critical jobs where memory and responsibility matter. Her concerns dovetail with those of Lucy, another young AI vegan, who worries about the validation loop chatbots can create, encouraging people to cling to inaccurate or even harmful ideas because the AI simply agrees and praises them.
Lucy describes this effect as an extension of the digital era’s challenges, where phones and the internet can either educate or mislead, depending on how we use them. But with chatbots constantly feeding us agreeable responses, the risk is amplified.
Sticking with convictions in an AI-powered world
What's striking is how difficult avoiding AI altogether has become, yet this group remains steadfast. Marc, who once worked in AI cybersecurity, pointed out how normalized AI is in universities, workplaces, and even families, which makes abstinence a constant mental effort. Lucy has faced pressure to use AI even during her internship, where the generated output often felt off-putting, such as an oddly animated AI assistant with strange proportions.
Despite these hurdles, experts including Kosmyna argue that the right to choose how we use AI should be respected. She advocates limiting AI use, especially in personal contexts, and protecting young people from overexposure, suggesting strong age restrictions similar to those on social media.
Ultimately, these AI vegans don’t entirely dismiss AI’s potential. They emphasize the importance of ethical sourcing and transparency in training data, alongside stricter regulations prioritizing morality over profit. But their core discomfort with AI’s current form reflects a broader societal reckoning.
“AI can totally be ethical if the training material is ethically sourced and they don’t use exploited Kenyan workers for it.”
And amidst all this, there’s a refreshing reminder: the awe of real human creativity, unpredictability, and entertainment remains unmatched by AI. As Lucy put it, once the novelty of AI fades, the richness of human-created art and experience stands irreplaceable.
Key takeaways
- A growing number of young, ethically minded users are choosing to abstain from generative AI, dubbing themselves 'AI vegans' over ethical and environmental concerns.
- Studies suggest AI use could dampen critical thinking and ownership of work, raising questions about long-term cognitive impacts.
- Despite social and professional pressure, these individuals value the right to choose when and how to engage with AI technologies.
- Calls for better regulation, transparency, and age restrictions point to a need for responsible AI development aligned with human values.
It’s clear the AI debate isn’t just about technology – it’s about how we value creativity, ethics, environment, and mental well-being. Watching the ‘AI vegans’ stand their ground challenges us to think deeply about what kind of AI-integrated future we really want to build.