Gen Z's AI Dilemma: Hooked on the Tool They Love to Hate
Gen Z is developing a toxic relationship with AI—loves the convenience, hates the vibes. A fresh Gallup survey that dropped this week shows America's youngest workers are stuck in a situationship with generative AI they can't quit but definitely don't trust. The poll, fielded February 24 through March 4 and commissioned by the Walton Family Foundation, GSV Ventures, and Gallup, snagged 1,572 Americans aged 14 to 29 for their hot takes.
About 51% still hit up generative AI at least weekly—up four percentage points from last year. Usage is climbing, but the love affair is dead. Excitement tanked 14 percentage points down to a measly 22%. Hopefulness? Down 9 points to 18%. Meanwhile, anger spiked 9 points to 31%. They're not just bored—they're getting annoyed.
These aren't tiny wobbles. Even the daily degens are turning sour. Among Gen Zers who main AI every single day, excitement nosedived 18 points year-over-year. The tool they probably use to write their essays and dodge their homework is losing its sparkle faster than a rug-pull in a bear market.
"In most of these cases, Gen Zers have become increasingly skeptical, increasingly negative—from a place where even last year, they weren't particularly positive about it," said Zach Hrynowski, a senior education researcher at Gallup. Translation: they were lukewarm before, now they're actively cold.
Eight in 10 Gen Zers think leaning on AI to speedrun their work will make them dumber in the long run. They're scared of getting dependent on a tool that solves today's problems while quietly eroding tomorrow's skills. Very "eating the seed corn" energy.
This anxiety isn't just vibes—scientists have been ringing alarms about AI dulling your thinking since 2024. The research verdict was uncomfortable: overreliance on ChatGPT and friends has been linked to epic procrastination and memory loss in students. Your brain on AI is basically just a router now.
Beyond the cognitive decline fears, users are also spiraling about creativity. Only 31% of Gen Z respondents think AI helps them brainstorm—down from 42% last year. Only 37% trust it for accurate info, down from 43%. The tool is becoming less helpful in their eyes by the day.
This lines up with other research showing generative AI is a creativity killer in disguise—output goes up, originality goes down. It's basically the pump-and-dump of creative work.
Workplace skepticism hits even harder. Nearly half of employed Gen Zers—48%—now say AI's risks outweigh its benefits at work, an 11-point jump from last year. Only 15% see it as a net positive for their careers. Fewer than 20% would pick AI over a human for tutoring, financial advice, or customer support. Trust in AI-assisted work sits at 28%, compared to 69% for exclusively human output. They don't want the machine helping with their job—they want the machine to stay in its lane.
Some of this is rational fear. AI is already eating white-collar jobs faster than predicted, and Gen Z is watching the slaughter unfold as they enter the workforce. Not exactly reassuring onboarding.
Sydney Gill, a 19-year-old freshman at Rice University, told the New York Times: "I feel like anything that I'm interested in has the potential of maybe getting replaced, even in the next few years." That's the existential dread talking, and she's not wrong to feel it.
A separate Gallup study found 42% of bachelor's degree students have reconsidered their college major because of AI. Nearly three-quarters of K-12 schools now have AI policies—up 23 points in a single year—but more rules haven't produced more trust. If anything, they've just made cheating feel more normalized: 41% of students think most of their classmates are using AI for schoolwork when they're not supposed to. The honor system is dead, long live the honor system.
"What we're seeing in the data is a generation that recognizes AI's utility but is increasingly concerned about its long-term impact on learning, trust and career readiness," said Stephanie Marken, senior partner at Gallup. "Their growing skepticism signals a need for more thoughtful integration of these tools in both school settings and the workplace." Basically: slow down, everyone.
Gen Z was supposed to be AI's proof-of-concept—the generation so native to digital tools that adoption would be frictionless and enthusiasm would be self-sustaining. Instead, the data shows a cohort that uses AI largely out of necessity, increasingly distrusts what it produces, and worries that the shortcut is making them worse at the long game. They're in too deep to quit, but getting more paranoid by the day.
Even elite scientists have started admitting AI does most of their thinking now—which might explain why Gen Z, watching this unfold, isn't particularly reassured. When the people who invented the stuff are already outsourced, what hope does a college freshman have?
The CIA recently used AI to generate an intelligence report without a human analyst driving it. Deputy Director Michael Ellis confirmed the milestone Thursday at a Special Competitive Studies Project event, marking a shift from quiet experimentation to a public declaration of ambition. Ellis said the agency ran more than 300 AI projects last year, Politico reports. Somewhere in that stack, a machine produced an intelligence product entirely on its own—a first in the agency's history. The spooks are now just as dependent as everyone else.
Elon Musk's artificial intelligence company, xAI, has filed a federal lawsuit seeking to block Colorado from enforcing a new law regulating high-risk AI systems. In court documents filed on Thursday, Musk's lawsuit targets Colorado Senate Bill 24-205, scheduled to take effect on June 30, which requires developers of AI systems to disclose risks and take steps to prevent algorithmic discrimination in areas such as employment, housing, healthcare, education, and financial services. The thesis, in short: regulation bad, actually.
U.S. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell reportedly convened a meeting with Wall Street bank CEOs earlier this week to warn about cybersecurity risks tied to a new artificial intelligence model from Anthropic. According to a report by Bloomberg, the meeting included executives from Citigroup, Bank of America, Wells Fargo, Morgan Stanley, and Goldman Sachs. Officials discussed Anthropic's new AI model Mythos, which has recently drawn broad concern over its security implications. The suits are scared of a model they don't understand—and they understand it even less than Gen Z does.