Why AI Makes You Work More, Not Less — and What to Do About It

When AI tools first entered the mainstream workplace, the promise was clear: automation would handle the repetitive, time-consuming tasks, freeing humans to focus on what truly matters. Professionals imagined reclaiming hours each week, reducing stress, and finally achieving the elusive work-life balance. Reality, however, has turned out to be far more complicated — and for many workers, far more exhausting. Instead of lighter workloads, a growing body of research reveals that AI is intensifying work, not reducing it. Understanding why this happens, and what you can do about it, is one of the most important career skills of the decade.

THE PRODUCTIVITY PROMISE THAT DIDN'T DELIVER

The expectations surrounding AI productivity were enormous. Surveys conducted in 2024 and 2025 showed that 96% of C-suite executives anticipated AI tools would significantly enhance employee productivity. Technology companies marketed their AI solutions with bold claims: save hours every week, complete projects faster, do more with less. Workers, eager to stay competitive, adopted AI tools at an unprecedented pace.

But the reality employees experienced diverged sharply from the promise. According to a BCG study covering thousands of workers across industries, 77% of employees actively using AI reported a greater workload compared to before AI adoption. Nearly half said they were unsure how to translate AI's capabilities into the promised productivity gains. The technology was working — tasks were getting done faster — but the total volume of work had expanded to fill every hour saved, and then some.

The statistics are sobering when examined closely. A controlled study by METR involving 16 experienced software developers completing 246 real tasks in mature code repositories found that AI-assisted developers were actually 19% slower than those working without AI. The perception gap was staggering: developers expected AI to speed them up by 24% and, even after the study concluded, reported believing AI had helped them by roughly 20% — despite objective evidence to the contrary. This perception-reality gap is not unique to software development. Across knowledge work, workers consistently overestimate AI's contribution to their efficiency while underestimating the new cognitive demands the tools introduce.

This is not to say AI offers no productivity benefit. Research from the St. Louis Federal Reserve found workers saved an average of 5.4% of their work hours using AI tools. Anthropic's own research found that AI assistance can accelerate individual tasks by approximately 80%. The crucial problem is what happens to that saved time: rather than converting into genuine rest or creative thinking, it is immediately absorbed by an expanded scope of work.

WHY AI MAKES YOUR WORKLOAD GROW, NOT SHRINK

To understand why AI intensifies work rather than reducing it, we need to examine three distinct mechanisms that researchers at UC Berkeley Haas identified in their landmark study of AI adoption at a US technology company. Each mechanism operates differently, but together they create a powerful compounding effect on workload.

Mechanism One: Scope Creep

The first mechanism is scope creep — the gradual expansion of what counts as your job. Before AI, the boundaries of any role were partly defined by cognitive and time constraints. A marketing manager wrote the strategic briefs; the copywriting team handled the actual content. A product manager defined requirements; engineers handled the technical architecture. These divisions existed not only for specialization reasons but because there were only so many hours in the day.

AI changes this calculus fundamentally. When a marketing manager can generate a first draft of ten pieces of content in an hour, the question becomes: why not just do it yourself? When a product manager can use AI to sketch out a technical feasibility analysis, it feels natural to add that to the role. AI makes it cognitively and practically easy to absorb tasks that previously belonged to others or simply didn't get done at all. The result is a quietly expanding job description that no one officially approved but everyone implicitly accepted.

A project manager at a mid-size software company described the experience this way: "Six months into using AI tools, I realized I was doing the work of three people. Not faster — more. The AI made it feel effortless to take on one more thing, and then another." This pattern repeats across industries wherever AI tools lower the activation energy needed to start a task.

Mechanism Two: Boundary Dissolution

The second mechanism is boundary dissolution — the erasure of the natural stopping points that once structured the workday. Before AI, the rhythm of work contained built-in pauses. Waiting for a colleague to respond, thinking through a problem without a tool to help, the slight friction of starting a new task — these pauses, frustrating as they sometimes felt, served a regulatory function. They created cognitive recovery time.

AI eliminates most of this friction. Tasks that once required waiting — for information, for inspiration, for collaboration — can now be initiated instantly. A prompt to an AI assistant takes seconds. The result is that work bleeds into every available moment: lunch breaks, early mornings, the few minutes before a meeting begins, evenings when the mind would otherwise begin to wind down. Workers report sending prompts during dinner and drafting documents at midnight, because the AI makes the extra work feel productive rather than compulsive.

TechCrunch's analysis of early AI adopter burnout found that workers who embrace AI most enthusiastically are also the first to report blurred work-life boundaries. The very trait that makes someone an effective AI user — a tendency to see every moment as an opportunity to be productive — becomes a liability when the friction that once created natural stopping points disappears.

Mechanism Three: Parallel Task Overload

The third mechanism is parallel task overload — the simultaneous management of multiple AI-driven work streams. Before AI, working on one complex task typically meant focusing on that task. The cognitive cost of context-switching was high enough to discourage excessive multitasking.

AI changes this because many AI processes run autonomously in the background. While reviewing a report, a worker might have an AI drafting a proposal, another AI researching a competitor, and a third AI preparing data visualizations. Each stream produces outputs that require human review, feedback, and integration. The worker is no longer doing one complex thing — they are supervising multiple complex things simultaneously.

This supervisory role is cognitively demanding in ways that are not immediately obvious. Reviewing, evaluating, and integrating AI outputs requires sustained attention and critical judgment. The tasks feel lighter individually but add up to a significant cognitive load. Workers describe a sense of never being fully present in any one task while simultaneously feeling responsible for all of them.

THE COGNITIVE TOLL: AI BRAIN FRY AND MENTAL FATIGUE

Researchers have begun documenting a specific form of cognitive fatigue emerging from intensive AI use, a pattern now referred to as AI brain fry. This is not ordinary tiredness. AI brain fry is a state of mental exhaustion produced by the combination of scope creep, boundary dissolution, and parallel task overload — compounded by the constant need to evaluate and supervise AI outputs.

Workers experiencing AI brain fry report a recognizable cluster of symptoms. Decision-making becomes labored; simple choices require disproportionate effort. Errors increase in frequency, often in areas where the worker previously felt competent. Concentration narrows and becomes harder to sustain. The desire to step away from work entirely — to quit tasks, disengage from projects, or even leave the job — intensifies. According to BCG's March 2026 research, workers experiencing AI brain fry reported significantly higher intention to quit their positions compared to colleagues not experiencing this pattern.

Five warning signs that you may be experiencing AI brain fry:

  1. You feel mentally exhausted at the end of the day despite having completed tasks that seemed manageable.
  2. You make errors in work that should be routine, especially in reviewing or editing AI outputs.
  3. You find yourself struggling to make straightforward decisions that would once have felt automatic.
  4. You feel a compulsive need to keep working — checking AI outputs, starting new tasks — even when you intended to stop.
  5. You notice a growing sense of detachment from your work, as if you are processing outputs rather than doing meaningful work.

Who Is Most at Risk?

Research consistently identifies high performers and early AI adopters as the most vulnerable group. The people most likely to experience AI brain fry are those who threw themselves most enthusiastically into using AI tools, who saw AI as a competitive advantage, and who expanded their work scope most aggressively. Paradoxically, the workers who got the most out of AI in the short term are the ones paying the highest price in cognitive and emotional capital.

There is a deeper dimension to this problem beyond fatigue: what researchers call cognitive debt. Studies examining the effect of AI on learning and memory — including work by Kosmyna et al. in 2025 — found measurable reductions in brain engagement when participants used AI for writing and analysis tasks. When we offload cognitive work to AI consistently, we reduce the mental exercise that keeps those capacities sharp. Over time, the human capabilities that AI was supposed to augment can quietly atrophy from disuse.

THE AI WORKLOAD TRAP: HOW A VICIOUS CYCLE FORMS

The three mechanisms described above do not operate in isolation. They interact to create a self-reinforcing cycle that is difficult to escape once it takes hold. Understanding the structure of this cycle is essential for breaking it.

The cycle begins with genuine productivity gains. AI enables a worker to accomplish more in less time. Tasks complete faster, outputs improve in quality, and the worker experiences a period of expanded capability and elevated performance. This phase feels good — energizing, even.

The problem is what comes next. Expanded capability invites expanded expectations. Managers notice the increased output and calibrate their expectations upward. Clients and colleagues come to rely on the higher throughput. The worker themselves begins to measure their value by the expanded volume of work they can handle. The scope that was once impressive becomes the new baseline.

Organizations, under competitive pressure to maximize AI-driven productivity gains, often respond to this expanded capacity not by reducing workload but by filling it with more work. Why hire additional staff when existing employees can now produce at twice the volume? The result is that productivity gains captured by the individual worker are largely redirected into organizational throughput rather than personal time savings.

The cycle then enters its most damaging phase. The workload is now structurally larger than it was before AI adoption. The worker, committed to maintaining performance, intensifies their AI use — which lowers the friction of starting yet more tasks, which expands scope further. Boundaries continue to dissolve. Cognitive fatigue accumulates. The vicious cycle tightens.

Recognizing the shift from energizing intensification to unsustainable intensification is critical. The early phase, where expanded capability feels exciting and manageable, can mask the gradual accumulation of cognitive and emotional strain. By the time the damage becomes obvious — in the form of errors, burnout symptoms, or declining quality of judgment — the pattern has typically been running for months.

BREAKING THE CYCLE: HOW TO USE AI WITHOUT LETTING IT USE YOU

The good news is that AI is not inherently a workload trap. The research is equally clear that workers and organizations who use AI intentionally — with deliberate structures around when, how, and how much AI is used — can capture genuine productivity benefits without falling into intensification patterns. Here are five evidence-backed strategies.

Strategy 1: Batch Your AI Work and Protect Deep Focus Blocks

Rather than using AI reactively throughout the day, designate specific time blocks for AI-assisted work. Outside of these blocks, do your most cognitively demanding work without AI assistance. This serves two purposes: it prevents the constant context-switching that drives parallel task overload, and it preserves cognitive engagement with tasks that benefit from your full, unmediated attention.

A practical approach is the morning sprint structure: the first 90 minutes of the workday are reserved for your highest-priority deep work, done without AI. After this block, you batch your AI-assisted tasks — research, drafting, analysis — into a focused two-hour session. The remaining day is used for meetings, review, and communication. This structure prevents AI from colonizing the entire day.

Strategy 2: Define What AI Should Not Do For You

Scope creep happens by default when AI makes everything possible. Counter it by explicitly deciding which tasks should remain fully human. These are typically the tasks that involve your most distinctive judgment, your relationships with others, or your creative identity as a professional. Protecting these tasks from AI delegation is not inefficiency — it is the preservation of the cognitive and professional skills that make you valuable over the long term.

Write down a list of three to five tasks that you will always do without AI assistance. Review this list quarterly. When organizational or social pressure pushes you to hand these tasks over to AI, refer back to the list as a record of the intentional choice you made when you were thinking clearly.

Strategy 3: Constrain Your AI Tool Stack

Research consistently shows that more AI tools do not produce more productivity. In fact, workers using a larger number of AI tools report higher cognitive strain and lower productivity gains than those using a small, carefully selected set of tools. The cognitive overhead of learning, managing, and integrating multiple AI systems erodes the time savings they provide.

Choose two or three AI tools that address your most significant workflow bottlenecks. Use them deeply rather than broadly. When colleagues or managers suggest adding new tools, evaluate them against a strict criterion: does this tool eliminate a genuinely painful bottleneck, or is it adding capability without eliminating a constraint?

Strategy 4: Create Explicit Stopping Rules

One of the most underrated productivity strategies in an AI-enabled workplace is knowing when to stop. Establish a daily AI cutoff time — a point after which you do not initiate new AI-assisted work. This prevents the boundary dissolution that fuels late-night work sessions. It also trains the expectation, in yourself and others, that there is a definite end to the workday.

Additionally, create task-level stopping rules: each AI work session should begin with a defined output and a defined duration. When the output is produced or the time is up, the session ends — even if the AI has more to offer. This prevents the open-ended exploration that AI makes so seductive and so cognitively expensive.

Strategy 5: Build Active Recovery Into Your Schedule

Recovery is not a reward for completed work. It is a performance strategy. Schedule daily non-AI, offline time with the same commitment you give to work meetings. This means periods of genuine cognitive rest — walking, conversation, physical activity, reading that has nothing to do with your profession. Human social connection is particularly important; research shows that employees who maintain regular face-to-face interaction with colleagues report significantly lower rates of AI brain fry than those who work primarily through screens and AI interfaces.

Practical daily habits to protect your cognitive health in an AI-heavy workflow:

  • Start each morning with 30 minutes of offline time before touching any AI tool
  • Take a technology-free lunch break at least three days per week
  • Schedule one hour of unstructured thinking time each week — no agenda, no tools
  • End each workday with a two-minute review: what did I actually think through today, versus what did I delegate to AI?
  • Set a weekend AI usage limit — at minimum, designate one full day without professional AI use

THE ORGANIZATIONAL RESPONSIBILITY: BUILDING SUSTAINABLE AI CULTURE

Individual strategies are necessary but not sufficient. Many workers who attempt to set boundaries with AI use find themselves swimming against an organizational current that rewards maximum AI-driven output and implicitly penalizes restraint. Sustainable AI use requires organizational change alongside individual behavior change.

The most important thing managers can do is model intentional AI use. When leaders openly discuss the boundaries they set around AI — the tasks they protect from automation, the hours they disconnect, the norms they follow — it signals that sustainable use is valued, not just maximum throughput. Organizations where leaders had clearly articulated AI practices showed measurably lower rates of AI brain fry among their teams, according to the BCG research.

Structural changes that organizations can implement include: building intentional pauses into project timelines rather than using AI-expanded capacity to accelerate every deadline; coordinating AI tool adoption at the team level rather than leaving individual workers to manage an ever-expanding personal tool stack; creating explicit policies around after-hours AI use and response expectations; and investing in regular check-ins focused on workload quality, not just output quantity.

Companies that have gotten this right share a few culture markers: they treat cognitive sustainability as a genuine performance factor, not just a wellness nicety. They resist the temptation to redirect every AI efficiency gain into more work. They build human connection and collaboration time into the structure of work rather than treating it as an optional social extra. And they measure success in part by whether their people are growing in capability and judgment over time — not just whether output volume is increasing.

AI IS A LEVER, NOT A TREADMILL — THE CHOICE IS YOURS

AI is not the enemy of sustainable work. Used with intention, it is genuinely transformative — capable of amplifying human capability in ways that create more time, better outputs, and higher-quality thinking. The problem is not AI itself but the default relationship most people and organizations have developed with it: one of unlimited availability, endless expansion, and frictionless escalation.

The research is consistent: AI work intensification is real — AI makes you work more unless you actively, deliberately design it not to. That design work — choosing what to automate and what to protect, building stopping rules and recovery practices, resisting the expansion of scope that AI makes so easy — is not a minor operational detail. It is the central challenge of working well in an AI-saturated world.

The mindset shift required is from passive adoption to active governance. Not: how much can I do with AI? But: what should I actually do with AI, and what should remain mine?

Here is a concrete starting point: this week, conduct a brief audit of your AI use. Identify one task where AI is doing work you should reclaim — because it is cognitively important to you, because it is a relationship-critical task, or because you have noticed your own judgment in this area becoming less reliable. Then identify one task where you should hand over more completely to AI — where you are still doing things manually out of habit that AI could genuinely handle. Making both adjustments simultaneously keeps your workload stable while improving the quality of your cognitive engagement. That is what genuine AI leverage looks like.

