Last week, Matt Shumer posted a 5,000-word essay on X called "Something Big Is Happening." It got over 70 million views. He compared the current AI moment to February 2020, right before COVID shut everything down. His argument: AI is about to eliminate most knowledge work, and the people who aren't paying attention are going to get blindsided the way we all got blindsided by a pandemic.
Within 48 hours, Fortune published a rebuttal from their AI editor, Jeremy Kahn. His counter: Shumer is extrapolating from coding, which is a special case. Software has compilers and unit tests. There are objective ways to measure whether code works. Most knowledge work doesn't have that. Law, medicine, finance, consulting? There's no compiler for a legal brief. No unit test for an investment memo. Kahn argues that full automation of these fields is much further out than Shumer implies.
I think they're both right. And I think they're both missing the actual story.
Shumer is right that something massive is happening, and most people are underestimating it. Kahn is right that the jump from "AI can write code autonomously" to "AI can do your job" is not as clean as it sounds. But the real risk is not the one either of them is focused on. The risk is not that a machine takes your job. The risk is that someone in your industry figures out how to manage AI within its actual capabilities, and suddenly they're producing three times your output at half the cost. You don't get replaced by software. You get replaced by a competitor who learned the new system before you did.
I've spent my career spotting these structural shifts early. Two companies built, now investing from the other side of the table. Every major disruption follows the same arc: the technology gets blamed, but it's the behavior change that reshapes the market. AI is no different. And the early data is starting to confirm what the pattern would predict.
The Evidence Is Already In
In an eight-month field study at a roughly 200-person US tech company, UC Berkeley Haas researchers tracked what actually happens when a workforce adopts AI tools. The result was not what the productivity optimists promised. AI did not lighten anyone's workload. It intensified it. People moved faster, took on broader scope, and worked longer hours, often without anyone asking them to. AI lowered the friction to start things, which made harder tasks feel achievable, which raised the ceiling on what people attempted. Expectations followed immediately.
Separately, a Harvard and BCG study put 758 management consultants through a controlled experiment with GPT-4. Within AI's reliable range, consultants completed about 12% more tasks, finished them 25% faster, and produced work rated over 40% higher in quality. But when they pushed the tool past what it could reliably handle, performance dropped. In some cases, it fell below the group that wasn't using AI at all. The researchers called this the "jagged technological frontier": a shifting, unpredictable boundary between what AI can do well and where it falls apart.
This is exactly the nuance Shumer's post is missing and the Fortune rebuttal is pointing at. AI is not uniformly capable. It has zones where it is extraordinary and zones where it confidently produces garbage. The person who can tell the difference is the one who wins. The person who can't is the one creating problems downstream.
AI does not reward usage. It rewards fluency. And fluency is not evenly distributed.
The Shoe Cobbler Problem
This pattern is not new. It plays out every time a productivity shock hits an industry.
When the Industrial Revolution brought factory manufacturing to shoes, most people looked at the machines and saw job loss. What actually happened was the opposite. Costs dropped. Quality became more consistent. And massive optionality showed up. People didn't stop buying shoes. They bought more shoes, and more types of shoes, because they could suddenly afford to. Demand exploded. The total number of jobs in the shoe industry grew.
But here is the part that gets left out of the optimistic version: the original cobbler, the one who did everything front to back, largely disappeared. The person who sourced the leather, cut it, stitched it, shaped it, and sold it out of their own shop? That job went away. It was replaced by a system of specialized roles inside a factory. Net jobs increased. But the old job, and the old way of working, did not survive the transition.
The winners were not the people who tried to be slightly better cobblers. The winners were the people who learned to move shoe-making to the factory.
AI is setting up the same pattern for knowledge work. Output is about to explode. Costs are about to collapse. And the people who refuse to learn the new system will get left behind, not by the technology itself, but by the people who figured it out first.
Most People Are Still Sampling
Here is where I think the current conversation goes wrong. Most people equate "using AI" with adoption. It is not. Asking AI to draft an email, write a business plan, or generate a job description is sampling. Real adoption is when AI plugs into your actual workflow with standards, review steps, and feedback loops that keep the output reliable.
That distinction matters because the consequences of getting it wrong are already showing up in the data. Researchers at BetterUp Labs and Stanford recently coined a term for the problem: "workslop," AI-generated work content that looks polished on the surface but lacks substance. Their survey of over 1,100 US workers found that 40% had received workslop from a colleague in the past month. Each instance costs nearly two hours in rework, confusion, and follow-up. For a 10,000-person company, that adds up to roughly $9 million in lost productivity each year.
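The $9 million figure is easy to sanity-check with back-of-envelope math. The sketch below reproduces it from the survey numbers, assuming one workslop instance per affected worker per month and a fully loaded labor cost of about $94/hour; both assumptions are mine, not the study's.

```python
# Back-of-envelope reconstruction of the workslop cost estimate.
# Assumptions (mine, not the study's): one workslop instance per
# affected worker per month, and a ~$94/hr fully loaded labor cost.
employees = 10_000
affected_share = 0.40        # 40% received workslop in the past month
hours_per_instance = 2       # "nearly two hours" of rework per instance
hourly_cost = 94             # assumed fully loaded $/hour

instances_per_month = employees * affected_share               # 4,000
hours_per_year = int(instances_per_month * hours_per_instance * 12)
annual_cost = hours_per_year * hourly_cost

print(f"{hours_per_year:,} hours/year ≈ ${annual_cost / 1e6:.1f}M")
# → 96,000 hours/year ≈ $9.0M
```

Under those assumptions, the survey's per-instance cost compounds to roughly the reported figure; the point is less the exact dollar amount than that small per-instance rework scales linearly with headcount.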
Meanwhile, an MIT Media Lab report found that despite $30 to $40 billion in enterprise AI investment, 95% of organizations have seen no measurable return. The 5% that do see results share a common trait: they focused AI on specific, well-defined workflows rather than broad, unfocused experimentation.
This is the part Kahn's rebuttal gets right. Enterprises need reliability, governance, and auditability. The technology is genuinely capable, but capability without management just creates expensive noise. And it's the part Shumer skips past entirely. He describes a world where he tells AI what to build, and it just appears. That works when you're a technical founder building software you understand deeply. It does not work when you're a mid-level employee pushing AI-generated output into workflows you don't fully own.
The problem is not the technology. The problem is how people are managing it.
AI Is Not a Tool. It's an Employee.
Here is the mental model that changed how I think about all of this.
AI is not a tool you pick up when you need it. It is an employee. A very smart, PhD-level, ambitious, eager-to-please employee who never gets tired and almost never tells you "I don't know." That sounds like a superpower, and it is. But it is also the trap.
AI output is often wrong in quiet ways. It sounds right. It uses the right tone and structure. But it can be missing key constraints, built on invented facts, or structurally flawed in ways that only someone with real expertise would catch. When that slips through, it is not an AI failure. It is a management failure. This is exactly what Kahn means when he points out that as AI errors become less frequent, human reviewers become complacent. The output looks so good that people stop checking. And then the remaining errors, the subtle ones, cause the real damage.
I have experienced this firsthand. Like most people, I started using AI in basic ways. My thinking shifted when I stopped treating it like a search engine and started treating it like part of my org chart. I now use AI as an analyst, a model builder, and an editor. It builds my Excel models, drafts my memos, and pressure tests my thinking. But the 80% it gets you on the first pass is not where the value lives. The value is in the last 20%, which requires serious back and forth. Without that refinement, you are just shipping workslop with your name on it.
If you have never written a line of code, you will struggle to manage a team of engineers because you don't know what good looks like. AI works the same way. Without domain expertise, you cannot audit the output, and you end up creating problems for everyone around you. If you do have the expertise, AI becomes genuine leverage. You bring judgment and accountability. The model brings speed and throughput.
The Legal Industry: A Preview
The legal industry is a clear preview of where this is heading across services.
Most people have never hired a lawyer to read the terms and conditions they agree to every day. The cost-benefit has always been upside down. But if AI drives the cost of document review close to zero, behavior changes. More contracts get reviewed. More issues get flagged. More negotiation happens. The total volume of legal work does not shrink. It explodes.
Even if hourly rates hold steady for a while, the math is shifting fast. One hour of partner oversight sitting on top of a hundred hours of AI-enabled drafting, review, and issue spotting. The client gets dramatically more work product for roughly the old price.
That is deflation in practice. Cost per unit of output collapses, and demand grows into the space that opens up. This same dynamic is going to play out in finance, consulting, healthcare, and anywhere that high-cost expertise has historically limited access.
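To make that deflation concrete, here is a toy version of the partner-leverage math. Every rate in it is an illustrative assumption, not a sourced figure: a traditional drafting rate, a partner oversight rate, and a marginal AI cost per hour of output.

```python
# Toy illustration of cost-per-unit collapse in legal work.
# All rates below are assumed for illustration, not sourced.
associate_rate = 400     # assumed $/hour, traditional human drafting
partner_rate = 1000      # assumed $/hour, partner oversight
ai_cost_per_hour = 5     # assumed marginal AI cost per hour of output

# Before: 100 hours of work product, all human-drafted.
old_cost = 100 * associate_rate                       # $40,000

# After: the same 100 hours of output, AI-enabled,
# with one hour of partner oversight on top.
new_cost = 1 * partner_rate + 100 * ai_cost_per_hour  # $1,500

print(f"cost per unit of output drops ~{old_cost / new_cost:.0f}x")
# → cost per unit of output drops ~27x
```

Swap in different assumed rates and the multiple moves, but the shape of the result does not: cost per unit of output falls by an order of magnitude or more, which is exactly the opening that demand grows into.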
What Comes Next
Shumer is right about one thing above all else: the window to get fluent is closing. This is why the next few years will feel like a period of compression, not liberation. You are learning a new operating system while still doing your day job. The early gains will look incremental. Then the compounding kicks in, as models improve, as workflows stack, and as organizations figure out what one skilled operator can do with an army of AI assistants.
But the answer is not panic, and it is not dismissal. It is management. The people who win in this environment will not be the ones who simply "use AI." Everybody will use AI. The winners will be the people who can manage it. The ones who know what good looks like in their domain. Who build narrow workflows that compound into real output. Who keep quality high when the temptation is to ship fast and fix later.
The biggest career risk in the AI era is not the machine. It is the human who learned to run the machine before you did.