In 1440, a goldsmith in Mainz figured out how to make letters out of metal.
He was trying to print Bibles faster. He succeeded. Within a decade his press could produce a hundred times what a scribe could manage, and by 1500, twenty million books were circulating in a Europe where, fifty years earlier, books had been rare, costly, hand-copied objects.
What Johannes Gutenberg thought he was doing was making a better product. Bibles, in Latin, for monasteries. He was a businessman with a loan and absolutely no idea what he had actually built.
Less than eighty years later, in 1517, a German monk named Martin Luther used the press to publish ninety-five complaints about the Church's sale of indulgences. Half a million copies of his writings circulated within a decade. Luther became the first best-selling author in history, a job category that had not previously existed, by writing pamphlets attacking the Pope at a speed the Pope had no institutional machinery to counter. The Catholic Church, the single most powerful institution in Western civilization for a thousand years, lost its monopoly on religious truth in one human lifetime.
What followed was not a tidy reformation. What followed was a century and a half of religious warfare that killed, in parts of German-speaking Europe, as much as a third of the population. The same press produced the Malleus Maleficarum, the handbook that powered the witch hunts. It also produced Copernicus and Newton and, eventually, the pamphlets that fueled the American and French Revolutions.
Gutenberg wanted cheaper Bibles. He got the modern world.
This is the thing about transformative technology that almost nobody predicting its future internalizes: the first-order effects, the things the inventor meant to do, are the ones everyone talks about, and they are rarely the ones that matter most. What a technology is for is not the only thing it does.
I want to take four patterns from the history of transformative technology, four episodes in which smart people tried to predict where a technology would lead and got it wrong in specific, patterned ways, and ask what those patterns mean for AI.
I. The technology is not what you think it’s for
Gutenberg’s invention was a commercial venture, funded by loans he eventually defaulted on, designed to produce one product more cheaply than hand-copying. If you had asked him in 1455 what his invention would do to the world, he would have given you a sensible answer. Cheaper Bibles. Eventually cheaper other books. Literacy might spread. The Church would be pleased.
The Church was not pleased.
Within seventy years the press had published Luther, circulated vernacular scripture, and distributed pamphlets attacking the Papacy at speeds an institution optimized for hand-copied manuscripts could not counter. The Council of Trent tried to regulate printing. The Index Librorum Prohibitorum banned specific books starting in 1559. Secular rulers tried licensing, taxation, censorship. None of it worked. Print shops had sprung up in two hundred cities. Books could be reprinted in one jurisdiction and smuggled into another. The distribution network had escaped the control of any single authority before the authorities figured out they needed to control it.
The historian Elizabeth Eisenstein spent a career documenting this, and her finding is that almost every major transformation in early modern European history, the Reformation, the Scientific Revolution, the Enlightenment, the emergence of modern nation-states, is partly a printing press story.
The question dominating the AI conversation right now is: will AI take my job? It’s a real question. It will have real answers. Some jobs will go, new jobs will come, and the balance sheet will matter to the people on it. But historically, it is the wrong question. It is the Gutenberg-making-cheaper-Bibles question. It is the question the printer asked in 1450 and the factory owner asked in 1890 and the office manager asked in 1990, and in every case the answer arrived, and in every case the answer turned out not to be the thing that mattered.
So what is the transformation AI will bring that we cannot yet see? What is the thing we cannot currently imagine, the thing we will point at in fifty years and say: that was what AI did to us?
If I could tell you, it wouldn’t be an unintended consequence. But I can gesture at the shape. Previous communications technologies have always had their deepest effects in the same place. They change who gets to speak. They change what counts as authoritative knowledge. They change the relationship between the individual and the institution.
The printing press broke the Church’s monopoly on interpreting scripture. Radio broke the newspaper’s monopoly on the day’s events. The internet broke the broadcaster’s monopoly on who gets an audience.
And AI is a technology that changes the cost of producing plausible-sounding language to approximately zero.
Sit with how large a change that is. For the entirety of human history until about 2022, the production of text that sounded like it was written by a thoughtful person required a thoughtful person. That is no longer true. I don’t know what the consequences will be. Nobody does. But “some people will lose their copywriting jobs” is not going to be near the top of the list.
The top of the list is going to be something about trust. About how we know anything. About how we recognize other minds. About whether a letter from your doctor was written by your doctor. About whether the argument you’re reading right now was constructed by a human being at a desk or by a machine predicting the next token.
II. The curve bends where you can't see it
Here is a parlor trick that works on almost any intellectual. Find a smart person’s prediction about the future, from at least fifty years ago. Look at the curve they were extrapolating. Then ask: what did the curve do that they didn’t expect?
Almost always, the answer is: it bent. Because of a catalyst or a disruption they couldn’t have foreseen.
Much of what follows in this section is Thomas Piketty’s reading of economic history from Capital in the Twenty-First Century, applied to the present question. The pattern repeats so cleanly across cases that Piketty built his book around it.
Thomas Malthus, in 1798, argued that population growth would outstrip food production and mass starvation was structurally inevitable. He used the best data available. He just didn't know that within little more than a century, industrial nitrogen fixation, mechanized agriculture, selective breeding, and global trade would make food production grow faster than population for basically the first time in human history. Malthus didn't fail because he was stupid. He failed because he was extrapolating from a curve that was about to bend.
Karl Marx, seventy years later, made a structurally identical mistake in the other direction. Piketty credits his prediction of infinite capital accumulation, wages squeezed toward subsistence, and eventual proletarian revolution with containing a genuine insight about how wealth behaves. But the apocalyptic specifics didn't happen. Wages rose. Living standards rose. Because Marx didn't anticipate that sustained productivity growth from technology would let capitalists and workers get richer at the same time. He was extrapolating a zero-sum fight from conditions that were about to stop being zero-sum.
Each of them was the smartest person in the room. Each was working with the best data anyone had. Each was wrong because the room was about to change shape.
Now look at the current AI debate.
On one side, people who extrapolate capability curves of large language models and arrive at AGI in five to ten years. On the other, people who look at productivity statistics, notice AI hasn't produced measurable GDP gains, and arrive at another overhyped technology. Both groups are smart. Both are using real data. Both are making the Malthus mistake. They are extrapolating from curves that are going to bend, bent by the very technology whose effects they are trying to predict.
What if AI is different this time? Possible. But historical analogy is the best tool we have. Pure first-principles reasoning about an unprecedented thing has a worse track record than analogy, by a lot. Every generation has believed its transformative technology was the one that would break the historical mold. Every generation has been mostly wrong about which parts would break.
Some parts of the AI story will be genuinely new. Most, I’d bet, will rhyme with things we’ve already seen. The honest move is to stay humble about which is which, and to notice when someone’s confidence about AI’s trajectory exceeds what the reference class of past predictors has earned. When a prediction sounds like “this time is different because look at the curve,” it is worth remembering that every failed prediction of the last two hundred years sounded exactly like that.
III. The forty-year lag
In 1882, Thomas Edison opened the Pearl Street Station in lower Manhattan and started selling electricity to customers. Let me tell you what happened next in American manufacturing productivity.
Nothing.
For forty years, nothing. From 1882 to roughly 1920, the entire span in which electricity went from laboratory curiosity to something you could buy from a wire in the wall, American factories got faster at making things at roughly the same rate they’d been getting faster before electricity existed. The technology that would eventually produce more economic transformation than anything since agriculture sat there for two generations, commercially available, widely adopted, economically inert.
There is a paper about this, written in 1990 by the Stanford economist Paul David, called “The Dynamo and the Computer.” It’s one of those papers where once you’ve read it, you can’t stop seeing the pattern everywhere.
Early factories ran on group drive. One enormous steam engine, usually in the basement, turning a central shaft that ran the length of the building. Belts came off the shaft at intervals to individual machines. The position of every lathe, the width of every aisle, the shape of the building itself, all determined by the geometry of that central shaft. If you’ve ever been inside a restored nineteenth-century textile mill you’ve seen this. The buildings are narrow and very long. They have to be.
When electric motors become commercially viable in the 1880s, what do factory managers do? They rip out the steam engine and put an electric motor in its place. Everything else stays the same. The central shaft, the belts, the pulleys, the long narrow building, the position of every single machine. The motor drives the shaft just like the steam engine did. You save on fuel. You don’t shovel coal. Nothing else changes.
This is the single funniest fact in economic history and almost nobody knows it. Factory managers had in their hands the technology that was about to remake civilization, and they used it to not shovel coal. They did this for thirty years, because it never occurred to them to do anything else.
It took a new generation of managers, specifically people who hadn’t spent their careers staring at central shafts, to notice the obvious thing the first generation had missed. The real advantage of electricity wasn’t that it replaced steam. It was that it disaggregated. You could put a small motor on every machine. Which meant you didn’t need a central shaft. Which meant the factory didn’t have to be long and narrow. Which meant machines could be arranged by the logic of the work instead of the logic of power transmission. Which meant the assembly line. Which meant Ford. Which meant the 1920s.
Forty years between the technology and the transformation. Because the transformation required throwing out the factory and building a different one.
In 1987, the MIT economist Robert Solow wrote a book review in which he remarked that you can see the computer age everywhere but in the productivity statistics. Computers had been in widespread business use for twenty years. Productivity numbers were flat. Computers were sitting inside companies the way electric motors had sat inside factories. Powering the old shaft. Doing the thing that already existed, slightly faster.
The productivity gains didn’t show up until the late 1990s. Thirty years after commercial availability. And they showed up when companies stopped using computers to speed up existing processes and started doing something no business had ever been able to do before. Walmart did not get rich by using computers. Walmart got rich by using computers to run a continent-spanning logistics network in real time, which was a thing that had never existed, could not have existed, and was not even thinkable as a thing you might want, until the technology made it possible to want it.
Electricity: forty years. Computers: thirty years. AI: we are in year four.
Every company currently integrating AI into their existing workflow, every “we’re adding Copilot to our document editor,” every earnings call mentioning AI-powered productivity gains, is the 1895 factory manager. They will see modest gains. They will be disappointed relative to the hype. Some will go on podcasts in 2028 and confidently declare AI was overpromised. They will be wrong in exactly the way their ancestors were wrong.
The gains, if the pattern holds, will not come from companies currently racing to adopt AI. They will come from companies that don’t exist yet, run by people whose names you don’t know, doing things for which we do not currently have words. From the inside it will look like a disappointing decade followed by a sudden, inexplicable transformation. From the outside, a hundred years from now, it will look like the twentieth century looks to us, as if it could only have gone one way.
This is simultaneously the best and worst news in this essay. Best, because almost every alarmist five-year prediction about AI is probably making the first-generation factory manager’s mistake in reverse. Worst, because when the transformation comes it will come fast, and the winners will be the ones who figured out, years in advance, what the new organizational forms need to look like.
The economy of 2050 will not be the economy of 2025 with AI bolted on. It will be something else. The winners of the AI transition will not be the people who adopted AI first. They will be the people who asked, sooner than everyone else, the question that in 1895 nobody was asking: what does this technology actually let me do that I couldn’t do before? What needs to be thrown out? What needs to be built from scratch?
Most people didn’t ask it in 1895 either. The ones who did got Ford.
IV. The good parts were fought for
In 1954, Simon Kuznets gave a presidential address to the American Economic Association proposing what became the Kuznets Curve. The argument, in one sentence: as an economy industrializes, inequality first rises, then falls. You end up more equal than when you started.
The Kuznets Curve became one of the most influential ideas in twentieth-century economics. It underpinned the basic assumption of post-war American politics, that growth eventually lifts everyone. Kennedy’s “a rising tide lifts all boats” is Kuznets in political costume.
Piketty showed it was almost entirely wrong.
Kuznets looked at US inequality data between 1913 and 1948 and noticed it had fallen dramatically. He concluded this was the natural arc of mature capitalism. What he didn’t notice, or perhaps didn’t want to notice in the middle of the Cold War, was that the drop was not natural at all. It was caused by two world wars, the Great Depression, progressive taxation, the rise of labor unions. Political choices made specifically to reduce inequality.
It wasn’t capitalism self-correcting. It was democracies, under pressure, forcing capitalism to behave.
When those pressures faded, starting in the 1970s, inequality went back up. In terms of wealth concentration, we are now roughly back to where we were in the 1890s. The Kuznets Curve didn’t fail because Kuznets miscounted. It failed because he mistook a political outcome for a natural law.
This is the pattern that matters most in this essay.
Every time we look at a previous technological transition and tell the story as one of natural adjustment, we tell it wrong. The Industrial Revolution did not work out because there is some internal mechanism in capitalism that produces good outcomes. The Industrial Revolution almost destroyed the working class. For its first seventy-five years, wages stagnated, capital owners captured almost all the productivity gains, and life expectancy in industrial cities actually fell.
Life expectancy in industrial cities fell. During the Industrial Revolution.
The eventual outcomes, rising wages, broader education, a stable middle class, democratic politics, were not automatic. They were the product of specific political fights. Labor organizing, which was illegal in most industrializing countries. Universal public schooling. Progressive taxation. Antitrust law. Workplace safety regulation. The eight-hour day. The weekend. Child labor laws. Each was contested. Each was won.
The good outcomes of the last major technological transition were made, not inherited.
Which brings us to Keynes, because Keynes is the subtler version of this mistake.
In 1930, with the Great Depression setting in, Keynes wrote "Economic Possibilities for Our Grandchildren," predicting that by 2030, productivity gains from technology would be so enormous that standards of living would rise four to eightfold and people would work fifteen hours a week. The economic problem, he said, would essentially be solved.
On the economics, he nailed it. GDP per capita in the United States has risen roughly six to sevenfold since 1930, right in his predicted band. The productivity gains arrived. The technology delivered.
Nobody works fifteen hours a week. The average American workweek is still close to forty hours, only modestly shorter than it was in 1930 and nowhere near fifteen. Keynes got the numbers right and the humans wrong.
What he didn’t foresee was that as wealth rose, our definition of a minimum decent life rose alongside it. A house with air conditioning. A car. College for the kids. Internet. Smartphones. We did not take the productivity gains as leisure. We took them as stuff.
Even when the economic prediction is exactly right, the anthropological prediction can be wrong. Even when the curve of material progress bends up as expected, the curve of human wanting bends up right alongside it.
The optimistic story about AI, that it will make everyone more productive, that the gains will spread broadly, that we will all end up better off, is possibly true. But it is not true automatically. It will happen, if it happens, because we build institutions that distribute the gains. Because we fund education that lets people retrain. Because we regulate concentrations of power. Because we notice, in time, what the technology is doing to the labor market and respond with policy rather than shrugs.
And even if we do all of that correctly, expect the gains to be absorbed into a rising floor of what counts as a normal life, rather than into leisure or happiness. We will not get fifteen-hour workweeks. We will get more expensive normal.
The 2030 normal is measurably better than the 1930 normal on almost every dimension. But the same dynamic means the utopian visions of AI abundance are going to feel, from the inside, exactly like the present does. Full of striving. Full of wanting. Full of work.
This is where cautious optimism lives. The optimism is that we can, as we have before, engineer good outcomes from transformative technology. The caution is that engineering good outcomes is hard political work. It does not happen by itself.
The cautiously optimistic thesis
The case for pessimism is real. AI might concentrate power in a handful of companies in ways previous technologies didn't. It might collapse trust in shared information in ways we don't recover from. It might automate not just jobs but the activities we built our sense of human meaning around: thinking, deciding, creating.
The case for naive optimism is also real, and wrong. AI will not automatically make us richer. It will not automatically solve cancer. It will not automatically free us.
What’s defensible is this. The historical pattern, which has held across the printing press, the steam engine, electricity, the computer, and so far the early stages of AI, is that transformative technology produces deep, unpredictable, second-order changes on long time horizons. The eventual outcomes are shaped far more by the political and institutional choices made during the transition than by the properties of the technology itself.
Societies that made good choices during the Industrial Revolution got the middle class. Democracy. Public health. Modern education. Societies that made bad choices got authoritarianism, catastrophic inequality, in some cases world wars. The technology didn’t decide. The choices did.
We are at the beginning of the choices about AI. The choices will be made over the next twenty to forty years, not the next two. We have more time than the doomers think and less certainty than the utopians promise. The levers that matter are ones humans have pulled before, in transitions that looked similarly uncertain at the time: labor policy, antitrust, education, how we distribute productivity gains, how we regulate the information environment, how we maintain human agency over high-stakes decisions. In some cases we pulled them well. In others, badly.
The history is not reassuring. But it is legible.
We can read it. We can learn from it. That’s what cautious optimism looks like when it is earned rather than asserted. Not it’ll be fine. But it can be fine, and whether it is depends on us, and we have done this before, and we mostly figured it out, and we have no guarantee of figuring it out again except that we have the same tool our ancestors had, which is the ability to look at what previous generations got wrong and try to do something different.