Towards a Human-Powered AI
Part 2. AI, Take the Wheel
This is part of a series of meditations on the crisis and opportunities presented by AI in higher education and beyond. I use AI. I am fascinated by it. I also believe that, allowed to work its way deeper into our lives along the lines advocated by the likes of Marc Andreessen, it represents an existential threat. Not the sci-fi threats posed by AI doomers, but the very real threat that this will constitute a final stage in our subservience to the machine, whose only real purpose for Silicon Valley is to do to the minds of 21st-century workers what two centuries of technological progress did to the skills and physical labor of their 19th-century counterparts. I will write more about Andreessen and his ilk later. For now, I let his words represent the character of the men—and they are overwhelmingly men—asking us to let AI take the wheel.
We believe we are poised for an intelligence takeoff that will expand our capabilities to unimagined heights.
We believe Artificial Intelligence is our alchemy, our Philosopher’s Stone...
--Marc Andreessen, "The Techno-Optimist Manifesto" (October 16, 2023)
It is October 21, 2025, and "AI" as we know it is coming up on its third birthday. It is far too early to predict with any confidence the impact LLMs will have on society, but that has not stopped people from prognosticating.
In his May 2023 testimony before Congress, OpenAI's Sam Altman described GPT-3.5's release as "a printing press moment." That same year, Tyler Cowen wrote of being "reminded of the advent of the printing press, after Gutenberg." In his April 2024 letter to shareholders, JP Morgan's Jamie Dimon went furthest of all, comparing LLMs to pretty much every transformative technology he could think of:
We are completely convinced the consequences will be extraordinary and possibly as transformational as some of the major technological inventions of the past several hundred years. Think the printing press, the steam engine, electricity, computing and the Internet, among others.
It is interesting to contrast these with another historical analogy frequently invoked in response to concerns expressed by educators and parents. Randi Weingarten, President of the American Federation of Teachers, suggested that the panic about AI is similar to earlier concerns about graphing calculators, stating that ChatGPT "is to English and to writing like the calculator is to math." A professor of education at Calgary compared the "moral panic and technological panic" among educators "to arguments we heard about the introduction of calculators back when I was a kid.” An editorial in Scientific American in April 2024 promised that "AI can better education, not threaten it, if we learn some lessons from the adoption of the calculator into the classroom."
Certainly no one could reasonably liken the impact of the Gutenberg press—which led to, among other historical hits, the Reformation, the Enlightenment, and liberal democracy—to that of the pocket calculator more than 500 years later. Nor does anyone try, even as the same folks deploy the same analogies in different contexts and to different audiences—to hype or assuage, as the case requires.
It should go without saying that both analogies are designed to manipulate emotions and that neither contributes to meaningful engagement with the history or present of technological change. Although it appeals to the fantasies of would-be investors or politicians, AI is not the printing press. Or electricity. Or the Internet. This is not to dismiss its potential impact, which will be considerable—potentially seismic. But one cannot analogize AI's impact to the very transformative technologies on which AI fundamentally depends for its existence. Nor can one compare a technology that has upended generations of pedagogy in less than three years with the carefully managed incorporation of the hand calculator into math and science classes over the course of a generation.
In the end, I believe the closest historical analogy for thinking through the introduction of LLM-based AI lies far from the realm of education and communication, with the invention of the automobile—or, more precisely, with the introduction in 1908 of the Model T Ford, the first automobile that ordinary people could afford.
ChatGPT did not invent the Internet or the printing press, of course. Indeed, its existence is entirely predicated on, among other inventions not its own, the Internet as a repository for the wholesale scraping of the printing press's vast historical output. Similarly, the automobile did not invent roads, or driving. What the car did do, however, was build on a range of earlier transformative technologies—especially Watt's steam engine and the fantasy that we could free ourselves from all limits imposed by nature.
Building a Better Malthusian Trap

We believe that since human wants and needs are infinite, economic demand is infinite, and job growth can continue forever.
We believe markets are generative, not exploitative; positive sum, not zero sum. Participants in markets build on one another’s work and output. ... Markets are the ultimate infinite game.
--Marc Andreessen, "The Techno-Optimist Manifesto" (October 16, 2023)
In 1798, in his Essay on the Principle of Population, Thomas Malthus theorized what came to be known as the "Malthusian trap": the limit beyond which population could no longer grow without outstripping the capacity of the land to sustain it. As Malthus argued, population grows exponentially, while calorie production proceeds arithmetically, bounded by climate, acreage of arable land, and the labor and technology available to make the land "productive." Yes, advances in farming techniques and technology could temporarily allow food production to catch up with population growth. But the effect would be another burst of population growth that would soon outstrip the bonus calories realized through those advances. Writing twenty years after Watt's steam engine had first been introduced commercially, Malthus cautioned that there would be no permanent technological escape from these limits.
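For readers who want the arithmetic of the trap made explicit, here is a minimal sketch using the illustrative geometric and arithmetic ratios Malthus himself offered in the Essay; the 25-year doubling period and the notation below are his illustration and my shorthand, respectively, not empirical claims:

```latex
% A sketch of the ratios Malthus used illustratively in the Essay (1798):
% population doubles every period (he used roughly 25 years), while food
% grows by a fixed increment. After n periods:
\[
  P_n = P_0 \cdot 2^{n}, \qquad F_n = F_0 \cdot (1 + n)
\]
% Food available per person therefore shrinks toward zero:
\[
  \frac{F_n}{P_n} = \frac{F_0 \,(1 + n)}{P_0 \cdot 2^{n}} \longrightarrow 0
  \qquad (n \to \infty)
\]
% Example with P_0 = F_0 = 1: after four periods (about a century),
% population has grown 16-fold while food has grown only 5-fold.
```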
When I asked Claude's most recent model, Sonnet 4.5, for a brief on the Malthusian trap, it produced a useful enough distillation, winding up as follows:
The Grim Implication
Humanity was condemned to hover near subsistence level. Any prosperity would be temporary—more food → more people → less food per person → back to poverty. The working classes could never permanently improve their lot.
19th Century Influence
This theory was enormously influential in Victorian Britain:
- Used to argue against poor relief (would just encourage more population growth)
- Influenced Darwin's theory of natural selection
- Shaped economic policy and debates about wages
- Created a pessimistic view of human progress
But then it offered a rather breathtakingly wrong conclusion:
The Irony
Even as Malthus's ideas dominated 19th-century thinking, the Industrial Revolution was proving him spectacularly wrong—productivity gains were outpacing population growth for the first time in human history, though this wasn't fully apparent until later in the century.
I was not surprised that an LLM does not understand irony. I was, however, surprised that it went out of its way to offer this commentary that made it sound like... well, like a Silicon Valley tech bro. Malthus's discussion of the limit had nothing to say about "productivity gains" in English industry. The relationship between population and yards of textile produced was never the crux of the problem he was describing. The crux was the food required to keep alive both the laborers needed to produce the textiles and the consumers needed to buy them.
When I pressed Claude on this, it regrouped, with its usual apologies. It turned out, after all, that "Malthus was actually not wrong about agriculture." There was, for example, that detail of the Irish Potato Famine (1845-52), in which over one million people died and an equal number were forced to leave their homeland.
Here is Claude's revised answer:
Food Production in 19th Century Britain:
- Domestic agricultural productivity did improve (crop rotation, enclosure, some mechanization)
- But it did not keep pace with population growth through agricultural innovation alone
- Britain's population roughly quadrupled (9 million → 36+ million, 1801-1901)
- Domestic food production certainly did not quadruple
Britain's Actual Solution: Import Food
- The repeal of the Corn Laws (1846) marked Britain's acceptance that it would import grain
- By mid-century, Britain was massively dependent on imported food: American wheat, colonial products, etc.
- They were "exporting" the Malthusian problem—relying on less densely populated regions' agricultural surplus
As the new answer acknowledges, there was no commensurate leap in agricultural productivity during the period in which the steam engine transformed industry in Britain. And of course, industrial production did nothing to increase the food supply needed to feed a rapidly expanding population during the 19th century. What fed Britain instead was the trade of industrial goods for imported food in an emerging global marketplace anchored by British imperial rule. To paraphrase Claude's second, decidedly unironic conclusion: Britain exported the consequences of the Malthusian trap to its colonies, memorialized in the devastating famines that punctuated two centuries of British rule in India. Tens of millions would die over the course of a dozen famines, with the last one—costing the lives of up to three million—occurring just four years before India achieved independence in 1947.[^1]
I don't want to give Claude too much of a hard time for its first response. After all, just as LLMs are only as good as their training data, so too are the "insights" they arrive at only as good as the causal connections they have been "fine-tuned" to privilege. And deep in the genome of AI and the numerous technological innovations upon which it is built lies the fantasy of perpetual growth through technological progress. The trap can be kicked forever down the road, so long as technology is not constrained by regulation, ethics, or pesky concerns about what happens when we run, inevitably, out of road.
Like all the technological innovations that drove the industrial revolution, the invention of the car at the end of the 19th century was motivated by what came to be newly perceived as a bottleneck to progress—in this case, rapidly densifying modern cities plagued by horse carts, street vendors, and hordes of human beings clogging up the works. The human labor and fossil fuels that had been concentrated into urban centers over the course of the 19th century had become—as pesky humans would continue to do—at once a problem to be solved and a new market to be exploited.
In the 21st century, AI is similarly presented to us as a solution to a bottleneck (spoiler alert: it is us, once again) and a market to be exploited (spoiler alert: ditto). Before tinkering with the historical analogy between the introduction of the car and the introduction of AI, however, it is worth a look back at the story of how we got started down this road in the first place.
Hand, Horse, Water, Coal

We believe a market sets wages as a function of the marginal productivity of the worker. Therefore technology – which raises productivity – drives wages up, not down. This is perhaps the most counterintuitive idea in all of economics, but it’s true, and we have 300 years of history that prove it.
--Marc Andreessen, "The Techno-Optimist Manifesto" (October 16, 2023)
Once upon a time, most products in England were manufactured in the home or in small workshops. And the primary source of power driving this manufacture was human. Labor was specialized and often highly skilled, requiring in most cases an extended apprenticeship that was effectively a form of indenture. The 16th-century Statute of Artificers mandated that no one could practice a trade without a period of seven years' apprenticeship. The apprentice did not get paid for their labor, as the training they received served as their compensation. Upon completing their apprenticeship, most then undertook a period as a journeyman, working at various workshops to develop specialized skills while looking for a place to hang up their own shingle.
This was a slow process, one perhaps familiar to graduate students seeking to enter one of the last industries where training still moves at a preindustrial pace. But until the second half of the 18th century, this slowness was not perceived as a problem. Manufacture itself was slow, trade was almost exclusively domestic, and national life remained predominantly rooted in rural communities, with only around 20% of the population of England living in cities. Many of the trades were seasonal, operating in rhythm with the farming that occurred in the same communities. In this way, the slowness of the apprenticeship system was a feature, not a bug, designed to control the pace at which new tradesmen entered the marketplace, thus maintaining an equilibrium of wages and prices, supply and demand.
It worked, until it didn't. Pressures started to pile up over the course of the 18th century. England was undergoing a consumer revolution that was increasing the demand for products such as ceramics, metalware, and, especially, fabrics such as cottons and printed calicoes. A new fashion industry began to accelerate the demand for new styles, patterns, and materials to the point where small workshops and the putting-out system could not keep pace with this engineered demand.
As would be repeated over the next three hundred years, a bottleneck was identified in the limits nature placed on manufacture—in this case, the human limits of speed and endurance. However many years an individual had spent honing their craft, there was only so fast they could work, only so much they could produce, even in 60-hour weeks.
As the story is told, the industrial revolution was born of the "need" to break such bottlenecks in order to meet the "demands" of the market (it is worth noting how quickly we came to anthropomorphize the Market, much as we today anthropomorphize AI). In the textile industry the first major breakthrough was Kay’s flying shuttle (1733), which doubled the output of a skilled weaver. But a yarn shortage inevitably materialized, as spinning continued to be done in the home by hand under the putting-out system that had been practiced for generations. And once identified as an obstacle, an inefficiency, the women working in their cottages were no longer invaluable components of England's most important industry, but anachronistic parts that needed replacing. Fast. By the end of the 18th century, the historic cottage industry of spinning—which gives us a word that would later become a pejorative, "spinsters"—was all but extinct.
As water power began to replace human power as the primary energy source for manufacture in the second half of the 18th century, spinning moved to new factories spread out along the rivers and streams that dotted the English countryside. Workers were forced to leave their homes or workshops for new factory settings, where new water-powered spinning machinery rendered hundreds of thousands of women jobless. When similar mechanization soon followed in the companion trade of weaving—often performed by the men of the household while the women spun—the writing was on the wall. The Luddite movement of the 1810s targeted machinery that could have served to augment workers' skills and craft but that instead rendered those skills obsolete and their jobs deskilled and devalued.
All of this was before the steam engine and its fuel—coal—came to dominate industry. By the 1820s and 30s, two new obstacles to industrial growth had emerged that needed to be cleared away. First, of course, was a nascent labor movement, which emerged from the Luddites' relatively brief but powerful demonstration of the potential of organizing. Moving labor out of the cottage and workshop and into the factory increased the ability of owners to manage and discipline the bodies they employed. But it also provided new opportunities for previously isolated workers to compare grievances and organize to demand their redress.
And then there was the water itself, which quickly went from solution to bottleneck. A waterway was a natural entity, its strength variable and beyond human control. Waterways belonged to the commons and could not be owned. And there was a limit to the number of waterways capable of powering industry at all: mills required fast-flowing streams with sufficient vertical drop to generate power, concentrating the best sites in a handful of regions. Even these sites had finite capacity, topping out at perhaps 40-60 horsepower, a ceiling further constrained as new mills were built upstream.
In Fossil Capital (2016), economic and environmental historian Andreas Malm tells the story of the dramatic transition to coal power that took place in England across the 19th century. As Malm convincingly demonstrates, this transition was not initially motivated by cost savings or even increased productivity. After all, much like the bottomless money pit that is AI in the 2020s, the upfront investment in the steam engine two centuries earlier was massive. Machines had to be purchased and the water-powered mills converted—including building an engine house with reinforced foundations, a tall chimney for smoke dispersal, and new drive shafts throughout the factory. Because water was now being displaced by coal as the primary source of power, a whole new coal infrastructure had to be established, including storage facilities and access roads, canal connections, and (later) rail connections for coal delivery. This was especially costly since these mills were predominantly located in rural valleys near waterways.
Malm argues that it was the fantasies of being liberated from the demands of an increasingly organized labor force and from a limited and communal water source that made these investments attractive. The early steam engines in fact provided efficiency gains that at best barely compensated for the cost of transforming and, soon, moving the factories. But the ability to drive down labor costs, by moving out of sparsely populated rural communities to cities with large pools of competing workers and by deskilling the labor required, would ultimately make the move profitable even before improvements in the steam engine increased productivity.
By the late 19th century, factories had become concentrated in urban centers where unskilled labor was cheap and plentiful and coal was close at hand. The major cities in the countries where automobiles would first come to life at the end of the century had grown exponentially as a result of the fossil-fueled automation of human labor. New York grew from 33,000 in 1790 to 3.4 million in 1900; Berlin from 140,000 in 1780 to 1.8 million in 1900; and London, already the largest city in Europe before the 19th century with around 750,000 residents, would house over 6 million by 1900.
Horse-drawn carts and carriages had been the primary mode of transportation for millennia, but in the modern city, as one New York paper reported in 1908, horses had become “an economic burden, an affront to cleanliness, and a terrible tax upon human life." The city was also becoming a dangerous burden on the horses themselves, who were increasingly without access to the shelter, water, and food needed to survive. The "horseless carriage" emerged in these cities first as electrified vehicles, capable of moving people and goods around the short distances urban density allowed.
What we think of today as cars remained largely the province of the exceedingly rich for the first decade or so of the "horseless carriage," until the Model T—building upon the factory system of a century earlier that had fueled the modern city itself—brought car ownership within reach of middle-class consumers. In 1910 there were roughly 140,000 cars in the United States, about 1 car for every 660 people. Just 15 years later there were well over 17 million cars on the road, roughly 1 for every 6 people in the U.S.
The first decade of marketing sold the car as a replacement for the horse. "Dispense with the horse," an 1898 ad for the Winton Motor Carriage suggested, "and save the expense, care and anxiety of keeping it." A 1905 Oldsmobile ad described the car as a kind of Swiss Army knife, capable of serving as a streetcar, a race horse, or a workhorse as the situation required. And invariably the horsepower was advertised prominently.
Within a decade, however, the horse was long forgotten, as, apparently, were limits. In 1911, the editor of the Manufacturers' Record enthused, "No man can study the limitless possibilities connected with the motor car without being somewhat staggered at the future of this industry and its influence upon civilization." Marketing shifted away from measuring a car's power in relation to the horse it had replaced. "Think of the unlimited and reserve power and flexibility of eight silent sliding sleeve-valve cylinders," a 1917 ad opined. "It seems to be propelled by air ... In it you lose all sense of being driven."
Leaving behind the horse, the earth, and all limits, the consumer could now buy into the fantasy that had previously been the privilege of industrial elites: progress without limit.
Driving without a License

Intelligent machines augment intelligent humans, driving a geometric expansion of what humans can do.
We believe Augmented Intelligence drives marginal productivity which drives wage growth which drives demand which drives the creation of new supply… with no upper bound.
—Marc Andreessen, "The Techno-Optimist Manifesto" (October 16, 2023)
I suggested up front that the introduction of the consumer car might be a better historical analogy than the printing press or calculator when considering the introduction of consumer AI, but it fails in one crucial respect. The automobile developed slowly, with cars out of reach for most for well over a decade, and running at limited speeds and on limited roads for another decade after that. AI, meanwhile, is suddenly everywhere—in our search engines, our email, our interactions with online customer service, and of course in the freely available and increasingly powerful models such as ChatGPT and Gemini.
One survey in March found that over half of Americans are using LLMs regularly, with more than a third using them daily. That same month Sam Altman reported 500 million weekly users of ChatGPT, only to suggest a few weeks later that this number had doubled.
More urgent for educators are studies demonstrating how quickly students have adopted this technology. An August 2024 global survey of college students found that 86% reported using AI in their coursework, with 54% using it at least weekly. A more recent survey puts the share of US students using AI at 90%.
No technology in history has spread as quickly or with as much immediate impact as LLMs. And it is here that any attempt to find a counterpart in the history of technology from which to learn breaks down. The closest we can come is by imagining an alternate timeline in which the automobile was introduced already capable of speeds over 100 miles per hour, in a landscape with few paved roads and no traffic signs, speed limits, driver's tests, or age requirements. Let's call this timeline Earth 2.0.
The Model T of our timeline transformed car ownership by bringing the price of a car down to roughly a year's average pay for a manufacturing worker in 1910. In Earth 2.0, however, young people can get behind the wheel of any car they happen upon. Making matters more daunting for the adults trying to figure out how to manage their transformed world, it is the young people who are doing most of the driving. It is here, less than three years after the kids got the keys to the cars, that the adults, most of whom have never sat behind the wheel themselves, are told it is time to give driving lessons.
Abandoning Earth 2.0 and the already overburdened analogy, we can return to our Earth 1.0 and the situation in which educators find themselves in late 2025: tasked with teaching responsible and productive use of a new technology that runs roughshod over every well-worn path and fundamental principle upon which education has developed over the course of centuries. Every field of study, we are told, must figure this out for itself, setting its own rules of the road and teaching its students to be responsible users of this transformative technology. If you have never used AI before, worry not: there will be a series of workshops in which "fellows" from the companies that make and promote AI will show you what you’ve been missing while you were engaged in atavistic exercises such as grading papers and reading books.
Meanwhile our students have been exploring AI for three years, many carrying with them experience with LLMs from high school. Some have already decided that AI is not for them, for any range of reasons, including concerns about environmental impact, privacy, or LLMs' foundational appropriation of intellectual and creative property. At the other extreme, there are students who have already determined that learning how to write, research, or read without AI's assistance is pointless now that the genie is out of the bottle. Most are somewhere in between, figuring it out day by day, but quite certain that they will be able to do so without our help. There is no way to hide from them how unprepared we are—or how out of sync our pedagogies, expectations, and assessment tools are with the landscape AI has summoned into existence.
Probably quite rightly, students assume they have little to learn from faculty who are themselves just now securing their learner's permits. What they rightly fear they will get from us is the equivalent of the abstinence-only education some of them received in high school, or, at best, the Internet-safety lectures they sat through in middle school, whose vague warnings were repeated so often as to become meaningless. Worse still, they fear the "cool" professor who embraces the moral nihilism that Silicon Valley seeks to normalize, where worrying about such trivialities as theft, cheating, climate change, the exploitation of precarious labor, or their own future job prospects is soooo 20th century. To return one last time to my parallel timeline: if cars are driving so fast as to approach the speed of light, does it really matter if we never learn how to walk?
Meanwhile, accidents are happening all around us, although many of their effects won't be visible for years to come. These include the short-term accidents of LLM “hallucinations” making their way into an essay or homework answer. But they also involve the more consequential loss of opportunities to acquire crucial practical and cognitive skills, skills that will be more necessary than ever if young people are to defend themselves in a world poised to sacrifice liberal democracy—and even, if the Silicon Valley moguls have their way, humanity itself.
Fortunately, we have a cornerstone of education that has dedicated itself for centuries to teaching us how we might define and defend what it means to be human: what Wilhelm Dilthey, writing in the context of the 1880s German research university, termed the Geisteswissenschaften. Often understood today to be synonymous with the academic Humanities, in the context of American higher education such a definition is too narrow. Dilthey's Geisteswissenschaften—literally, the sciences of the spirit or mind—included not only the fields defined as Humanities in the US academy, but also law, psychology, economics, cultural anthropology, and sociology.
There are many who will tell us that the Humanities are already doomed, that it is too late. As Paul Reitter and Chad Wellmon argue in Permanent Crisis: The Humanities in a Disenchanted Age (2021), the Humanities in the modern university were born out of crisis and have periodically reasserted themselves in relationship to crisis. Reitter and Wellmon argue that the Humanities should abandon this perpetual crisis mode and the resulting overblown claims to a role as spiritual, moral, or political savior, focusing instead on the things the humanities does exceptionally well. Among these are close reading, interpretation, historical analysis, and the ability to reflect on human practices, values, and culture—to ask, in effect, "what does it mean to be human?”
Sadly, in the five years since they finished the book, the case that the Humanities are in an actual crisis has become pretty convincing—especially after the inauguration of the current administration and its wholesale assault on higher education in general and the humanities in particular. Any regular reader of Inside Higher Ed has witnessed the growing body count of shuttered Humanities programs. A recent report finds declines from 2017 to 2022 in the number of institutions awarding humanities degrees, ranging from a 4% drop in English programs to 17% in religion and American studies. Even before the full impact of the pandemic was realized, the foreign languages were losing programs at an alarming rate, with an MLA report finding 961 programs, or 8.2%, lost between 2016 and 2021.
So, yeah, we have a crisis on our hands. That doesn't mean the solution is to drum up the old jeremiads and promises of redemption that were the stock-in-trade of generations past (although, to be honest, I would welcome a bit of that if only to reassure me that my colleagues have not, as often appears to be the case, given up entirely). Like Reitter and Wellmon, I believe we can focus on what it is we do, and for the first time perhaps ever I believe we are approaching a window in which what the Humanities has to offer can make its own case clearly and convincingly. Unlike them, however, I think we can and should frame those more realistic claims on our own behalf in the most apocalyptic terms. Because that window of opportunity will be brief, and even our most modest claims on behalf of what we have to offer could well represent the difference between remaining human agents and becoming minders, feeding our humanity into the limitless demands of the machine.
Next: Reclaiming our Humanities