The robots who predict the future

February 18, 2026

To be human is, fundamentally, to be a forecaster. Occasionally a pretty good one. Trying to see the future, whether through the lens of past experience or the logic of cause and effect, has helped us hunt, avoid being hunted, plant crops, forge social bonds, and in general survive in a world that does not prioritize our survival. Indeed, as the tools of divination have changed over the centuries, from tea leaves to data sets, our conviction that the future can be known (and therefore controlled) has only grown stronger. 

Today, we are awash in a sea of predictions so vast and unrelenting that most of us barely even register them. As I write this sentence, algorithms on some remote server are busy trying to guess my next word based on those I have already typed. If you’re reading this online, a separate set of algorithms has likely already served you an ad deemed to be one you are most likely to click. (To the die-hards reading this story on paper, congratulations! You have escaped the algorithms … for now.)

So how did all this happen? People’s desire for reliable forecasting is understandable. Still, nobody signed up for an omnipresent, algorithmic oracle mediating every aspect of their life. A trio of new books tries to make sense of our future-focused world—how we got here, and what this change means. Each has its own prescriptions for navigating this new reality, but they all agree on one thing: Predictions are ultimately about power and control.

The Means of Prediction: How AI Really Works (and Who Benefits)
Maximilian Kasy
UNIVERSITY OF CHICAGO PRESS, 2025

In The Means of Prediction: How AI Really Works (and Who Benefits), the Oxford economist Maximilian Kasy explains how most predictions in our lives are based on the statistical analysis of patterns in large, labeled data sets—what’s known in AI circles as supervised learning. Once “trained” on such data sets, algorithms for supervised learning can be presented with all kinds of new information and then deliver their best guess as to some specific future outcome. Will you violate your parole, pay off your mortgage, get promoted if hired, perform well on your college exams, be in your home when it gets bombed? More and more, our lives are shaped (and, yes, occasionally shortened) by a machine’s answer to these questions.
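To make Kasy’s description concrete, here is a minimal, hypothetical sketch of supervised learning in Python (using scikit-learn; the loan data and features are invented for illustration, not drawn from the book):

```python
# Supervised learning in miniature: fit a model to a labeled data set
# of past outcomes, then ask it to guess the outcome for a new case.
# All numbers below are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Past borrowers: [income in $1,000s, years at current job]
X_train = [[35, 1], [80, 6], [22, 0], [60, 4], [45, 2], [95, 10]]
# Labels: 1 = repaid the loan, 0 = defaulted
y_train = [0, 1, 0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # "training" on the labeled examples

# A new applicant the model has never seen
applicant = [[50, 3]]
print(model.predict(applicant))        # best guess: repay (1) or default (0)
print(model.predict_proba(applicant))  # the probabilities behind that guess
```

Everything the model “knows” comes from those six labeled rows, which is precisely why the provenance of the training data matters so much in Kasy’s account.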

If the thought of a ubiquitous, mostly invisible predictive layer secretly grafted onto your life by a bunch of profit-hungry corporations makes you uneasy … well, same here. This arrangement is leading to a crueler, blander, more instrumentalized world, one where life’s possibilities are foreclosed, age-old prejudices are entrenched, and everyone’s brain seems to be actively turning into goo. It’s an outcome, according to Kasy, that was entirely predictable. 

AI adherents might frame those consequences as “unintended,” or mere problems of optimization and alignment. Kasy, on the other hand, argues that they represent the system working as intended. “If an algorithm selecting what you see on social media promotes outrage, thereby maximizing engagement and ad clicks,” he writes, “that’s because promoting outrage is good for profits from ad sales.” The same holds true for an algorithm that nixes job candidates “who are likely to have family-care responsibilities outside the workplace,” and the ones that “screen out people who are likely to develop chronic health problems or disabilities.” What’s good for a company’s bottom line may not be good for your job-hunting prospects or life expectancy.

Where Kasy differs from other critics is that he doesn’t think working to create less biased, more equitable algorithms will fix any of this. Trying to rebalance the scales can’t change the fact that predictive algorithms rely on past data that’s often racist, sexist, and flawed in countless other ways. And, he says, the incentives for profit will always trump attempts to eliminate harm. The only way to counter this is with broad democratic control over what Kasy calls “the means of prediction”: data, computational infrastructure, technical expertise, and energy.  

A little more than half of The Means of Prediction is devoted to explaining how this might be accomplished—through mechanisms including “data trusts” (collective public bodies that make decisions about how to process and use data on behalf of their contributors) and corporate taxing schemes that try to account for the social harm AI inflicts. There’s a lot of economist talk along the way, about how “agents of change” might help achieve “value alignment” in order to “maximize social welfare.” Reasonable, I guess, though a skeptic might point out that Kasy’s rigorous, systematic approach to building new public-serving institutions comes at a time when public trust in institutions has never been lower. Also, there’s the brain goo problem. 

To his credit, Kasy is a realist here. He doesn’t presume that any of these proposals will be easy to implement. Or that it will happen overnight, or even in the near future. The troubling question at the end of his book is: Do we have that kind of time?

Reading Kasy’s blueprint for seizing control of the means of prediction raises another pressing question. How on earth did we reach a point where machine-mediated prediction is more or less inescapable? “Capitalism” might be Marx’s pithy response. Fine, as far as it goes, but that doesn’t explain why the same kinds of algorithms that currently model climate change are for some reason also deciding whether you get a new kidney or I get a car loan.

""
The Irrational Decision: How We Gave Computers the Power to Choose for Us
Benjamin Recht
PRINCETON UNIVERSITY PRESS, 2026

If you ask Benjamin Recht, author of The Irrational Decision: How We Gave Computers the Power to Choose for Us, he’d likely tell you our current predicament has a lot to do with the idea and ideology of decision theory—or what economists call rational choice theory. Recht, a polymathic professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences, prefers the term “mathematical rationality” to describe the narrow, statistical conception that stoked the desire to build computers, informed how they would eventually work, and influenced the kinds of problems they would be good at solving.

This belief system goes all the way back to the Enlightenment, but in Recht’s telling, it truly took hold at the tail end of World War II. Nothing focuses the mind on risk and quick decision-making like war, and the mathematical models that proved especially useful in the fight against the Axis powers convinced a select group of scientists and statisticians that they might also be a logical basis for designing the first computers. Thus was born the idea of a computer as an ideal rational agent, a machine capable of making optimal decisions by quantifying uncertainty and maximizing utility.

Intuition, experience, and judgment gave way, says Recht, to optimization, game theory, and statistical prediction. “The core algorithms developed in this period drive the automated decisions of our modern world, whether it be in managing supply chains, scheduling flight times, or placing advertisements on your social media feeds,” he writes. In this optimization-driven reality, “every life decision is posed as if it were a round at an imaginary casino, and every argument can be reduced to costs and benefits, means and ends.”

Today, mathematical rationality (wearing its human skin) is best represented by the likes of the pollster Nate Silver, the Harvard psychologist Steven Pinker, and an assortment of Silicon Valley oligarchs, says Recht. These are people who fundamentally believe the world would be a better place if more of us adopted their analytic mindset and learned to weigh costs and benefits, estimate risks, and plan optimally. In other words, these are people who believe we should all make decisions like computers. 

It’s a ridiculous idea for multiple reasons, he says. To name just one, it’s not as if humans couldn’t make evidence-based decisions before automation. “Advances in clean water, antibiotics, and public health brought life expectancy from under 40 in the 1850s to 70 by 1950,” Recht writes. “From the late 1800s to the early 1900s, we had world-changing scientific breakthroughs in physics, including new theories of thermodynamics, quantum mechanics, and relativity.” We also managed to build cars and airplanes without a formal system of rationality and somehow came up with societal innovations like modern democracy without optimal decision theory. 

So how might we convince the Pinkers and Silvers of the world that most decisions we face in life are not in fact grist for the unrelenting mill of mathematical rationality? Moreover, how might we demonstrate that (unquantifiable) human intuition, morality, and judgment are better ways of addressing some of the world’s most important and vexing problems?

Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI
Carissa Véliz
DOUBLEDAY, 2026

One might start by reminding the rationalists that any prediction, computational or otherwise, is really just a wish—but one with a powerful tendency to self-fulfill. This idea animates Carissa Véliz’s wonderfully wide-ranging polemic Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI.

A philosopher at the University of Oxford, Véliz sees a prediction as “a magnet that bends reality toward itself.” She writes, “When the force of the magnet is strong enough, the prediction becomes the cause of its becoming true.” 

Take Gordon Moore. While he doesn’t come up in Prophecy, he does figure somewhat prominently in Recht’s history of mathematical rationality. A cofounder of the tech giant Intel, Moore is famous for predicting in 1965 that the number of transistors on an integrated circuit would double every year, a rate he revised in 1975 to the now-familiar every two years. “Moore’s Law” turned out to be true, and remains true today, although it does seem to be running out of steam thanks to the physical size limits of the silicon atom.

One story you can tell yourself about Moore’s Law is that Gordon was just a prescient guy. His now-classic 1965 opinion piece “Cramming More Components onto Integrated Circuits,” for Electronics magazine, simply extrapolated what computing trends might mean for the future of the semiconductor industry. 

Another story—the one I’m guessing Véliz might tell—is that Moore put an informed prediction out into the world, and an entire industry had a collective interest in making it come true. As Recht makes clear, there were and remain obvious financial incentives for companies to make faster and smaller computer chips. And while the industry has likely spent billions of dollars trying to keep Moore’s Law alive, it’s undoubtedly profited even more from it. Moore’s Law was a helluva strong magnet. 

Predictions don’t just have a habit of making themselves come true, says Véliz. They can also distract us from the challenges of the here and now. When an AI boomer promises that artificial general intelligence will be the last problem humanity needs to solve, it not only shapes how we think about AI’s role in our lives; it also shifts our attention away from the very real and very pressing problems of the present day—problems that in many cases AI is causing.

In this sense, the questions around predictions (Who’s making them? Who has the right to make them?) are also fundamentally about power. It’s no accident, Véliz says, that the societies that rely most heavily on prediction are also the ones that tend toward oppression and authoritarianism. Predictions are “veiled prescriptive assertions—they tell us how to act,” she writes. “They are what philosophers call speech acts. When we believe a prediction and act in accordance with it, it’s akin to obeying an order.”

As much as tech companies would like us to believe otherwise, technology is not destiny. Humans make it and choose how to use it … or not use it. Maybe the most appropriate (and human) thing we can do in the face of all the uninvited daily predictions in our lives is to simply defy them. 

Bryan Gardiner is a writer based in Oakland, California.
