Originally published by Scientific American on 2020-08-10: https://www.scientificamerican.com/article/could-we-force-the-universe-to-crash/
The proposition that the world is a sham is not new; it has been cropping up for thousands of years across different cultures, from China to ancient Greece, and later in thinkers like Descartes with his mind-body dualism. But this more recent version, based on computation (or at least artificial reconstruction), bubbled up in 2003 with the publication of a paper titled "Are You Living in a Computer Simulation?" by the philosopher Nick Bostrom. In essence Bostrom argues that if any extremely advanced civilizations develop the capacity to run "ancestor simulations" (to learn about their own pasts), the simulated ancestral entities would likely far outnumber actual sentient entities in the universe. With a little probabilistic hand-waving it is then possible to argue that we are most likely simulated.
All of which is good fun if you’ve had a few beers or spent a few too many hours cowering under your bedclothes. But while you might love or hate this hypothesis, the simple fact is that before judging it we should really apply the criteria we use for assessing any hypothesis, and the first step in that process is to ask whether it can be assessed in any reasonable way.
Intriguingly, the simulation hypothesis might be testable, under certain assumptions. For example, we might suppose that a simulation has its limitations. The most obvious one, extrapolating from the current state of digital computation, is simply that a simulation will have to make approximations to save on information storage and calculation overheads. In other words: it would have limits on accuracy and precision.
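By analogy, the digital simulations we already run make exactly this trade: standard 64-bit floating-point numbers discretize the real number line to save storage, and the rounding shows up in even the simplest arithmetic. A quick illustration in Python (not anything specific to the hypothesis, just the familiar precision ceiling of ordinary computation):

```python
import sys

# 0.1 and 0.2 have no exact binary representation, so their sum drifts.
a = 0.1 + 0.2
print(a == 0.3)   # False
print(a)          # 0.30000000000000004

# The grid spacing itself is measurable: the smallest representable
# gap just above 1.0 for a 64-bit float.
print(sys.float_info.epsilon)  # 2.220446049250313e-16
```

Any simulated physics built on finite-precision numbers would carry a floor like this somewhere, however small.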
One way that those limits could manifest themselves is in the discretization of the world, perhaps showing up in spatial and temporal resolution barriers. Although we do think that there are some absolute limits on what constitutes meaningful small distances or time intervals (the Planck scale and Planck time), those have to do with the limits of our current understanding of physics rather than the kind of resolution limits on your pixelated screen. Nonetheless, recent research suggests that the true limit of meaningful intervals of time might be orders of magnitude larger than the traditional Planck time (which itself is about 10^-43 seconds). Perhaps future physics experiments could reveal an unexpected chunkiness to time and space.
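For scale, the traditional Planck time quoted above follows directly from three fundamental constants, t_P = sqrt(ħG/c⁵). A quick check in Python (constant values from the standard CODATA recommendations):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
c    = 2.99792458e8      # speed of light, m/s

# Planck time: the timescale where quantum gravity effects should dominate.
t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.3e} s")   # ~5.391e-44 s, i.e. on the order of 10^-43 s
```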
But the neatest test of the hypothesis would be to crash the system that runs our simulation. Naturally, that sounds a bit ill-advised, but if we're all virtual entities anyway, does it really matter? Presumably a quick reboot and restore might bring us back online as if nothing had happened, but possibly we'd be able to tell, or at the very least have a few microseconds of triumph just before it all shuts down.
The question is: how do you bring down a simulation of reality from inside it? The most obvious strategy would be to try to cause the equivalent of a stack overflow (exhausting the memory a program sets aside to keep track of its nested procedure calls) by creating an infinitely, or at least excessively, recursive process. And the way to do that would be to build our own simulated realities, designed so that within those virtual worlds are entities creating their version of a simulated reality, which is in turn doing the same, and so on all the way down the rabbit hole. If all of this worked, the universe as we know it might crash, revealing itself as a mirage just as we winked out of existence.
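The attack can be sketched in a few lines of toy code. Here each "universe" boots a simulation of its own with no base case, so the nesting is unbounded; in Python the host's guard rail (the recursion limit) trips long before physical memory runs out, but the principle is the same:

```python
import sys

def nested_simulation(depth):
    """Each simulated world immediately boots a simulation of its own."""
    return nested_simulation(depth + 1)

sys.setrecursionlimit(5000)  # a modest "memory budget" for the host machine
try:
    nested_simulation(0)
except RecursionError:
    print("host ran out of stack -- the tower of simulations crashed")
```

Real stack exhaustion (for instance in C, with no interpreter guard) ends less gracefully: the process simply dies, which is rather the point of the thought experiment.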
You could argue that any species capable of simulating a reality (likely similar to its own) would surely anticipate this eventuality and build in some safeguards to prevent it happening. For instance, we might discover that it is strangely and inexplicably impossible to actually make simulated universes of our own, no matter how powerful our computational systems are—whether generalized quantum computers or otherwise. That in itself could be a sign that we already exist inside a simulation. Of course, the original programmers might have anticipated that scenario too and found some way to trick us, perhaps just streaming us information from other simulation runs rather than letting us run our own.
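Such a safeguard is trivial to sketch: the host just caps the nesting depth, so simulations beyond a certain level never actually run. A toy illustration (the cap and its value are purely hypothetical, not a claim about how a real host would implement it):

```python
MAX_NESTING = 3  # hypothetical budget imposed by the host civilization

def nested_simulation(depth=0):
    if depth >= MAX_NESTING:
        # From inside, this looks like an inexplicable law of nature:
        # simulations past this level simply refuse to boot.
        return "simulation failed to start"
    return nested_simulation(depth + 1)

print(nested_simulation())  # every branch now bottoms out safely
```

From within any level of the stack, the cap would be indistinguishable from a fundamental physical limit, which is exactly the "strangely and inexplicably impossible" signature described above.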
But interventions like this risk undermining the reason for a species running such simulations in the first place, which would be to learn something deep about their own nature. Perhaps letting it all crash is simply the price to pay for the integrity of the results. Or perhaps they’re simply running the simulation containing us to find out whether they themselves are within a fake reality.