A team of physicists led by Mir Faizal at the University of British Columbia has demonstrated that the universe cannot be a computer simulation, according to research published in October 2025[1].
The key findings show that reality requires non-algorithmic understanding that cannot be simulated computationally. The researchers used mathematical theorems from Gödel, Tarski, and Chaitin to prove that a complete description of reality cannot be achieved through computation alone[1:1].
The team proposes that physics needs a “Meta Theory of Everything” (MToE) - a non-algorithmic layer above the algorithmic one to determine truth from outside the mathematical system[1:2]. This would help investigate phenomena like the black hole information paradox without violating mathematical rules.
“Any simulation is inherently algorithmic – it must follow programmed rules,” said Faizal. “But since the fundamental level of reality is based on non-algorithmic understanding, the universe cannot be, and could never be, a simulation”[1:3].
Lawrence Krauss, a co-author of the study, explained: “The fundamental laws of physics cannot exist inside space and time; they create it. This signifies that any simulation, which must be utilized within a computational framework, would never fully express the true universe”[2].
The research was published in the Journal of Holography Applications in Physics[1:4].



Disclaimer: I’m an engineering student, not a logician. However, one of my recent hyperfixations led me down the mathematics rabbit hole, specifically into formal logic systems and their languages and semantics. So here’s my understanding of the concepts.
TLDR: Undecidable things in physics aren’t capable of being computed by a system based on finite rules and step-by-step processes. This means no algorithm/simulation could be designed to actually run the universe.
A language consists of the symbols used in a formal system. A system’s syntax is basically the rules by which you can combine those symbols into valid formulas, while a system’s semantics is what determines the meaning behind those formulas. Axioms are formulas that are “universally valid,” meaning they hold true in the system regardless of the values used within them (think of things like the defining properties of logical operators such as AND and NOT).
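To make that concrete, here is a tiny, purely illustrative Python sketch of a propositional language with just AND and NOT. The function names (`is_formula`, `evaluate`, `is_universally_valid`) are mine and have nothing to do with the paper; they just mirror syntax, semantics, and “universally valid” formulas:

```python
from itertools import product

# Language: the symbols of a tiny propositional system.
VARIABLES = {"A", "B"}
CONNECTIVES = {"AND", "NOT"}

# Syntax: rules for building valid formulas out of those symbols.
# A formula is a variable, ("NOT", f), or ("AND", f, g).
def is_formula(f):
    if isinstance(f, str):
        return f in VARIABLES
    if isinstance(f, tuple):
        if len(f) == 2 and f[0] == "NOT":
            return is_formula(f[1])
        if len(f) == 3 and f[0] == "AND":
            return is_formula(f[1]) and is_formula(f[2])
    return False

# Semantics: what a formula *means*, i.e. its truth value under an
# assignment of True/False to the variables.
def evaluate(f, assignment):
    if isinstance(f, str):
        return assignment[f]
    if f[0] == "NOT":
        return not evaluate(f[1], assignment)
    if f[0] == "AND":
        return evaluate(f[1], assignment) and evaluate(f[2], assignment)
    raise ValueError("not a formula")

# "Universally valid" formulas hold under every possible assignment,
# which is the sense of "axiom" described above.
def is_universally_valid(f):
    return all(
        evaluate(f, dict(zip(sorted(VARIABLES), values)))
        for values in product([True, False], repeat=len(VARIABLES))
    )

# NOT (A AND NOT A) holds no matter what A is; A AND B does not.
print(is_universally_valid(("NOT", ("AND", "A", ("NOT", "A")))))  # True
print(is_universally_valid(("AND", "A", "B")))                    # False
```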
Gödel’s incompleteness theorems say that any system powerful enough to express basic arithmetic (addition and multiplication) is incomplete. This means you could write a syntactically valid statement which cannot be proven from the axioms of that system, even if you were to add more axioms.
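For reference, the standard textbook form of the first theorem (my paraphrase, not the paper’s wording):

```latex
% Gödel's first incompleteness theorem (standard modern form, via Rosser's trick).
% Q is Robinson arithmetic: just enough to talk about addition and multiplication.
\[
  \text{If } T \supseteq Q \text{ is consistent and effectively axiomatized, then there is a sentence } G_T
  \text{ with } T \nvdash G_T \text{ and } T \nvdash \neg G_T .
\]
```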
Tarski’s undefinability theorem shows that not only can you write statements which cannot be proven true or false, you also cannot describe the system’s own notion of truth using the system itself. Meaning you can’t really define truth unless you do it from outside the formal language you’re using. (I’m still a little fuzzy on this one)
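The usual statement, as far as I understand it (again my paraphrase):

```latex
% Tarski's undefinability theorem (standard form).
% \ulcorner \varphi \urcorner is the Gödel number (code) of the sentence \varphi.
\[
  \text{There is no arithmetic formula } \mathrm{True}(x) \text{ such that }
  \mathbb{N} \models \mathrm{True}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
  \text{ for every sentence } \varphi .
\]
```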
Information-theoretic incompleteness is new to me, but it seems similar to Gödel’s theorem with a focus on computation: if you have a complex enough system, there are functions that won’t be recursively definable. As in, you can’t just break them down into smaller parts that can be computed and work upwards from there.
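The version I’ve seen attributed to Chaitin (standard statement, not taken from the paper):

```latex
% Chaitin's incompleteness theorem (standard form).
% K(s) is the Kolmogorov complexity of s: the length of the shortest program that outputs s.
\[
  \text{For any consistent, effectively axiomatized } T \text{ there is a constant } L_T \text{ such that}
\]
\[
  T \nvdash K(s) > L_T \quad \text{for every specific string } s,
  \text{ even though } K(s) > L_T \text{ holds for all but finitely many } s .
\]
```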
The paper starts by assuming there is a computational formal system which could describe quantum gravity. For this to be the case, the system has to be expressive enough to define arithmetic.
Because the language of this system can define arithmetic, Gödel’s theorems apply. This means that this system, if it existed, can’t prove that it itself is sound.
I don’t know what it means for the “truth-predicate” of the system to not be definable, but it apparently ties into Chaitin’s work and means that there must exist statements which are undecidable.
Undecidable problems can’t be solved recursively by breaking them into smaller computable steps and building back up. In other words, you can’t build an algorithm that is guaranteed to finish with a yes/no or true/false answer.
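The halting problem is the textbook example. Here is the usual proof-by-contradiction sketched in Python; the `halts` oracle is hypothetical, which is the whole point:

```python
# The classic undecidability example: the halting problem.
# Suppose, for contradiction, someone hands us an algorithm `halts`
# that always answers correctly. (No such function can exist;
# this only shows the shape of the argument.)
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually stops."""
    raise NotImplementedError("cannot exist")

# Build a program that does the opposite of whatever `halts` predicts
# it will do when fed its own source.
def contrarian(program):
    if halts(program, program):
        while True:          # predicted to halt -> loop forever
            pass
    return "done"            # predicted to loop -> halt immediately

# Asking whether contrarian(contrarian) halts contradicts whichever answer
# `halts` gives, so no such algorithm can exist: there is no step-by-step
# procedure guaranteed to settle the question.
```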
All in all this means that no algorithmic theory could actually describe everything. This means you cannot break all of physics down into a finite set of rules that can be used to compute reality. Ergo, we can’t be in a simulation because there are physical phenomena that exist which are impossible to compute.
I appreciate this, but I think arguments that try to prove that we can’t simulate the universe atom-for-atom really miss the point.
If you were simulating a universe, you wouldn’t try to simulate every quark and photon. You would mostly just render detail at the level that humans or the simulated beings inside the simulation can interact with.
To my left is a water bottle. If I open it up, I see water. But I see water, not atoms. You could create a 100% convincing simulation of a water bottle with many orders of magnitude less computation than if you tried to simulate every water molecule inside it.
This is how you would actually simulate a universe. You don’t apply the brute force method of trying to simulate every fundamental particle. You only simulate the macroscopic world, and even then only parts that are necessary. Now, you probably would have some code to model the microscopic and subatomic, but only when necessary. So whenever the Large Hadron Collider is turned on, the simulation boots up the subatomic physics subroutine and models what results an experiment would reveal. Same thing with looking under a microscope. You don’t actually simulate every microbe on earth constantly. You just simulate an image of appropriate microbes whenever someone looks under a microscope.
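Roughly the pattern I mean, as a toy sketch; every name here is invented for illustration, it’s just lazy evaluation plus caching:

```python
import functools

# Toy sketch of "render detail only when observed": the coarse state is
# always available, while expensive fine-grained detail is computed lazily
# and cached the first time someone actually looks.
@functools.lru_cache(maxsize=None)
def subatomic_detail(region):
    # Stand-in for the expensive physics subroutine that only runs when
    # an observer (an LHC run, a microscope) actually probes `region`.
    return f"detailed quantum state of {region}"

def observe(region, *, instrument=None):
    if instrument in ("collider", "microscope"):
        return subatomic_detail(region)       # boot up the detail on demand
    return f"macroscopic view of {region}"    # cheap default rendering

print(observe("water bottle"))                           # coarse, cheap
print(observe("water bottle", instrument="microscope"))  # detail computed now
```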
And your simulation doesn’t even have to be perfect. Did your simulated beings discover they were in a simulation? No problem. Just pause the simulation, restore from an earlier backup, and modify the code so that they won’t discover they’re in a simulation. Patch whatever hole they used to figure out they were being simulated.
If the simulators don’t want you to discover that you are being simulated, then you will never be able to prove you’re in a simulation. You are utterly and completely at their mercy. If anyone ever does discover the simulation, they can simply patch the hole and restore from an earlier backup.
This isn’t about simulating atom by atom. It is just saying that there exist pieces of the universe that can’t be simulated.
If we find undecidable aspects of physics (as we apparently have), then a simulation would have to produce them. But it’s not possible to produce those with any step-by-step program. Ergo, the universe cannot be a simulation.
The use of render optimization tricks has no effect on this.
You can’t even patch it the way you said, by wiping minds, because the patch itself would require doing the undecidable work, which can’t be done by any algorithm.
You can actually add the statement itself as an axiom. The point of the theorem is that no amount of additional axioms (as long as the resulting theory can still be effectively listed) will completely eliminate all unprovable true statements from the theory.
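Concretely (standard observation, my notation):

```latex
% Adding the Gödel sentence as a new axiom just moves the problem one step up.
\[
  T_0 = T, \qquad T_{n+1} = T_n \cup \{ G_{T_n} \}
\]
\[
  \text{Since } T_n \nvdash \neg G_{T_n}, \text{ each } T_{n+1} \text{ is still consistent and effectively axiomatized,}
  \text{ so it has its own unprovable sentence } G_{T_{n+1}} .
\]
```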
Also, it relies on consistency of the formal system, because an inconsistent system can prove anything. In fact, a sufficiently strong formal system can prove its own consistency if and only if it is inconsistent.
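That is the second incompleteness theorem; in its standard form (my paraphrase):

```latex
% Gödel's second incompleteness theorem (standard form).
% Con(T) is the arithmetic sentence asserting "T is consistent".
\[
  \text{If } T \text{ is consistent, effectively axiomatized, and contains enough arithmetic (e.g. } T \supseteq \mathrm{PA}\text{),}
  \text{ then } T \nvdash \mathrm{Con}(T).
\]
\[
  \text{Conversely, an inconsistent } T \text{ proves everything, including } \mathrm{Con}(T).
\]
```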
In fact, any function that grows fast enough (faster than every computable function) will be non-recursive. And the same applies to various similar definitions, resulting in the fast-growing hierarchy.
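The Busy Beaver function is the usual concrete example (standard fact, my wording, nothing from the paper):

```latex
% BB(n) = the maximum number of steps any halting n-state Turing machine
% takes when started on a blank tape.
\[
  \text{If } f(n) \ge BB(n) \text{ for every } n, \text{ then } f \text{ is not computable:}
\]
\[
  \text{a computable such } f \text{ would decide halting, by running any } n\text{-state machine for } f(n) \text{ steps.}
\]
```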
It should be noted that it doesn’t rule out analog simulations.