All right, all right… Sounds a bit religious, doesn't it?
Well…
Have You ever written any simulation software? If You have, You know that the whole idea of simulation revolves around a state vector and differential equations.
For example, if we take a single particle moving in space, we will define its primary state as:
$$V = \begin{bmatrix} x \\ y \\ z \end{bmatrix}$$

where x, y, z are coordinates in space and V represents the state vector.
This state vector is obviously not complete. It just says where the particle is, but not how fast it goes. So let us extend it a bit:
$$V = \begin{bmatrix} x \\ y \\ z \\ x' \\ y' \\ z' \end{bmatrix} = \begin{bmatrix} P \\ P' \end{bmatrix}$$
where

$$x' = \frac{dx}{dt}$$

that is, the x' symbol denotes the first-order derivative of the coordinate x over time, which is a very complex way to say "speed". And by the way, P is the position vector made of the x, y, z components.
Is this state vector complete?
Of course not. We say where the particle is and how fast it goes, but we do not say whether it is accelerating or not. We need more derivatives.
In theory we could go on ad infinitum, adding second, third and higher-order derivatives, but in practice we just bind them together in the form of additional state variables and differential equations.
For example, we say that:

$$P'' = \frac{F}{m}, \qquad P'' = \frac{dP'}{dt} = \frac{d^2P}{dt^2}$$

which means: "acceleration equals force divided by mass" and "acceleration is the first derivative of velocity over time", or "the second derivative of position over time".
Solving differential equations
The method of solving differential equations is quite simple. The entire idea is to use a scheme like the one below:
$$P''(t) = \frac{F(t)}{m}$$
$$P'(t) = P'(t - \Delta t) + P''(t)\,\Delta t$$
$$P(t) = P(t - \Delta t) + P'(t)\,\Delta t$$
Mathematically speaking, what we do is integration over time. In this example I used the simplest method, "square integration" (essentially the Euler method), which assumes that all parameters stay fixed during the integration time step Δt.
Non-mathematically speaking: we take the force and compute the acceleration. Knowing the acceleration, the previous velocity and the integration time step Δt, we calculate a new velocity. And knowing the velocity, the previous position and the integration time step Δt, we calculate a new position.
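To make this concrete, here is a minimal sketch of one such integration step in Python. The names (`Particle`, `step`) and the constant-force example are mine, purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Particle:
    position: float   # x (1D for simplicity)
    velocity: float   # x' = dx/dt
    mass: float

def step(p: Particle, force: float, dt: float) -> None:
    """Advance the state by one time step dt, exactly as in the
    scheme above: acceleration -> new velocity -> new position."""
    acceleration = force / p.mass      # P''(t) = F(t) / m
    p.velocity += acceleration * dt    # P'(t) = P'(t-dt) + P''(t)*dt
    p.position += p.velocity * dt      # P(t)  = P(t-dt) + P'(t)*dt

# Usage: one second of constant unit force on a unit mass.
p = Particle(position=0.0, velocity=0.0, mass=1.0)
for _ in range(100):
    step(p, force=1.0, dt=0.01)
print(p.position, p.velocity)   # ~0.5 and ~1.0, as expected from F = ma
```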
Integration time Δt
And this is the source of all problems. If the integration time step Δt is too small, we get accurate calculations, but they require a lot of computing power. If the time step Δt is too large, we will observe bizarre effects.
For example: "tunneling".
Bizarre simulated tunneling
Now imagine the particle described above moving at a constant speed right into a wall:
As You can see, it just hit the wall, as expected. To be exact, it did not just hit the wall: at a certain moment in time it existed inside the wall. Which is the definition of hitting it, right?
But what would have happened if we had made the integration time step Δt significantly larger?
It just passed through the wall as if the wall did not exist. To be precise, at a certain time the particle existed in front of the wall, and in the next time quantum it existed behind the wall, but it never existed inside it.
Why? Because the integration time step Δt was too large.
You simply simulated it wrong!
Of course I did.
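Here is a tiny Python sketch reproducing the effect; the wall coordinates and speeds are made-up numbers, purely for illustration:

```python
def tunnels(x0: float, v: float, wall_left: float, wall_right: float,
            dt: float, steps: int) -> bool:
    """True if the particle is never observed inside the wall,
    even though its path clearly crosses it."""
    x = x0
    for _ in range(steps):
        x += v * dt                        # constant speed, no forces
        if wall_left <= x <= wall_right:
            return False                   # hit: it exists inside the wall
    return True

# Wall from x=10.2 to x=10.7; particle moving at 100 units/s.
print(tunnels(0.0, 100.0, 10.2, 10.7, dt=0.001, steps=200))  # False: hit
print(tunnels(0.0, 100.0, 10.2, 10.7, dt=0.01, steps=20))    # True: tunneled
```

With dt=0.001 each step moves the particle 0.1 units, so several consecutive positions fall inside the 0.5-unit wall. With dt=0.01 each step moves it a full unit and it jumps straight over.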
The most elementary method of fixing it is to assume that there is a certain distance quantum Δx, that is, a minimum wall thickness, and to automatically adjust Δt to be smaller if any of the simulated particles moves fast enough to cover more than Δx during the time Δt.
Notice that I specifically underlined the word any. If any particle moves more than Δx, we need to roll back time by Δt, guess a new Δt₂ < Δt and redo the simulation step. Messy, but this is what most simulation software must do, and this is where it gets stuck with a "time quantum too small" error message.
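A minimal sketch of that rollback loop, again in Python and with invented names (plain lists for positions and velocities), might look like this:

```python
MIN_DT = 1e-9   # below this we give up, like real engines do

def adaptive_step(positions: list, velocities: list,
                  dt: float, dx: float) -> float:
    """Advance all particles by one step. If ANY particle would move
    farther than dx, shrink dt and redo the whole step."""
    while True:
        moves = [v * dt for v in velocities]
        if all(abs(m) <= dx for m in moves):
            for i, m in enumerate(moves):
                positions[i] += m          # commit the step
            return dt                      # the dt that actually worked
        dt /= 2                            # guess a smaller step, roll back
        if dt < MIN_DT:
            raise RuntimeError("time quantum too small")
```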
Alternatively, we could use travel-path collision detection to check whether the particle hit the wall:
In this approach the particle "exists" along the entire path it travels during the simulation step. This is the correct solution, but imagine how much complexity it adds to the computations!
In the first approach we just needed to check whether the center of the particle is inside the wall, or no farther than half the particle diameter outside of it. Now we have to check whether a 3D cylinder with rounded caps (a capsule), described by the path of the particle, collides with the wall. It is at least two orders of magnitude more complex.
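In 1D the difference between the two checks is easy to see. A rough sketch, using the same made-up wall as before:

```python
def point_hit(x: float, wall_left: float, wall_right: float,
              r: float) -> bool:
    """First approach: is the particle (radius r) overlapping
    the wall right now?"""
    return wall_left - r <= x <= wall_right + r

def swept_hit(x_old: float, x_new: float,
              wall_left: float, wall_right: float, r: float) -> bool:
    """Travel-path approach: does the whole segment covered during
    the step overlap the wall? In 3D this segment becomes the
    round-capped cylinder from the text."""
    lo = min(x_old, x_new) - r
    hi = max(x_old, x_new) + r
    return hi >= wall_left and lo <= wall_right

# The fast particle jumps from x=10.0 to x=11.0 in one step:
print(point_hit(11.0, 10.2, 10.7, r=0.05))        # False: tunneled
print(swept_hit(10.0, 11.0, 10.2, 10.7, r=0.05))  # True: collision caught
```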
And "tunneling" does exist in the real world
That's right. That silly simulation effect really does exist. We use it every day in the "tunneling diode" in our electronic equipment, and on a less daily basis in the "tunneling microscope".
Of course this is not a simulation effect. It is due to Schrödinger's wave theory. This theory basically says that there is no exact, precise definition of "existence". In fact, everything that exists is described by certain "waves of probability". Those waves are sine-like equations which describe how probable it is that a certain interaction will occur at a certain place at a certain time. And since the sole definition of existence is "to be able to interact", they describe whether the particle exists there or not.
Under certain conditions, usually involving large energies and small distances, those equations have values close to zero in some locations and non-zero values in others: zero inside the wall, non-zero in front of it and non-zero behind it.
Heisenberg uncertainty
The next similar effect is the Heisenberg uncertainty principle.
It basically says that if something moves at an exact velocity it may be anywhere, and if something is at an exact place it may move at any speed.
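In its textbook form the principle is written as:

$$\Delta x \,\Delta p \ge \frac{\hbar}{2}$$

where Δx is the uncertainty in position, Δp is the uncertainty in momentum (mass times velocity), and ħ is the reduced Planck constant: the more precisely one is pinned down, the less precisely the other can be.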
Of course, the "it is" here should be taken with huge double quotes.
"Is there" means it is required to be there for the reaction to take place, and "moves at an exact velocity" means it is required to have an exact energy for the reaction to take place.
For example, if chlorophyll requires the photon to be exactly "green", then it doesn't really matter whether the photon hits the chlorophyll or not. The required accuracy of the perfect energy makes the chlorophyll particle virtually bigger. Virtually, because only from the point of view of the incoming photon.
What if the world were a simulation?
…and God had a crappy low-end CPU to run it on.
What then?
What would we do if we needed to run some model and were really, really constrained on computing resources?
We would optimize it. Simplify it. But we would always be bound by the physics of what we simulate.
What if the world is just a "game of life"?
But what if we were writing not a physical simulation, but just a "game of life"?
Note: The "Game of Life" was a simple program, a so-called 2D "cellular automaton", which appeared to behave like a living colony of bacteria. It was made to illustrate how simple rules may create enormously complex behaviors.
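For reference, the entire rule set fits in a few lines of Python; this is a standard textbook implementation, not tied to any particular library:

```python
from collections import Counter

def life_step(alive: set) -> set:
    """One generation of Conway's Game of Life on an unbounded grid.
    A live cell survives with 2 or 3 live neighbours; a dead cell
    with exactly 3 live neighbours is born. That is the whole physics."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in alive
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in alive)}

# A "glider": five cells that crawl across the grid like an organism.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)
print(sorted(glider))   # the same shape, shifted by (1, 1)
```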
With a "game of life" type of simulation we are not bound by the physics of the simulated world. We define it to our liking.
So if we had to create such a program, and we had significant constraints on processing power, we would…
Simplification of physics
…agree to Δt tunneling.
In fact, what is wrong with it?
The next problem which costs us a lot in terms of computing power is collision detection. Δt tunneling allows us to use the simplest possible algorithms, but we still have to detect intersections of complex shapes, and in many cases it would produce unacceptable artifacts.
But if we introduce a kind of Heisenberg principle, we may easily escape those effects. A fast particle becomes larger, a slow one smaller. That may be all it takes, and it may solve a hell of a lot of simulation inconsistencies without requiring any additional computing power.
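A sketch of what such a "simulation Heisenberg principle" could look like; the inflation rule is my own guess at the idea, not an established algorithm:

```python
def effective_radius(base_r: float, speed: float, dt: float) -> float:
    """The faster the particle, the 'bigger' it is: inflate its
    collision radius by half the distance it covers in one step."""
    return base_r + 0.5 * abs(speed) * dt

def hits_wall(x: float, speed: float, base_r: float, dt: float,
              wall_left: float, wall_right: float) -> bool:
    """Plain cheap point-vs-wall check, but with the inflated radius."""
    r = effective_radius(base_r, speed, dt)
    return wall_left - r <= x <= wall_right + r

# The same fast particle that tunneled before is now caught,
# at the cost of one multiplication instead of a swept-volume test:
print(hits_wall(11.0, speed=100.0, base_r=0.05, dt=0.01,
                wall_left=10.2, wall_right=10.7))   # True
```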
Summary
I think that is enough of this techno-religious mumbling.
We could also bring Schrödinger's cat into the equation; it could be shown to be just a by-product of "lazy solving on demand". This technique is used when simulating well-isolated clusters, or the lives of Non-Player Characters in games; neither needs to be computed at all until it is needed. When they are needed, they are computed "on demand", together with their entire history.
Like Schrödinger's cat, NPCs are both dead and alive until they meet The Player.
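A toy sketch of the idea; the `LazyNPC` class and the survival odds are of course invented:

```python
import random

class LazyNPC:
    """An NPC whose fate is not computed until The Player looks.
    Until then it is, as far as the simulation cares, both dead
    and alive, and it costs zero CPU."""
    def __init__(self, name: str):
        self.name = name
        self._fate = None                        # unresolved "superposition"

    def observe(self, elapsed_days: int) -> str:
        if self._fate is None:                   # resolve on demand,
            survived = all(random.random() > 0.01  # entire history at once
                           for _ in range(elapsed_days))
            self._fate = "alive" if survived else "dead"
        return self._fate

npc = LazyNPC("innkeeper")
# ...The Player adventures elsewhere for a year...
print(npc.observe(elapsed_days=365))             # fate decided only now
```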
We could introduce many such things.
I honestly think that if our world were just a simulation, then quantum physics would be an effect of model optimizations used to conserve computing power.
What do You think?