Does quantum physics prove that the world is just a simulation?

All right, all right… Sounds a bit religious, doesn’t it?

Well…

Have You ever written any simulation software? If You have, You know that the whole idea of simulation revolves around a state vector and differential equations.

For example, if we take a single particle moving in space, we can define its primary state as:

       [ x ]
  V =  [ y ]
       [ z ]

where x,y,z are coordinates in space and V represents the state vector.

This state vector is obviously not complete. It just says where the particle is, but not how fast it goes. So let us extend it a bit:

       [ x  ]
       [ y  ]
       [ z  ]    [ P  ]
  V =  [ x' ] =  [ P' ]
       [ y' ]
       [ z' ]

where

        dx
  x' = ----- 
        dt

that is, the x' symbol denotes the first-order derivative of the coordinate x with respect to time, which is a very complicated way of saying: “speed”.

And by the way, P is the position vector made of the x, y, z components.

Is this state vector complete?

Of course not. We say where the particle is and how fast it goes, but we are not saying whether it is accelerating. We need more derivatives.

In theory we could go on ad infinitum, adding second, third and higher derivatives, but in practice we just bind them together in the form of additional state variables and differential equations.

For example we say that:

   P'' = F / m
   P'' = dP'/dt = d²P/dt²

which means: “acceleration equals force divided by mass” and “acceleration is the first derivative of velocity with respect to time”, or equivalently “the second derivative of position with respect to time”.

Solving differential equations

The method of solving differential equations numerically is quite simple. The entire idea is to use a scheme like the one below:

   
   P''(t) = F(t) / m
   P'(t)  = P'(t-Δt) + P''(t)*Δt
   P(t)   = P(t-Δt)  + P'(t)*Δt

Mathematically speaking, what we do is integration over time. In this example I used the simplest method, “square integration” (essentially the Euler method), which assumes that all parameters stay fixed during the integration time step Δt.

In non-mathematical terms: we take the force and compute the acceleration. Knowing the acceleration, the previous velocity and the integration time step Δt, we calculate a new velocity. And knowing the velocity, the previous position and the integration time step Δt, we calculate a new position.
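
As a minimal sketch of this loop (Python, with a made-up constant-gravity force and arbitrary parameters), one step follows another like this:

  import numpy as np

  def simulate(force, m, p0, v0, dt, steps):
      """Integrate P'' = F/m with the simplest fixed-step scheme described above."""
      p, v = np.asarray(p0, dtype=float), np.asarray(v0, dtype=float)
      trajectory = [p.copy()]
      for step in range(steps):
          t = step * dt
          a = force(t, p, v) / m      # P''(t) = F(t) / m
          v = v + a * dt              # P'(t)  = P'(t-Δt) + P''(t)*Δt
          p = p + v * dt              # P(t)   = P(t-Δt)  + P'(t)*Δt
          trajectory.append(p.copy())
      return np.array(trajectory)

  # Example: a thrown particle under constant gravity along z.
  gravity = lambda t, p, v: np.array([0.0, 0.0, -9.81])
  path = simulate(gravity, m=1.0, p0=[0, 0, 0], v0=[5, 0, 10], dt=0.01, steps=500)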

Integration time step Δt

And this is the source of all problems. If the integration time step Δt is small, the calculations are accurate, but they require a lot of computing power. If the time step Δt is too large, we will observe bizarre effects.

For example, “tunneling”.

Bizarre simulated tunneling

Now imagine the particle described above moving at a constant speed straight into a wall:

As You can see, it just hit the wall as expected. To be exact, it did not merely hit the wall: at a certain time step it actually existed inside the wall, which is pretty much the definition of hitting it, right?

But what would have happened if we had made the integration time step Δt significantly larger?

It would just pass through the wall as if the wall did not exist. To be precise, at a certain time step the particle existed in front of the wall, and at the next time step it existed behind the wall, but never inside it.

Why? Because the integration time step Δt was too large.
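
Here is a hedged sketch of that effect in Python (all the numbers are arbitrary): the same constant-speed loop run once with a small Δt and once with a large one, against a thin wall.

  def observed_inside_wall(v, dt, steps, wall=(10.02, 10.08)):
      """Return True if the particle is ever seen inside the wall at some time step."""
      x = 0.0
      for _ in range(steps):
          x += v * dt                       # constant speed, so no force term needed
          if wall[0] <= x <= wall[1]:
              return True
      return False

  print(observed_inside_wall(v=5.0, dt=0.01, steps=1000))  # True:  the wall is hit
  print(observed_inside_wall(v=5.0, dt=0.10, steps=100))   # False: the particle "tunnels"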

You simply simulated it wrong!

Of course I did.

The most elementary method of fixing it is to assume that there is a certain distance quantum Δx, that is, a minimum wall thickness, and to automatically make Δt smaller whenever any of the simulated particles moves fast enough to cover more than Δx during the time Δt.

Notice that I specifically emphasized the word any. If any particle moves more than Δx, we need to roll back the time by Δt, guess a new Δt₂ < Δt and redo the simulation step. Messy, but this is what most simulation software must do, and this is where it gets stuck with the “time quanta too small” error message.
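
A minimal sketch of that Δx guard (Python; 1D particles as plain dicts, names hypothetical). Instead of literally stepping and undoing, this version halves Δt up front until even the fastest particle stays within Δx:

  def adaptive_step(particles, dt, dx_max, min_dt=1e-9):
      """Advance 1D constant-velocity particles by one step, shrinking dt until
      no particle travels more than dx_max; returns the dt actually used."""
      fastest = max(abs(p["v"]) for p in particles)
      while fastest * dt > dx_max:          # any particle too fast for this dt?
          dt /= 2                           # guess a smaller dt and try again
          if dt < min_dt:
              raise RuntimeError("time quanta too small")
      for p in particles:
          p["x"] += p["v"] * dt
      return dt

  particles = [{"x": 0.0, "v": 2.0}, {"x": 1.0, "v": 50.0}]
  dt_used = adaptive_step(particles, dt=0.1, dx_max=0.5)   # shrunk because of the fast one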

Alternatively, we could use travel-path collision detection to check whether the particle hit the wall:

In this approach the particle “exists” along the entire path it travels during the simulation step. This is a correct solution, but imagine how much complexity it adds to the computations!

In the first approach we just needed to check whether the center of the particle is inside the wall, or no farther than half the particle diameter outside it. Now we have to check whether a 3D cylinder with rounded caps, swept along the path of the particle, collides with the wall. It is at least two orders of magnitude more complex.
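
To make the difference concrete, here is a hedged 1D sketch (Python) of the cheap per-step check versus the swept-path check; in 3D the swept version becomes a capsule-versus-geometry intersection test, which is where the extra orders of magnitude come from.

  def point_hits_wall(x, wall_lo, wall_hi, radius):
      """Cheap check: is the particle's center inside the wall, or within half a
      diameter of it, at this very time step?"""
      return wall_lo - radius <= x <= wall_hi + radius

  def swept_path_hits_wall(x_old, x_new, wall_lo, wall_hi, radius):
      """Swept check: does the whole segment travelled during this step, padded by
      the particle radius, overlap the wall?"""
      lo = min(x_old, x_new) - radius
      hi = max(x_old, x_new) + radius
      return hi >= wall_lo and lo <= wall_hi

  print(point_hits_wall(10.5, 10.02, 10.08, 0.01))            # False: already past the wall
  print(swept_path_hits_wall(9.5, 10.5, 10.02, 10.08, 0.01))  # True:  the path crossed it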

And “tunneling” does exist in the real world

That’s right. That silly simulation effect really does exist. We use it every day in the “tunneling diode” in our electronic equipment, and on a less daily basis in the “tunneling microscope”.

Of course this is not the simulation effect. It follows from Schrödinger’s wave theory. This theory basically says that there is no exact, precise definition of “existence”. Instead, everything that exists is described by certain “waves of probability”. Those waves are sine-like functions which describe how probable it is that a certain interaction will occur at a certain place at a certain time. And since the sole definition of existence is “to be able to interact”, the wave describes whether the particle exists there or not.

Under certain conditions, usually related to high energies and small distances, those functions have values close to zero in some locations and non-zero values in others. For instance: zero inside the wall, non-zero in front of it and non-zero behind it.

Heisenberg uncertainty

The next similar effect is the Heisenberg uncertainty principle.

It basically says that if something moves at an exactly known velocity, it may be anywhere, and if something is at an exactly known place, it may move at any speed.

Of course the “it is” should be taken with huge scare quotes.

The “is there” means it is required to be there for the reaction to take place, and “moves at an exact velocity” means it is required to have an exact energy for the reaction to take place.

For example, if chlorophyll requires the photon to be exactly “green”, then it does not really matter whether the photon hits the chlorophyll molecule dead on or not. The demand for a precisely defined energy makes the chlorophyll molecule effectively bigger. Effectively, because only from the point of view of the incoming photon.

What if the world were a simulation?

…and God had only a crappy low-end CPU to run it on.

What then?

What would we do if we needed to run some model and we were really, really constrained on computing resources?

We would optimize it. Simplify it. But we would always be bound by the physics of what we simulate.

What if the world is just a “game of life”?

But what if we were writing not a physical simulation but just a “game of life”?

Note: The “game of life” was a simple program, a so-called 2D “cellular automaton”, which appeared to behave like a living colony of bacteria. It was made to illustrate how simple rules may create surprisingly complex behaviors.
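
For reference, the whole rule set (birth on exactly three neighbours, survival on two or three) fits in a few lines of Python; the grid size and the starting “glider” below are arbitrary:

  import numpy as np

  def life_step(grid):
      """One generation of Conway's Game of Life on a wrap-around grid."""
      # Count the eight neighbours of every cell by summing shifted copies of the grid.
      neighbours = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                       if (dy, dx) != (0, 0))
      alive = grid == 1
      # Birth on exactly 3 neighbours, survival on 2 or 3 -- nothing else lives.
      return ((neighbours == 3) | (alive & (neighbours == 2))).astype(np.uint8)

  grid = np.zeros((16, 16), dtype=np.uint8)
  grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a "glider"
  for _ in range(20):
      grid = life_step(grid)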

With a “game of life” type of simulation we are not bound by the physics of the simulated world. We define it to our liking.

So if we had to create such a program and we had significant constraints on processing power, we would…

Simplification of physics

…accept Δt tunneling.

In fact what is wrong with it?

The next problem which costs us a lot in terms of computing power is collision detection. Δt tunneling allows us to use the simplest possible algorithms, but we still have to detect intersections of complex shapes, and in many cases this would produce unacceptable artifacts.

But if we introduce a kind of Heisenberg principle, we may easily escape those effects. A fast particle becomes larger, a slow one smaller. That may be all it takes, and it may solve a hell of a lot of simulation inconsistencies without requiring any additional computing power.
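
A sketch of what such a “poor man’s Heisenberg principle” could look like in code (Python; the uncertainty constant and all names below are made up for illustration):

  H_SIM = 0.05   # made-up "uncertainty constant" of our simulated world

  def effective_radius(base_radius, speed, dt):
      """The faster the particle, the larger the region in which it can interact.
      Smearing it over at least the distance covered in one step removes dt-tunneling."""
      return base_radius + max(H_SIM, abs(speed) * dt)

  def may_interact(x, speed, dt, wall_lo, wall_hi, base_radius=0.01):
      """Collision test against a 1D wall using the velocity-dependent radius."""
      r = effective_radius(base_radius, speed, dt)
      return wall_lo - r <= x <= wall_hi + r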

Summary

I think this is enough of this techno-religious mumbling.

We could also bring Schrödinger’s cat into the equation; it could be demonstrated to be just a by-product of “lazy solving on demand”. This technique is used in the simulation of well-isolated clusters, or of the lives of non-player characters (NPCs) in games: neither needs to be computed at all until it is needed. When they are needed, they are computed “on demand”, together with their entire history.

Like Schrödinger’s cat, the NPCs are both dead and alive until they meet The Player.
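
A toy sketch of such “lazy solving on demand” (Python; the class, the seed and the daily survival odds are all invented for illustration):

  import random

  class LazyNPC:
      """An NPC whose fate is not computed until the Player first observes it."""
      def __init__(self, seed):
          self._seed = seed
          self._state = None          # neither dead nor alive yet

      def observe(self, day):
          """On first observation, replay the whole history 'on demand'."""
          if self._state is None:
              rng = random.Random(self._seed)
              alive = True
              for _ in range(day):                         # every day it was never watched
                  alive = alive and rng.random() > 0.001   # small daily chance of dying
              self._state = "alive" if alive else "dead"
          return self._state

  npc = LazyNPC(seed=42)
  # ...thousands of simulated days pass without spending a single CPU cycle on it...
  print(npc.observe(day=3650))   # the history is computed only now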

We could introduce many of such things.

I honestly think that if our world were just a simulation, then quantum physics would be an effect of model optimizations used to conserve computing power.

What do You think?

Simulation in practical engineering

Today I will write about the use of simulation software in the practical process of designing products, processes and algorithms.

Simulation is great!

Yes, simulation is a great tool. Really, really great, and practically one of the very few uses of computers which directly brings a profit.

This is because simulation is cheap.

Simulation is bad

This is also true.

Simulation is just digital modeling. It can’t give You an answer to the question “how do I do it?”. It can only answer the question “if I do it like this, what will the outcome be?”. Simulation does not give You rules which You can directly follow.

Even worse, simulation is prone to numeric instability.

Anyone who has used SPICE to simulate circuits has seen spikes of current and voltage exceeding what is physically possible by a few orders of magnitude. And everyone has seen the dreaded Time step too small message.

So You can’t fully trust Your simulation.

Math is better

A classic algebraic expression is the best possible way of modeling things.

This is because it can be solved for any variable and condition, and can be used to answer not only the question “what would the outcome be?” but also “how do I do it?”.

This is great. If You can have an algebraic model of Your problem, go for it.

Math is useless!

Again this is also true.

In my practice I have found that You either run into math so simple that a schoolkid can solve it without sticking their tongue out too much, or math so complex that solving it symbolically is almost impossible.

Or the result You get is a problem to compute in itself.

The first time it hit me was when I was trying to lower the frequency of computations needed for tracking the location of a simple cart based on the readings of its wheel encoders. Normally You do it by reacting to each tick of an encoder and computing the small motion it resulted in. If done properly it does not accumulate error, and You are just incrementing some fixed-point counters by pre-computed values. The price is that You have to do it on every “tick” of an encoder, which may mean tens of thousands of ticks per second.
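
For illustration, a hedged sketch of that per-tick approach for a two-wheeled cart (Python; the wheel geometry values are placeholders, and the real version described above used pre-computed fixed-point increments rather than floating point):

  import math

  TICK = 0.0002      # metres of travel per encoder tick (placeholder)
  WHEEL_BASE = 0.30  # distance between the wheels in metres (placeholder)

  class Odometry:
      """Differential-drive dead reckoning, updated on every single encoder tick."""
      def __init__(self):
          self.x = self.y = self.heading = 0.0

      def on_tick(self, left_ticks, right_ticks):
          """Usually one of left_ticks/right_ticks is 1 and the other 0."""
          dl, dr = left_ticks * TICK, right_ticks * TICK
          self.heading += (dr - dl) / WHEEL_BASE   # tiny rotation
          d = (dl + dr) / 2.0                      # tiny forward motion
          self.x += d * math.cos(self.heading)
          self.y += d * math.sin(self.heading)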

My idea was to accumulate the ticks of both wheels in hardware over some period, assume a linear acceleration model, and solve the motion equation not on every tick but every 5 ms. It did not look so complex, right?

Well…

I failed to solve it on paper, so I put it into a stolen copy of Mathematica (remember, it was the 1990s, so in Poland it was legal) and… I failed to understand the answer. The solved equation contained one of those “non-algebraic symbols”, like the gamma function, which has to be computed numerically.

So in this case math gave a solution, but the solution was useless.

I have been confronted with similar problems many, many times since then.

Math is a good approximation

So what is the solution to it?

You must do what the physicists who wrote Your physics books did: simplify the model.

If You look carefully at Your physics book, You will find that almost every problem solved there is either idealized or simplified. “Assuming that…”, “Given two infinitely long…” and so on, and so on.

Most frequently, for math to give useful symbolic results, the model must be strongly simplified.

In practice there is not much use for such extremely simplified models, except…

Simplified models train Your intuition

Even though a simplified model can’t give You an answer to Your current design problem, it still can help You understand the rules which drive it. Seeing those rules lets You take a great step forward.

For example, if You measure the frequency of randomly incoming pulses, You would normally count all of them over a certain period of time. But if You check the math for this specific kind of problem, You will notice that a certain equation represents the probability density of observing a pulse during a certain observation window. If You read it carefully, You will notice that not only do You not have to count all the pulses, You also do not have to observe the process continuously!

Without understanding this very simplified math model You would not be able to figure that out. With understanding, You may select a path which is an order of magnitude less expensive in hardware and computing power.
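
As a sketch of that idea (Python; it assumes the pulses form a Poisson process, which is what “randomly incoming” usually means): counting pulses only inside a handful of short, randomly placed observation windows still gives a usable rate estimate.

  import random

  def simulate_pulses(rate_hz, total_time_s):
      """Generate Poisson-process pulse times with the given average rate."""
      t, pulses = 0.0, []
      while True:
          t += random.expovariate(rate_hz)
          if t >= total_time_s:
              return pulses
          pulses.append(t)

  def estimate_rate_sampled(pulses, total_time_s, window_s=0.002, n_windows=100):
      """Estimate the pulse rate while observing only n_windows short windows."""
      count = 0
      for _ in range(n_windows):
          start = random.uniform(0.0, total_time_s - window_s)
          count += sum(1 for p in pulses if start <= p < start + window_s)
      return count / (n_windows * window_s)

  pulses = simulate_pulses(rate_hz=1000.0, total_time_s=10.0)
  print(len(pulses) / 10.0)                    # "count everything" estimate
  print(estimate_rate_sampled(pulses, 10.0))   # estimate from ~2% of the observation time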

Simulation is hellishly accurate

Since a solvable algebraic model is too simple, we need to go for a model which is complex but cannot be solved using symbolic operations.

We need numeric modeling.

Two paths of numeric modeling

There are two paths of numeric modeling.

One just takes the devilishly complex math (usually differential equations) and applies numerical methods to solve it.

The second path takes a large number of simplified models and combines them into a complex conglomerate.

Both can give excellent results.

Shit on input, shit on…

And both are prone to the good old saying: “shit on input, shit on output”.

One of my colleagues was struggling with designing an optical path for a certain application. So after a brief period of trial and error, he gave dedicated optics-simulation software a try.

And guess what?

He failed.

Not enough data

Why did he fail?

Because the simulation model was too accurate. You know that light can reflect, refract, be attenuated, scatter in different ways, even split into different colors. The light source may have a certain angular characteristic, emit certain wavelengths with different intensities, and so on, and so on. Plenty of those effects are non-linear, so many coefficients are necessary.

And the only information he had was: “this is the LED You have as a light source” and “this is the polycarbonate You should use to make the lens”.

Feeding the program with only such vague data required tons of guesswork and did not produce any usable results.

It is too precise!

And now we come to the point:

Math is oversimplified, while a numerical solution is only as accurate as the data You put into it.

So in practice many engineers decide not to use simulation at all. If it can’t be trusted and can’t give precise results, then what is the use of it? They have to run experiments.

Experiments and money

Experiments are expensive.

A certain problem I had to solve, or to be precise, something I had to design to meet required characteristics, was such that a single experimental step required about 20 work-hours and involved people on the production floor, in the lab, and an experimenter. Since each person also had other things to do, this 20-work-hour load resulted in my personnel performing one experimental step every two weeks.

Since I am an experienced engineer, I predicted that I would need about ten steps to get an acceptable result. That means 20 weeks. Five months. That is, if there were no special problems.

Simulation and money

Simulation on the other hand involves very different resources:

  1. Input data;
  2. Computing power;
  3. A digital experimenter person.

If You make sure that the experimenter can adjust the models he or she simulates without involving other people (i.e. You teach that person to use the CAD software), then the only problems left are computing power and input data.

Computing power is easy. In my experience I have never encountered a problem which could not be simulated by a decent desktop PC within ~20 hours.

What does it mean?

That digital simulation not only requires fewer resources, which cuts the “work-station synchronization” delays, but also has a much faster turnaround. At worst You will be able to run one experiment each work day, which means that the planned ten runs will take two weeks instead of five months…

If not for the “shit on input”…

Compensating Your simulation

So the only problem which stops us is the lack of accurate input data.

We need to deal with it in stages.

First, we need to prevent the simulation from requiring so much input data we don’t have.

So we need to disable the simulation of all non-primary effects. In the example with the LED lens, You don’t need volumetric dispersion and color splitting.

Then put some constraints on Your design. This time they come not from the technical realm (i.e. “You can’t manufacture that”), but from the computational one. For example, it is easier to polish the surface of the test lens than to give it a known and controlled scattering coefficient.

The third step is to design both the numerical and the physical experiment in the same way. You must be able to run the physical experiment exactly as the digital experiment was run.

Now You are almost ready to excavate Your input data.

Excavating input data

Run both experiments, the digital one and the real one. Compare the results.

The expected result is: “The simulation gave shit.”

This is a good result. Let us take a look at it:

The green curve represents the observed light intensity (the directional emission characteristic of our LED + lens set) in the real-world experiment. The blue line is the same characteristic, in the same experiment, but simulated. The red dotted line is our goal.

The blue and green curves differ so much that if we had tuned the digital experiment to reach the goal, the real experimental result would have been off. We need to bring the simulation into agreement with the real-life experiment.

You can correct it in two ways:

  • by playing with input data;
  • by applying a correcting characteristic.

Roll-back experiment

Playing with the input data is in fact a “roll-back” experiment. You run both the physical and the digital experiment, and You deduce the input data from the difference.

In the LED + lens case the obvious choice is to play with the emission characteristic of the LED.

If You tried to sketch some math for it, it might look like this:

 
 Brightness_at(alpha) = Emission_at(alpha) * simulation_magic_at(alpha)

This is approximately true if there is no lens. If there were a lens, it would be more like:

 
 Brightness_at(alpha) =
     sum(i = -a ... +a) {
        Emission_at(alpha + i) * simulation_magic_at(alpha + i)
     }

which is much harder to “roll back”, requires a trial-and-error approach, and may still give You input data which is correct only for this setup and not for all setups.

This is why it is very important to understand this and to design the “roll-back” experiment correctly. For the LED experiment it would be best not to use any lens at all.
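
In that no-lens case the roll-back is literally a per-angle division (a Python sketch; the arrays below are placeholders standing in for real measured data):

  import numpy as np

  angles = np.linspace(-90, 90, 181)                   # viewing angle in degrees
  measured_brightness = np.cos(np.radians(angles))**2  # placeholder for the physical measurement
  simulation_magic = np.full_like(angles, 0.8)         # placeholder for the simulator's response

  # Roll-back: recover the LED emission characteristic the simulator should be fed with.
  emission = measured_brightness / (simulation_magic + 1e-9)   # epsilon avoids division by zero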

When rolling back is impossible

In many cases rolling back is impossible. This may be due to:

  • an inability to create such an experimental setup which isolates Your input characteristics well enough;
  • an inability to measure, isolate or control effects which are simulated and do influence the result;
  • some significant effects missing from the simulation.

In such a case You can’t roll back to all the necessary input data. But don’t worry, it won’t be such a big problem at all.

All You need to do is apply the “correction” using the simplest possible model:

 
 Brightness_at(alpha) = Emission_at(alpha) * simulation_magic_at(alpha)
 Digital_brightness_at(alpha) = Correction_at(alpha) * Brightness_at(alpha)
 Digital_brightness_at(alpha) == Experimental_brightness_at(alpha)

and find Correction_at(alpha).
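
Finding Correction_at(alpha) is equally mechanical once both curves are sampled on the same angle grid (a Python sketch with placeholder curves):

  import numpy as np

  angles = np.linspace(-90, 90, 181)                        # degrees
  experimental_brightness = np.cos(np.radians(angles))**2   # placeholder for the green curve
  digital_brightness = 0.7 * np.cos(np.radians(angles))**2  # placeholder for the blue curve

  correction = experimental_brightness / (digital_brightness + 1e-9)

  def corrected(digital_result):
      """Apply the correction to any further digital experiment on the same angle grid."""
      return correction * digital_result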

Using corrected simulation

Regardless of whether You “rolled back” Your input or applied a “correction” to the simulation results, You now have a simulation which, in this specific experiment, agrees with reality.

Of course, applying the same simulation to another experimental setup is risky.

I dare say that “correction” is much riskier than “rolling back”, because rolled-back data are fed into the input of the simulation and thus are fully processed, while a correction just blindly moves the output. I would always recommend aiming for the “roll-back”. On the other hand, “rolling back” requires a dedicated experiment, so it adds to the cost. It is Your choice.

Since we are not scientists but engineers, this should be enough for us, because regardless of whether the results are fully correct or not, the changes in the results of the digital experiments should reflect what the change would be in real life.

Ten steps forward, one step back

All right, so You have Your inaccurate but compensated simulation. What now?

Use it. Play with it. Put Your intuition to work.

However, once You reach the stage at which You start thinking “I am going the right way, I just need to tune it”, please stop.

Remember, Your simulation is not fully correct. The input data You “rolled back” or the correction coefficients may not be applicable under changed conditions. Do not try to get perfect simulated results! It would just be a waste of time and money.

This is the right place to run the next reference physical experiment.

Run it and compare results.

You will be at the same spot again: the experiment will show that the simulation is incorrect.

Don’t worry, You are closer to Your goal anyway. This just means that You need to apply the next correction. This time definitely no rolling back to the input! A regular correcting function should be enough. Once You find it, all You need is to run some more digital experiments, making just slight changes.

Repeat till success

And repeat.

Most probably You will run close to a hundred digital experiments and just three or four physical ones.

Summary

After reading this blog entry You should know how to efficiently use digital simulation in the design process, even in cases when You lack input data or precise math models.

You should notice that this approach allows You to run so many low-cost experiments that after running them You will have developed quite a good “intuition” about the problem.

You should also notice that running “reference experiments” lets You understand which effects are simulated and which are not, and how to include them in Your “intuition”.

And last but not least, You should have saved tons of money.

Provided that You were not stupid enough to pay $100,000 for stupidly precise simulation software when You had no stupidly precise input data to feed it with.