Assuming our simulation is not designed to auto-scale (and our Admins don’t know how to download more RAM), what kind of side effects could we see in the world if the underlying system hosting our simulation began running out of resources?

  • social2@social2.williamyam.com · 10 months ago

    This is a tricky question to answer. Answering it requires assumptions about how perspectives emerge from computation (if they do at all), a theory of time, an interpretation of quantum mechanics, and the persistence of identity.

    Of course, we can start with the simplest possible interpretation: that we live in a “Matrix”-style simulation, where we actually have real bodies in the “real world”. This sidesteps the question of how to get sentient beings to emerge in a simulation and what that would entail. In this case, running out of RAM would have immediate consequences, since our sense of time in the simulated world would be in 1:1 correspondence with the “real world”. We would experience all the glitches that running out of RAM entails. Imagine taking an Apple Vision Pro and scaling it out; these are your conventional computer glitches. At the point of running out of RAM, you could immediately tell you were in a simulation.
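    As a rough illustration of the kind of degradation meant here, consider a toy sketch (my own, purely hypothetical): a fixed-budget simulation loop that starts dropping detail once it can no longer hold new state. An observer locked 1:1 to wall-clock time would perceive those evictions directly as glitches.

    ```python
    # Hypothetical toy: a simulation whose state grows each tick until it
    # exceeds a fixed RAM budget, at which point it degrades visibly
    # instead of crashing.
    def run_simulation(ticks: int, ram_budget: int) -> None:
        world = []                          # ever-growing simulation state
        for t in range(ticks):
            if len(world) + 1 > ram_budget:
                # Out of RAM: evict half the world -- an in-world "glitch".
                print(f"tick {t}: out of RAM, dropping detail")
                world = world[: ram_budget // 2]
            world.append(f"object-{t}")
            print(f"tick {t}: {len(world)} objects resident")

    run_simulation(ticks=8, ram_budget=5)
    ```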

    Let’s take the next level of interpretation, though. Let’s assume we live in an “OpenAI Sora” type of simulation. In this simulation, the beings as well as the environment are generated on the fly, “randomly”. At this point, I am just assuming that subjective perspectives can emerge just as they do in our world, where they are tied to beings that look very much like ourselves. In this case, the subjective time of the simulated beings is entirely uncorrelated with our own time. In a sense, we are just opening a “window” into another universe, like playing back a movie, but the beings themselves would exist whether or not we stumbled upon their particular sequence of bits. The problem with asking what the beings in this type of simulation would experience becomes obvious when you realize that multiple simulators can run the exact same simulation with exactly the same sequence of bits. The question then becomes: are the two simulations actually equivalent to each other? From the simulated beings’ perspective, they could not tell which simulator is simulating them based on their experience, since each simulator can produce exactly the same bit sequence.
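    To make the indistinguishability point concrete, here is a toy sketch (my illustration, not from the post): two entirely separate “simulators” running the same deterministic program from the same seed produce bit-for-bit identical histories, so nothing within the history itself can reveal which machine produced it.

    ```python
    import random

    # Toy: each "simulator" deterministically generates the same history.
    def simulate_history(seed: int, steps: int) -> list[int]:
        rng = random.Random(seed)
        return [rng.getrandbits(1) for _ in range(steps)]

    simulator_a = simulate_history(seed=42, steps=32)   # machine A
    simulator_b = simulate_history(seed=42, steps=32)   # machine B, elsewhere

    print(simulator_a == simulator_b)   # True: identical bit sequences
    ```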

    Now this comes to the question of self-locating uncertainty: being uncertain about which simulator is simulating your own existence. If there were only two simulators in the “real world” simulating your existence, it would seem most reasonable to assign a 50% probability to being simulated by either one. Then the question of what happens when the simulator runs out of RAM turns into the question of which simulator is running out of RAM. If only one simulator runs out of RAM, then by a naive estimate you would experience only a 50% chance of some sort of “glitch” happening in your world. But of course, we have no way of knowing how many simulators are running this exact sequence of bits; it could very well be infinite. The question then becomes: what is the probability distribution over all such simulators running out of RAM? This question seems impossible to answer from the simulated being’s point of view.
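    The naive arithmetic here can be written out explicitly. A minimal sketch (my own, assuming a uniform prior over simulators):

    ```python
    # Self-locating uncertainty, naive version: with N simulators running
    # this exact bit sequence and no way to tell which one is "yours",
    # assign 1/N credence to each; the chance of experiencing a glitch is
    # the fraction of simulators that are out of RAM.
    def p_glitch(num_simulators: int, num_out_of_ram: int) -> float:
        return num_out_of_ram / num_simulators

    print(p_glitch(2, 1))      # 0.5   -- the two-simulator case above
    print(p_glitch(1000, 1))   # 0.001 -- redundant simulators dilute the risk
    ```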

    I haven’t even touched upon the question of continuity of identity: what happens to your perspective when the simulation “crashes” or is paused? This really comes down to the question of how conscious awareness supervenes on sequences of bits, or how our perspective gets tied to one sequence of events over another. In other words, it is similar in spirit to the question, in the many-worlds interpretation of quantum mechanics, of which branch your particular perspective gets tied to when the universe “splits” into different branches. In many-worlds quantum mechanics, if there is one branch where the simulator runs out of RAM, there is still the possibility of other branches where your perspective continues unabated. You can see, then, that this isn’t really a question about simulations or quantum mechanics per se, but about how consciousness decides what perspective comes next.

    I suspect the answer is already hidden in the data we see. In quantum mechanics there is the notion of “no cloning”: the exact quantum state of a system cannot be cloned, since that would violate the uncertainty principle. I suspect the solution to the problem of running out of RAM lies in the fact that our own conscious perspective cannot be cloned exactly. In other words, our conscious experience as we experience it now might be thought of in the following way. We cannot know what is generating our experience, so we naively assign a probability distribution over all possible generators of our experience, including simulators of our own existence. Some of this probability mass covers situations where our existence just fluctuates out of the vacuum, but this is vanishingly small. Some other probability mass is assigned to situations where our existence continues “normally”. I suspect that the conglomeration of all possible configurations leading to the particular quantum state that specifies our particular perspective is exactly the probability distribution specified by quantum mechanics. That is, the origin of the probability distribution of quantum mechanics lies entirely in the fact that our conscious experience can be generated by various possible simulators of various types, converging onto the fixed-point probability distribution specified by the laws of quantum mechanics.
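    A loose sketch of what “assigning a probability distribution over all possible generators” could look like (my illustration, with entirely made-up generators and priors; nothing here is derived from quantum mechanics):

    ```python
    # Toy Bayesian mixture over hypothetical generators of experience.
    # Priors and likelihoods below are placeholders for illustration only.
    generators = {
        "ordinary-continuation": {"prior": 0.9000, "p_next_bit_is_1": 0.50},
        "simulator-low-on-ram":  {"prior": 0.0999, "p_next_bit_is_1": 0.45},
        "vacuum-fluctuation":    {"prior": 0.0001, "p_next_bit_is_1": 0.50},
    }

    # Predictive probability of the next observation: prior-weighted average.
    p_next = sum(g["prior"] * g["p_next_bit_is_1"] for g in generators.values())
    print(f"P(next bit = 1) = {p_next:.4f}")
    ```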

    In this sense, it becomes obvious why you cannot clone a quantum state: a quantum state is a conglomeration of all possible “classical” sequences that have been simulated to a sufficient degree to be called the same quantum state. In other words, you cannot clone a quantum state because a quantum state is the set of all possible clones that are indistinguishable from each other. Quantum mechanics is the end result of the fact that all possible clones have already been carried out on every sequence of bitstrings.
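    For reference, the standard textbook argument for no-cloning is just a linearity argument, independent of the simulation framing above:

    ```latex
    % Suppose a unitary U could copy any state onto a blank register:
    %   U(|\psi\rangle|0\rangle) = |\psi\rangle|\psi\rangle for every |\psi\rangle.
    % Apply it to a superposition |\phi\rangle = a|0\rangle + b|1\rangle:
    \begin{align}
      U\big((a|0\rangle + b|1\rangle)|0\rangle\big)
        &= a\,|00\rangle + b\,|11\rangle && \text{(by linearity)} \\
      |\phi\rangle|\phi\rangle
        &= a^2|00\rangle + ab\,|01\rangle + ab\,|10\rangle + b^2|11\rangle
    \end{align}
    % The two expressions disagree whenever both a and b are nonzero,
    % so no such U exists.
    ```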

    The question then arises: why does quantum mechanics work with probability amplitudes rather than probability distributions, that is, with complex numbers instead of ordinary real numbers? I suspect this has to do with the fact that quantum mechanics has a certain timeless quality to it, and it is this “time travel” quality that causes the probabilities to be complex-valued rather than real-valued. If we just assigned classical probabilities to every event, we would have statistical mechanics instead of quantum mechanics. But statistical mechanics assumes a single direction of time; I suspect that if you relax the notion of a single-valued time, you get quantum mechanics.
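    The difference between adding probabilities and adding amplitudes can be shown in a few lines (a standard two-path interference toy, my own sketch):

    ```python
    import cmath

    # Two paths with equal magnitude but opposite phase.
    amp_path_1 = cmath.rect(1 / 2 ** 0.5, 0.0)
    amp_path_2 = cmath.rect(1 / 2 ** 0.5, cmath.pi)

    p_classical = abs(amp_path_1) ** 2 + abs(amp_path_2) ** 2  # probabilities add: 1.0
    p_quantum   = abs(amp_path_1 + amp_path_2) ** 2            # amplitudes interfere: ~0.0

    print(f"classical: {p_classical:.2f}")
    print(f"quantum:   {p_quantum:.2f}")
    ```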

    Thus, simulating a reality is akin to building a time machine.