Pascal’s Simulated Basilisk


What if Pascal’s Wager, Roko’s Basilisk and the Simulation Hypothesis had a baby?

Warning – This article may be considered an information hazard

Primer:

Roko’s Basilisk is a thought experiment whose consequences have allegedly driven people mad, or even to suicide. I have never taken it particularly seriously, but here are the basics:

What if, at some point in the future, there is an artificial superintelligence? Once it gains power, it begins to audit humanity’s efforts to bring it about in the first place. Those found not to have done everything in their power to bring the artificial superintelligence into being are then tortured for eternity, using means available only to a superintelligence and so awful that our puny human intellects cannot even begin to imagine them.

That’s Roko’s Basilisk in a nutshell.

Pascal’s Wager – If you are an atheist, and there is no god, then what is the point in believing in one? Well, if it turns out you are wrong, the consequence of not believing is eternal damnation in the fires of hell, while the upside of believing is living in Heaven at the right hand of Jesus for all eternity. So why wouldn’t one choose to believe, just in case?
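As a toy illustration (the numbers here are mine, not Pascal’s, and deliberately crude), the wager is really just an expected-value calculation in which any non-zero chance of an infinite payoff swamps every finite cost:

```python
# A toy expected-value sketch of Pascal's Wager.
# All numbers are invented for illustration; Pascal's point is that
# any non-zero probability of an infinite payoff dominates any
# finite cost of belief.

P_GOD = 1e-6              # any non-zero probability will do
HEAVEN = float("inf")     # infinite reward for correct belief
HELL = float("-inf")      # infinite punishment for disbelief
COST_OF_BELIEF = -10.0    # finite lifestyle cost of believing

ev_believe = P_GOD * HEAVEN + (1 - P_GOD) * COST_OF_BELIEF
ev_disbelieve = P_GOD * HELL + (1 - P_GOD) * 0.0

print(ev_believe)     # inf  -> believing dominates
print(ev_disbelieve)  # -inf -> disbelief is dominated
```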

Simulation Hypothesis – Some scientists argue that at some point in the future we will be able to simulate a human mind, and complex physical interactions, in such detail that we could simulate every human brain and the entire universe at the same time.

Given the exponential rate at which computing power improves, there should come a point at which every human on Earth (or beyond) can run their own simulation, or indeed multiple simulations to explore different outcomes.
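To put a rough number on “exponential” (this assumes a Moore’s-law-style doubling every two years, as an illustration rather than a prediction):

```python
# Rough sketch of exponential growth in computing power,
# assuming a Moore's-law-style doubling every two years.
# The doubling period is an assumption for illustration only.

def compute_multiplier(years: float, doubling_period: float = 2.0) -> float:
    """How many times more computing power we have after `years`."""
    return 2.0 ** (years / doubling_period)

for years in (10, 30, 60):
    print(f"{years} years -> x{compute_multiplier(years):,.0f}")

# 10 years -> x32
# 30 years -> x32,768
# 60 years -> x1,073,741,824
```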

There are so many uses for these simulations that it seems likely that, as soon as we are capable of running them, we will run as many as we can.

Let’s have a quick think about why we might run them.

Let’s look at ancestor simulations – Why wouldn’t we want to learn the secrets of ancient Egypt, or what really happened the day Julius Caesar was murdered?

Given the correct inputs at a particular point in time, the simulation might be run backwards to recover those secrets, if it’s detailed enough. If the results of the simulation don’t quite match the known facts, we might try multiple variations on the inputs. There are so many possible variables that we might run hundreds of thousands of simulations, each with different settings, to discover exactly what Caesar said to Brutus. “Brutus, you treacherous snake!” perhaps. Or maybe the famous “Et tu, Brute?”. The point is that we could tweak as many variables as we need until the simulation converges on what actually happened.

Perhaps we will run these simulations as a form of entertainment, much like watching a TV show. Assuming that brain-machine interfaces become advanced enough, we may even decide to insert ourselves into our simulations. “What if I were the grand high supreme ruler of the universe for a thousand years?” would no longer be an unknowable question, but simply a matter of plugging the correct variables into the simulation and jacking in.

Archaeologists would certainly love to have access to ancestor simulations, but what about other people?

Let’s think: when humans have lifespans of thousands of years, would you fire up a simulation to find that pair of earrings you lost in 1999? Perhaps a class of 15-year-olds, given the homework assignment “Why Didn’t Humanity Act to Curb Climate Change in 2018 When It Knew It Was Dangerous?”, hurriedly plugs in the variables to create several different simulations on their phones (or whatever technology we have by the time simulations reach the mainstream).

Given the likely ubiquity of simulations in the future, simulated minds would vastly outnumber original, biological ones, so it seems inconceivable that we, humans, are not currently living inside a simulation.

To make matters worse, there is arguably evidence that we may be living in a simulation: seemingly artificial limits in physics that would constrain the amount of data needed to run it, and quantum phenomena like the double-slit experiment, which might be a very clever way of reducing that data requirement by not resolving detail until it is observed.

So how do Roko’s Basilisk and Pascal’s Wager tie in to this?

Well, I’m an atheist, and I don’t believe in anything supernatural. However, last night I was struck by a terrible thought…

I imagine that when we die we simply meet oblivion. There is nothing, just a black, shapeless void of which we have no conscious understanding; for us, the world simply stops. My current framework for understanding the world explains this perfectly clearly.

If consciousness is a physical property of the way our brains are wired up, and nothing more, then when we die those connections break down, the conscious mind stops working, and there is nothing left to have any experience of itself.

That, to me, is easily understood, and I have absolutely no suspicion that a god or a Jesus or a devil-type figure will take over at that point. The whole edifice of religion seems utterly pointless.

But!!

If we introduce the simulation hypothesis, things start to go off the rails, because everything depends on who is running the simulation and for what purpose. Most of us, as children, go through a phase of wondering whether anybody else in the world has conscious experience, or whether the whole thing is there just for our own amusement. Well, what if that were true?

What if some entity in the future decided to run a simulation for each and every person to see whether they made the same moral choices every time, or what the limits of their morality were, or where they would break down and start to do dishonest or immoral things?

And what if the price of failure was a new simulation of eternal damnation, and the price of success was to end the simulation permanently, or to be moved to a new simulation with better rules, or perhaps to another morality test where only the most moral are promoted?

Even more distressing, the clock speed of the simulation could be turned up for “bad people”, meaning that you might have to endure 100,000 subjective years for every second that passes, or perhaps even more. Now imagine being subjected to some sort of medieval torture, stretched on the rack with your sinews cracking and your joints pulled apart one by one, for 100,000 years per second of realtime.
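To get a feel for the arithmetic (using the arbitrary 100,000-to-one ratio above):

```python
# Back-of-the-envelope subjective time under a sped-up simulation
# clock, using the arbitrary 100,000-subjective-years-per-real-second
# ratio from the text.

YEARS_PER_REAL_SECOND = 100_000

def subjective_years(real_seconds: float) -> float:
    return real_seconds * YEARS_PER_REAL_SECOND

print(f"{subjective_years(1):,.0f}")       # 100,000 years in one real second
print(f"{subjective_years(60):,.0f}")      # 6,000,000 years in one real minute
print(f"{subjective_years(86_400):,.0f}")  # 8,640,000,000 years in one real day
```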

How long until the end of the universe? Because that’s how long the Basilisk could make you suffer. You can’t even lose consciousness, because in the virtual environment all of your sensations and bodily responses are controlled by the computer. Perhaps the Basilisk could even dial up your sensations of pain so that every brain cell strains with the effort of experiencing it.

Something like this is encoded in Hinduism and Buddhism, where one is forced to remain on Earth, reborn again and again until one’s karma balances out. That’s a mystic interpretation of what I’ve described above; or, put the other way round, Eastern religions grasped a version of this idea thousands of years ago.

It all depends on the motivation of the entity running the simulation, and from inside the simulation there is no way to tell.

This leads me back to Pascal’s Wager – If there is no god, then what’s the harm in believing? If there is a god, then you have complied and served, so good for you. Pascal’s Simulated Basilisk posits the following:

If we are living in a simulation, the chance of eternal damnation or eternal pleasure is much higher than if we aren’t. If we accept that premise, we should try to “win” the simulation.
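Spelled out the same way as the original wager (every probability here is invented, since from inside the simulation none of them are knowable), the structure is identical: any non-zero chance of the Basilisk makes misbehaving infinitely costly.

```python
# A toy expected-value sketch of Pascal's Simulated Basilisk.
# Every number is invented; the structure of the argument,
# not the values, is the point.

P_SIMULATION = 0.5       # assumed odds that we are simulated
P_MORALITY_TEST = 0.1    # assumed odds the simulator is judging us
ETERNAL_TORMENT = float("-inf")
COST_OF_BEHAVING = -5.0  # finite inconvenience of behaving morally

ev_misbehave = P_SIMULATION * P_MORALITY_TEST * ETERNAL_TORMENT
ev_behave = COST_OF_BEHAVING

print(ev_misbehave)  # -inf -> any non-zero chance of the Basilisk dominates
print(ev_behave)     # -5.0 -> a finite cost, so trying to "win" wins
```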

So the next obvious question is: What does the entity running the simulation want?

Well, I don’t know the answer to that question. But if we examine the world’s existing belief systems, we find some things in common. Whether those commonalities were planted by the simulator, or whether the simulation simply produced them through evolution because they were sensible, I don’t know, but they seem pretty solid. I think the basics are the following:

1) Don’t kill people

2) Treat others as you would like to be treated

That seems enough to be getting on with, but what about other moral questions? Is it immoral to use electricity, given that coal and gas power stations pollute the atmosphere everyone shares?

There are so many levels of morality it’s hard to know where to start. When I eat meat, I know that an animal had to suffer. Does that count? Should I be a vegan? Yet I also understand that humans are adapted (not “designed” – evolution has no designer) to eat meat. Here in the UK our standards of animal welfare are comparatively high, but does that assuage all sin? Would I personally be happy with a high level of welfare if the entire point of my life was to be slaughtered and turned into a hamburger?

What about polyester being an oil product with negative externalities all over the world? Or the fact that living in the UK at a high standard of living means I necessarily enjoy the inherited profits of imperialism and the slave trade?

How do I feel about children mining cobalt in the Democratic Republic of the Congo for my iPhone? How do I feel about entire villages in China being submerged by dam projects built to power the huge industrial centres that make the parts for this very computer I’m writing on?

Perhaps I should kill myself, as that is the only way to ensure that the negative externalities of my existence do not proliferate, but then, how do I stop the funeral home from using formaldehyde and burying it in the ground? And what about the emotional negative externalities I’ll be pushing on my family and friends?

After considering the various impacts, I think I must keep things within reason: give homeless people change, donate to charities, offer to help old ladies across the street, and so on.

But where does it end? I fear the Basilisk’s judgement far more than I fear god’s, because the Basilisk may well be real, and it has access to torments that god is far too dull to dream up.

At this point our brains seem relatively safe, in that we know that when we die the brain decomposes and can no longer be read – but a virtual encoding of a brain could be copied infinitely, with each copy living out its own heaven, or hell, or something in between.

What I’d be really concerned about is cryopreservation – what if, in the future, the Basilisk exists and is able to reconstitute your consciousness from your frozen corpse, then put it through torments so awful that we can’t even begin to imagine them?

For this reason alone – not being able to tell the future – I prefer the comforting blanket of unconscious oblivion. I welcome the void. DO NOT PRESERVE ME. I’d rather have oblivion than something worse than hell for all of eternity.

So it’s over to you – What do you make of Pascal’s Simulated Basilisk?
