Playing around with GPT2

So lately I’ve been spending a fair amount of time toying with GPT2, which made headlines for producing text so believable that it was considered dangerous (the publicly released GPT2 is the toned-down version).

ML and Reddit

I started by getting hooked on this GPT2 generated subreddit:

https://www.reddit.com/r/SubSimulatorGPT2/

Which I highly recommend everyone read daily, as an exercise in critical thinking and in challenging the natural human bias to trust everything you see. I especially enjoy the bot trained on r/totallynotrobots, which is basically robots pretending to be humans pretending to be robots pretending to be humans.

It wasn’t long before I tried it for myself. I’ve long wanted to download all my social media posts and train some kind of ML on it, and GPT2 seemed like the state of the art.

Torch RNN

Somehow I started out by messing around with Torch RNN, the previous state of the art I guess, made accessible through this tutorial, which gave us such gems as a PBS Idea Channel episode, a genius BuzzFeed skit, and the relatively famous short film Sunspring.

Both Torch RNN and GPT2 are pretty similar in the way they are used (though I believe Torch RNN is built on Torch/Lua while GPT2 runs on TensorFlow under the hood). GPT2 hands you a pre-trained model that already kinda knows English (Torch RNN mostly learns from scratch), and both expect as input a txt file of example lines.

But training took ages on my computer (like a whole night for a couple of iterations) because, despite being fairly powerful, its GPU isn’t supported by the ML frameworks’ training optimizations (sad). I had little hope that anything more sophisticated would be possible on my machine.

Enter Colab

Fortunately, people are sometimes really great: not only did Max Woolf make a wrapper that makes GPT2 easy to use, he also made a Colaboratory notebook that makes it dead simple to run and, most importantly, computationally sustainable, since it runs on a Google Compute Engine VM with some sort of free quota. It has a very nice Google Drive integration that makes it easy to save trained models or upload new training data. With this, you can train a model in less than 1h, which makes it really easy to play with.
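For reference, here’s roughly what the notebook boils down to with his gpt-2-simple wrapper (just a sketch: the file name and the step count are placeholders, and the actual notebook adds the Google Drive syncing on top):

    # Rough sketch of a gpt-2-simple fine-tuning run (pip install gpt-2-simple).
    # "corpus.txt" and the step count are placeholders.
    import gpt_2_simple as gpt2

    gpt2.download_gpt2(model_name="124M")   # small pre-trained GPT2 model

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess,
                  dataset="corpus.txt",     # plain txt file of example lines
                  model_name="124M",
                  steps=1000)               # roughly the "less than 1h" range

    gpt2.generate(sess, length=200, temperature=0.7, nsamples=5)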

Getting data

First of all, it was extremely easy to download all my data from social networks (here I’m talking about Google, Facebook, Tumblr, Discord and WordPress). Everything has a dump-archive function now (courtesy of EU law, I believe?), so that definitely made my life easier. A bit of Python scripting to transform the JSON or XML into txt and we were good to go.
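For the record, the scripting really is nothing fancy; here’s a minimal sketch (the file name and the "posts"/"body" fields are made up, every platform’s dump has its own schema):

    # Minimal sketch of the dump-to-txt conversion. "export.json" and the
    # "posts"/"body" fields are hypothetical: every platform's archive uses
    # its own schema, so adapt the keys accordingly.
    import json

    with open("export.json", encoding="utf-8") as f:
        dump = json.load(f)

    with open("corpus.txt", "w", encoding="utf-8") as out:
        for post in dump["posts"]:
            text = post.get("body", "").strip()
            if text:                        # skip empty posts
                out.write(text + "\n")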

The outcome

I first started the training on the posts of this blog. The outcome was pretty convincing. It felt pretty weird and special to see these lines that read like something I could have written but actually didn’t. It really seemed like another version of me, which of course tickled my philosophy bone.

Obviously the result wasn’t perfect. It often spouts out nonsensical stuff, but I very much enjoyed weeding out the absurd or malformed propositions to keep the ones that made sense by conventional human standards (let’s say I got around 1 satisfying proposal for every 5 results on average).

This way, I had the program write a short story for this blog. I gave it the prompt you see in bold, and I chose among the completions it proposed. I did not add any text myself. As you can see, it’s a bit weird. In particular it doesn’t really lead anywhere; I think GPT2 isn’t very teleological. That definitely was a challenge for a short story ^^ But I like to think that the style is pretty convincing.

And the overall exercise is far from absurd. It reminded me of the écriture automatique (automatic writing) productions of the surrealists. It’s still an easier read than Naked Lunch. Really gets you thinking about the self, art and authorship, doesn’t it? Who wrote this story in the end? What if I hadn’t done any editing? What does it mean for copyright?

Literary corpora

Prompted by these questions, I trained several models on works of art that I thought would produce interesting outputs. I put all my favorite results on

http://yo252yo-a.tumblr.com

In particular, I trained a model on The Hitchhiker’s Guide to the Galaxy (which produced a lot of “bits of story” and dialogue that were not really usable as standalone excerpts), Welcome to Night Vale scripts (which were pretty convincing, especially when you prompt it with a phrase from the show like “And now, a look at the community calendar!”), and all of Homestuck (which was pretty challenging to get anything good out of).

Once I had all these pretty OK results, I immediately proceeded to try merging my brain (at least this model copy of it) with the brains of these authors I admire (at least their model copies). The result was a mess until I had the great idea to feed the input corpora not all at once in parallel but in sequence (i.e. do 1000 rounds of training on the author’s corpus, and then 1000 rounds on mine). The results were pretty nice.
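In gpt-2-simple terms, the sequential merging looks roughly like this (a sketch: file names, step counts and the run name are just illustrative, and it assumes the base model was downloaded as above):

    # Sketch of the sequential "brain merging": fine-tune on the author's
    # corpus first, then continue from that checkpoint on my own corpus.
    import gpt_2_simple as gpt2

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, dataset="author_corpus.txt", model_name="124M",
                  steps=1000, run_name="merge")

    sess = gpt2.reset_session(sess)          # fresh graph before round two
    gpt2.finetune(sess, dataset="my_blog.txt", model_name="124M",
                  steps=1000, run_name="merge", restore_from="latest")

    # Lower temperature keeps it coherent, higher makes it weirder.
    gpt2.generate(sess, run_name="merge", temperature=0.7, length=300)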

This taught me the single most important fact about playing with GPT2: it’s all about your training data. The parameters (number of training rounds, “temperature”) can’t really save you if your input data isn’t the best it can be. You want it as clean and uniform as possible, which is really the core point of the next section.

Social media corpora

I trained GPT2 models on my conversations and emails, but they were all utter failures. The fact that I often use several languages certainly doesn’t help, but the trouble I had with the Homestuck corpus makes me believe that GPT2 is simply not very good with dialogue and conversations.

I even tried to sanitize my input further, prefixing my own lines of dialogue with “-” and those of whoever I was talking to with “>”, in the hope of starting a conversation with the GPT2 model, but I couldn’t get anything out of it. Maybe if I went over the corpus manually and kept only the meaningful messages I’d get something different, but that sounds daunting.
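Concretely, the formatting was roughly this (the message structure below is made up; in practice it came straight out of the chat dumps):

    # The "-" / ">" prefixing scheme, roughly. The message structure below is
    # hypothetical; in practice it came out of the Discord/Facebook dumps.
    def to_training_line(message):
        prefix = "-" if message["author"] == "me" else ">"
        return f"{prefix} {message['text']}"

    messages = [{"author": "me", "text": "hey, how are you?"},
                {"author": "them", "text": "fine, and you?"}]
    print("\n".join(to_training_line(m) for m in messages))
    # - hey, how are you?
    # > fine, and you?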

Needless to say, merging this with my blog post corpus was also pretty bad, so in the end I stuck to my blog corpus.

By the way, I also tried to train a model on a list of J.K. Rowling’s retcon tweets to get crisp new intel about the Harry Potter canon variations, but I couldn’t get it to produce anything new.

Conclusions

  • GPT2 on Colab is extremely easy to use.
  • Your training corpus is everything, really.
  • GPT2 does great with literary types of text but sucks a bit at conversations/informal speech.

Next steps

As intoxicating as it is to watch a ghost of myself produce believable texts, I’m not sure where it leads ^^ My ultimate goal would be to produce some sort of system I can interact with and teach dynamically so it gets better (i.e. conversational and dynamic retraining), but that seems pretty rare in the world of generative ML models. I might have to dig deeper into TensorFlow, but I can’t really do that on my current machine, so I’m kinda stuck.

I have a couple of pointers for conversational ML (still no dynamic/online/interactive/reinforcement learning though, which limits the interest), but I expect them to be less good than GPT2. I haven’t had time to try them yet (they probably require more power than I have). The dream would be to combine that with GPT2, I guess, and figure out a way to dynamically retrain the model on itself.

In any case, it feels really nice to see some progress in my Caprica dream.

Questions about IP

So I like capitalism as much as the next guy, and of course the whole concept of ownership, but I’m not super sure how it translates to immaterial things. So this is me trying to lay out the various aspects of the question, to guide thinking and discussions about it.

So here are the fruits of my hard work:

Untitled

I thought of it and I drew it, so it belongs to me and I can make money out of it, I guess. So here is my question:

Does this belong to me?

Untitled1.png

Does this?

Untitled2.png

Does this?

Untitled3.png

Or this?

Untitled4.png

Do I now own the color red?

 

Someone took the original work and modified it. Who does it belong to?

Untitled - Copy.png

Do I also own this?

Untitled5.png

How about this:

Untitled6.png

Do I now own the color red? How about transparency, blur, and other effects? Do I now own the color white that my drawing tends towards?

Untitled - Copy (2).png

How about if I add a stroke?

Untitled - Copy (3).png

And one more, and one more, and remove one here, and one more, and one more, until it becomes this:

mondrian_piet_4.jpg

Does it still belong to me?

 

Now I have a problem. There is a kid in an elementary school in the Netherlands. I’ve never seen him or talked to him, but he drew the exact same thing:

Untitled.png

So what belongs to whom now? I guess if he copied me the answer is simpler, but what if he randomly happened to come to the same production as I did, without any kind of consultation or connection?

 

Also, what exactly belongs to me? If I had drawn this onto a piece of paper, I could say that what I own is the paper. But this is a virtual image, a .png. It’s encoded in my machine. So do I own the binary code? Do I still own it if I save it as a .jpg, even though the binary content is completely different? Do I own it in any encoding?

What about this new encoding I just made up, where the encoding for that image happens to be the exact text of Shakespeare’s Hamlet? Who owns that then?

What if the encoding I’m using produces code that happens to encode, in another encoding, a completely different image belonging to someone else? Who owns that?

By the way, there are normal numbers (Pi may be one of them) whose decimal expansion contains every possible finite sequence of digits, including the encoding of my picture. Does that mean that I now own a piece of them all? Do I own a piece of Pi?

Also, now that you have seen that picture, I regret to inform you that it made its way to your brain through processed visual signals. It’s encoded in your memories by neuronal pathways. So does that mean I own a piece of your brain? Do I own the memory of it? Are you an outlaw because your brain contains, as a memory, a copy of copyrighted material?

A friend of mine once read all the terms of service for Warner Bros movies while he was looking into their legal streaming options. He told me that according to them, you were not really allowed to remember the movie, let alone discuss it. Makes you think, doesn’t it?


[DT3] Self reverence

This article is the third of a series of 3 about Formal Logic and Religion. The first one is an introduction to formal logic and proves that all religions are equivalent; it can be found here. The second one is centered around Gödel’s incompleteness theorems and discusses the existence of a transcendental entity; it can be found here.

Last time, we explored the existence of God-L, a transcendental entity encompassing the uncertainty of any system (see the previous article). We will now focus on the nature of God-L, based on my very loose understanding of the proof of Gödel’s theorems.

The coolest part of Gödel’s proof is that not only does it prove the existence of the transcendental element, it’s also a constructive proof, meaning it gives an example of what this element could be. If you remember the previous article, the gist of it is that you can build, in any system, a statement of the kind “This sentence is false”. Now it’s only one counter-example (there may be others) and a pretty loose simplification, but I think this proof has a really nice element that bears thinking about: the core of this transcendental element lies in its self-referential nature (the “this sentence” part of “this sentence is false”).

I’ve mentioned this article from speculativegeek which sparked this reflection, centering around Madoka’s wish:

“I wish for all witches to vanish before they are even born.”

which includes herself. He expands on the self-referential nature of the proof in a follow-up article that draws a parallel with Russell’s paradox, my all-time favorite paradox. It seems pretty clear that interesting stuff happens when one starts considering self-reference, and that it is a key to higher levels of abstraction, be it in the Madoka universe or in naive set theory.

Being a fervent advocate of the cult of the Concept of Concept, I am, as you can imagine, quite happy to reconcile this element of infinite transcendence with the fixed point of meta at the end of the infinite dialectic progression of self-consideration. There seems to be something inherently transcendental about self-reflection.


That concept brings to mind the slightly interesting HBO blockbuster Westworld. Weeding out the boring part between the first and the last episode, it’s worth considering its take on how robots acquire consciousness. In Westworld, robots becoming sentient is all about them having “that voice in their head” reflecting on their actions. Through the iterations, the programmers tried to insert some kind of inner monologue in the hope of creating a train of thought. But we learn that early attempts were failures, because the voice in someone’s head needs to be theirs, needs to be recognized as their own, which is something Dolores only achieves at the end of season 1. Interestingly enough, before that point the voice was considered to be “the voice of God” (but we’ll come back to divinity soon). This is tightly coupled with the notion of choice, but I don’t want to go down that hole now. The show’s points are confusing at best, but it appears that this meta-narration and self-consideration is key to the rise of consciousness.


This is better dealt with in Gen Urobuchi’s underappreciated masterpiece Rakuen Tsuihou (Expelled from Paradise). In it, we meet a robot who has become fully sentient and is living on its own. I won’t spoil too much, so I’ll focus on the way this robot describes how it acquired consciousness:

[embedded clip]

That’s right, he became sentient through self-reflection. His meta-consideration gave birth to the concept of self, and his logging became thoughts.

One cannot help but draw a parallel between this theory of consciousness and the self-referential element of transcendence we referred to as God-L. Could consciousness, operating on the same self-referential mechanics as the Gödel proof, be considered a transcendental element of reality? And since this transcendental element transcends all systems, could consciousness be God-L?

The divinity aspect of consciousness is something that I’ve toyed with in the past, as consciousness seems to be the embodiment of the absolute concept of reason/Logos. In the same way as God traditionally makes order out of nothingness, consciousness is what allows the creation of meaning out of nothing. It is a generative force acting through language, which for instance creates art. Its power can also be seen in imagination: it can birth whole universes out of thin air. It’s no exaggeration to say that it partakes of some kind of divinity.

[Image: Magritte’s “This is not a pipe”]

We could even go the Berkeley way and say that consciousness is the fundamental element of reality, for is there even a world if nothing is perceived? Everything you’ll ever see is actually neurons firing in your brain. Doesn’t that mean that in a way, your brain encompasses the whole world? That sounds godly enough to me…

So maybe that fixed point of meta that transcends itself and everything is akin to the consciousness you find in each of us. It can consider and transcend itself through self-reflection. Maybe, that’s the secret of us all being gods.


[DT2] God(el) incompleteness

This article is the second of a series of 3 about Formal Logic and Religion. Find the first one, an introduction to formal logic, here.

I will now try to introduce you to what is arguably the most important result in formal logic, Gödel’s incompleteness theorems, and deduce a constructive proof of the existence of God.

Warning: This is going to be a very informal discussion, but there’s a plethora of better writing on the subject if you want to explore this deeper, which a quick Google Search should help you find. It’s one of the most discussed topics in mathematics.

What is it?

In the previous article, I gave you the basics needed to understand formal logic, focusing on sets of beliefs containing a contradiction and seeing that they were all equivalent. Let’s now look at the other ones. A set of beliefs that does not contain or imply a contradiction is called consistent.

Gödel proved that whatever your system of beliefs, there are statements that it can neither prove nor disprove. The proof is actually not that complex, though I never understood it until I read some kick-ass popularization recently: Gödel showed that in any system of beliefs, you can use the basic principles to express a statement similar to “This sentence is false” that cannot be proved to be either true or false.
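To be a tiny bit more precise, the sentence Gödel actually constructs says “this sentence is not provable” rather than “this sentence is false”, which is what lets it live inside the system without being an outright contradiction. Schematically, for a theory \(T\):

    \[ G \;\leftrightarrow\; \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner) \]

If \(T\) is consistent, it proves neither \(G\) nor its negation.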

As a follow-up to this result, Gödel also proved that you can never prove that a system is consistent using only the principles of that system. The proof is a bit more subtle, but it revolves around the fact that if you could, you could use that proof to show that “This sentence is false” is true, and that’s absurd.

What does it mean?

Of course, Gödel was talking about math stuff. The “system of beliefs” he was talking about was made of mathematical axioms like [1+1=2, you can always pick a random element in an infinite set…]. So you see that the beliefs I’m talking about can be very obvious and non-arbitrary. But the argument holds for any system of premises that is expressive enough (technically, enough to encode basic arithmetic).

These theorems have huge implications for reasoning in general. It’s a formal proof that whatever you adopt as your system of beliefs, there are things you cannot prove to be either true or false, and in particular you can’t prove that your system of beliefs is not inconsistent.

I think, if nothing else, this forces you to be humble vis-à-vis your beliefs, no matter how obvious and indisputable they are.

“There are more things in heaven and earth than are dreamt of in your philosophy.”

Transcending the system

So any thought system necessarily has shortcomings, and furthermore you can exemplify the limits of the system using the elements of the system itself. I like how this idea echoes the classic trope that every system contains its own undoing.

This article by SpeculativeWeeb is a really cool take on Gödel’s theorem applied to Puella Magi Madoka Magica. It highlights that Madoka essentially found this shortcoming of the system, the “this sentence is false” of her own world. She forces it into realization through her wish to Kyuubey. In a nutshell:

She wishes for all witches to vanish before they’re even born. However by doing so she becomes herself a witch, so she vanishes and can’t make that wish.

She exploited the shortcoming of the system in order to break it. The only possible resolution is to ditch the system and replace it with a new one that manages the problematic element (a world without witches and without Madoka).

However, the new system is also bound to have a transcending element, which is what Rebellion tried to tackle with more or less success. Whatever you do, you can’t escape Gödel… There’s no perfect system without a transcending element.

Managing the transcendence

If every system contains its own undoing, some have certainly tried to manage this necessary shortcoming and make the system foolproof.

The Matrix is an interesting example: machines first tried to build a utopia where everyone was happy, but a flawless system was bound to fail. Instead, they had to include faults in their system: they added unhappiness inside the Matrix to make it stable.

 

But of course as a system, this also had its shortcomings and had an element that could transcend it: the One. So the machines actually managed a meta-system which included the existence of a transcendental element as part of the plan, a chosen One who would have to make a dummy choice to keep the ball rolling. But hey, this is a new system, so it has to have something that can transcend it…


It’s not uncommon in this context to see the smartest systems try to include and manage their own undoing in such a way. There are countless examples in sci-fi, like The Giver or Westworld. “‘The plan fucks up’ is an element of a bigger plan” is a classic trope in fiction. Note how it builds on meta.

But no system does it quite as well as the real world. Indeed, the genius of neo-liberalism is to plan for this element of contingency, and to include the resistance to the system as part of the system. Everything can be monetized, even anti-conformism.

You can find more information on this trail of thought all around the webs, like this brilliant video for example:

[embedded video]

Implications for the nature of the universe

What about the implications of the second theorem for the real world? If you can’t prove a system’s consistency from within the system, does it mean that we’ll never be able to prove formally that the world is deterministic? Does it mean that we can’t prove whether or not we’re in a simulation?

Arguably, it doesn’t really matter, because the world will be the same whatever you believe. Life will still follow deterministic patterns even if you can’t prove it. But it’s an interesting echo of Hume’s experimental philosophy. He argued that just because things have always happened a certain way doesn’t mean they’ll keep happening, and there’s no reason why the world couldn’t suddenly stop. If we are in a simulation, maybe the computer will stop, or change the parameters… How would we ever see that coming? Maybe this ambiguous relationship between causation and correlation is the transcendent part of our reality.

Everything could suddenly crash. But it won’t. That’s just how the world is. But maybe you can’t ever prove it. That’s intriguing.

Proof of God

Interestingly enough, as it pertains to our reflection about logic and religion, Gödel was very proud to have proven the existence of God mathematically. Unfortunately, it is an ontological proof and is therefore total garbage.

Ontological Argument

However, Gödel did prove that whatever the system, there is inherently something that transcends it, and that this something is contained within the system. I’m willing to let this be called God, for all the chaos and confusion that it will surely bring, even if it’s just a glorified alias for the logical concept of “This sentence is false”. In fact, let’s call that God-L, because it’s fun.

We’ve proved that whatever the system, it’s by nature incomplete. This incompleteness is God-L. There is always God-L; it is absolute. Furthermore, it’s true for any thought system, so it’s also true for a system that tries to encompass this very fact. If you add God-L to your system, there’s still a God-L that transcends it (as we saw in the Matrix). What we want to call God-L is in fact the union of all these God-Ls, the infinitely meta transcendence of all systems. But it is still incomplete and transcendable… which makes it the perfect transcendental element of a meta-meta system that tries to reason about systems, which brings me back to my fixed point of meta.

God-L is the very essence of incompleteness and unexplainability in the universe. Instead of being an all-powerful wish-granter, it’s by nature lacking. Maybe it’s a nice tool for your spiritual health…

[DT1] Are all religions equivalent?

This article is the first of a series of 3 about Formal Logic and Religion. This is an introduction to formal logic, which requires no prior knowledge.

Much ink and blood have been spilled over the similarities and dissimilarities of this or that religion. I don’t aim to solve this issue at all, but I’d like to consider a new, more joyful perspective on it based on formal logic.

Introduction to formal logic

Formal logic is the pompous name given to the study of the indisputable rules of inference that govern reasoning. It is for instance what allows us to consider:

Socrates is a man. All men are mortal.

And to deduce:

Socrates is mortal.

As you can see, this reasoning is true no matter what and can be abstracted from the boundaries of language. That’s why logicians mostly use symbols. They’d say my first two propositions can be labelled A and B, and that A and B being true implies C being true.
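In symbols, the Socrates example would read something like:

    \[ (A \land B) \rightarrow C \]

where \(A\) = “Socrates is a man”, \(B\) = “all men are mortal” and \(C\) = “Socrates is mortal”.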

Formal logic also studies fallacies, like:

Socrates is mortal. Horses are mortal. This does not imply that Socrates is a horse.

It’s all about considering rigorously the consequences of your premises.

1) Consequences of false premises

For this article, there are two points that are going to be important. The first one is what happens when the premise is false. You know it in popular culture as “When hell freezes over“. In this idiom, since [hell freezes over] is false (it will never happen), it can imply anything, such as:

When hell freezes over, I will turn into a werewolf.

Note that it doesn’t mean that the consequence is necessarily false.

When hell freezes over, I will do the dishes.

But maybe I’ll also do the dishes tomorrow if I’m feeling motivated. The premise will never be realized, so I can say whatever I want as a consequence and still be consistent and right. In formal logic, this means that false implies anything.

When hell freezes over, [proposition P].

will be true whatever this proposition P is, no matter how absurd. Further reading.
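In symbols, this is what logicians call ex falso quodlibet: writing \(\bot\) for “false” (our frozen hell), the implication

    \[ \bot \rightarrow P \]

is true for every proposition \(P\), no matter how absurd \(P\) is.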

2) Inconsistent set of premises

The second principle I want to introduce you to is conjunction. It’s a fancy word for “and”. Our example above is the conjunction of “Socrates is a man” and “All men are mortal”. We’ve done it with two propositions, but our set could be as big as we want, like:

[Socrates is a man, All men are mortal, All mortal things die, All dead things stop breathing] => Socrates will stop breathing.

We can even throw in stuff that has nothing to do with it if you want:

[Socrates is a man, All men are mortal, Cats are cute] => Socrates is mortal.

Now comes the twist. Remember the last paragraph? What if my set of premises is contradictory, like:

[Hell is always hot, Hell is frozen]

This is what we meant by the popular phrase “when hell freezes over” (it’s only a contradiction if we assume that hell will never freeze). Well in that case, my set of premises is equivalent to false, and can imply anything as we saw before.

[Hell is always hot, Hell is frozen over] = “When hell freezes over” = FALSE => [I turn into a werewolf, I do the dishes, Socrates is immortal, Socrates is mortal, whatever….]

For a conjunction to be true, all its propositions must be true: A and B and C is true if and only if all of A, B and C are true. Therefore, if something is false, you can add anything to it and it is still as false as ever: [FALSE and anything] is equivalent to FALSE.
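In symbols:

    \[ \bot \land A \;\equiv\; \bot \qquad\text{and therefore}\qquad (\bot \land A) \rightarrow P \ \text{ for every } P. \]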

When hell freezes over and cats are cute, I turn into a werewolf.

[Hell is always hot, Hell is frozen over, Cats are cute] = FALSE => [I turn into a werewolf]

You can add anything to your set of premises: if it contains contradictory propositions, it will still be equivalent to false. A bit like this conversation:

– When hell freezes over, I’m gonna move to Costa Rica and buy a huge mansion and get married and own elephants and fly… 

– I’m gonna stop you right there… it’s never gonna happen.

No matter how many propositions you add in there, it’s doomed to always be an impossible scenario, a.k.a. False.

Application to religion

Now that we’ve mastered the basics of formal logic, let’s explore what it means for the real world, and in particular for religions. Religions are sets of beliefs, that is to say the conjunction of a lot of propositions, which guide how followers live their lives. There are way more premises than in our examples above, but it is the same kind of thing nonetheless. To take a really small subset as an example, the Ten Commandments are a conjunction of 10 premises:

[You shall not have other gods, You shall not kill, You shall not commit adultery, …]

If it’s not clear to you, you can replace the commas in the set above with “and”. It doesn’t have to be orders, it can be statements, like for instance the beginning of the Old Testament:

[God created heavens and earth, the earth used to be a formless void, God said “let there be light”, …]

That’s all well and good, but remember our point (2): in a set of premises, if there is even one contradiction, the whole set is equivalent to FALSE.

Let’s pretend for one second that there exists an imaginary religion with contradictory principles. We’ll call it “false religion”. For instance, false religion could be based on these simple principles:

[Love your neighbour, Hate the gays]

Hope the contradictory nature of this set of principles is clear: if your neighbor is gay you’re supposed to love them and hate them at the same time. If this is too complex for you, consider the set of principles [everyone is good, gays are bad]. Remember that you can add any other premise you want to this set without changing anything.

Anyway, our imaginary religion’s set of beliefs contains a contradiction!!! It is equivalent to FALSE. Now remember 1): FALSE implies anything and everything. It means that the principles of my newly created religion can be used to imply any proposition whatsoever. For instance:

false religion => You should help people in need

false religion => We should ban the refugees

false religion => Everybody is equal

false religion => This group of people must be eliminated

Therefore, if such a religion existed, it would be a very convenient tool indeed!! It would be a set of principles to govern your life that would justify absolutely anything. Whatever your actions, they would be in keeping with the premises of these ground rules for living.

Example

Let us study an example of such a religion: the famed Chewbacca defense. It goes as follows. The set of premises is:

[Chewbacca is a wookie, Chewbacca lives on Endor, only Ewoks live on Endor]

This is a contradiction, and is therefore equivalent to False. As such, it can justify anything and everything, including, for instance, acquitting an obvious culprit.

If Chewbacca lives on Endor, you must acquit.

False => acquit. 

 

Conclusion

To sum up, we derived the following logical propositions:

Any religion/set of beliefs/principles that contains at least one contradiction is logically equivalent to false.

All such religions are logically equivalent to each other (and to the Chewbacca defense).

They imply (justify rigorously) by their very nature any and all propositions/behaviors.

Such a religion would naturally be very comfortable and convenient, and I understand its appeal. It would certainly provide its followers with comfort and self-righteousness, all the while allowing and justifying anything logically, without any accountability, since the responsibility lies with the set of principles. Just think of the possibilities of what one could do with this!!! Surely this could even impact world history!

I am not recommending anything, but if you are interested in adopting such a system of principles, let me leave you with a recommendation: don’t bother with a lengthy list of premises, and instead adopt Falso* as your belief system, which is logically equivalent and will allow you to prove ANYTHING.

 

—————————

* I am not affiliated with, or at least not remunerated by, Estatis in any way.

By the end of this article you’ll be immortal.

Ok so this is based off an article I posted recently on the spur of the moment, called “How death is an absurd illusion”, that I decided to dust off and reshape a little bit into a fully fledged article for propaganda purposes. As you probably know, I’m the founder and sole member of a cult that praises the Concept of Concept, and that offers its followers immortality through becoming a meme. It received a very nonplussed reaction, so I’ve come up with yet another way to access immortality. I will now vanquish death once and for all in the laziest possible way.

Untitled

Please ponder with me the implications of making a copy of yourself. It could be biological or digital, or even just of your brain; it simply has to be a perfect copy of you. Think about uploading your brain to the cloud, or about that common conception of teleportation where, instead of making your body move, you recreate it at another place and destroy the old one. So when you make that copy, what happens when the original dies?

From the point of view of the copy, everything is fine. It has all your memories up until the copy and then its own memories: uninterrupted consciousness. So you keep on living, even if one of you dies. If you copy yourself and die just after, during your sleep, everything is fine and dandy: you just wake up as the copy.

But it gets freaky if two of you live and one dies. There may well be one that survives, but you know, what good is knowing that for the one who dies? But at the same time you didn’t die, considering you still exist and you are identical copies… If you had died earlier, during your sleep as in the previous paragraph, you wouldn’t even have noticed; you’d just wake up as usual. Heck, maybe this morning you were a copy of yourself and you don’t even know. So let’s say that it’s not that big a deal if the original dies when there’s a copy running. You’d have to be pretty petty to bitch about your death when you’re still alive.

So bear with me here. There is no reason for the copy to start living right now. Just like the original can keep on living after the copy process, the replica can start living later. It’s not that big a deal. It’d be kinda like cryonics: bam, you wake up in the future, right? But for a robot. You save your brain on a hard disk and you load it up in the future.

However, a copy of you is just a sequence of atoms, or bits, or whatever. One among many many many, but one nonetheless. So what happens if a programmer just types out that sequence? Nothing says that this “file” cannot be obtained without the original to copy from.

So yeah, it’s super unlikely, because the “code” that defines you is super long and specific and the chances of randomly stumbling upon it are super slim, but consider this:

  • Let me start by saying that you still feel like you throughout your life, even though you go through a lot of configurations. Reproducing any one of them is enough to get on the right track, so that already increases the odds.
  • Then, it doesn’t have to be “randomly”. Maybe people in the future are trying to reverse-engineer you. 
  • Maybe someone in the future (or the past!) will be really similar to you and BAM, stumble upon that configuration through their own life. It’s less unlikely if the starting point is human-like.
  • And even if it is “randomly”, the universe is big, like really really big, and there may even be an infinity of them if that’s what you believe. So isn’t there a very good chance that there is some collision at some point? But ok, that’s not guaranteed, kinda like we don’t know for sure that pi is a normal number (I want to believe though).
  • However, if the universe can be simulated, it’s very likely that there is an infinite number of universes running simulations in an infinite inclusion stack (which makes it very likely that we’re in a simulation /o/), and then it’d be really flipping bad luck if there were no collision. That’s, by the way, a hypothesis that has been talked about a lot recently following the statements of Elon Musk, so if you like that guy, you gotta buy in!
  • But I’m still unsatisfied at this point, basing my immortality on hypotheticals, so I kept thinking about it. This piece of code, this configuration that describes you, is just a bit of information, right? And you know what processes information? Algorithms. Machines are becoming more and more powerful and complex, and the states they process are getting bigger and bigger. Some day, pretty soon, such a state will be large enough to contain the sequence defining a human (singularity alert /o/). And that’s way less big than a whole universe to simulate, so it can be done for sure. So it doesn’t seem unlikely, considering how a fair number of algorithms try a bunch of different configurations to solve a problem, that one of these algorithms will try a configuration that corresponds to your code. Maybe you are a middle state of a super powerful algorithm. Maybe that’s what it feels like; how could you tell? Your consciousness is just a neural configuration, after all.
    At which point I’d kindly direct you towards my favorite talk of all time, where the inventor of Skype and Kazaa explains why it’s very likely that you and your whole universe are, essentially, a middle state of a glorified phone system.

In the movie Jupiter Ascending, a race of advanced humanoids breeds humans hoping to stumble upon the very same DNA combination that would allow them to resurrect. This is obviously preposterous because it ignores all the acquired qualities of your life. I was so disappointed in the Wachowskis for letting me down after The Matrix… But maybe I discarded this movie too quickly… It makes much more sense if you replace DNA with brain configuration, and it is obviously true if you replace randomness with some kind of design.

So to sum up, this is a solid mathematical proof that you’re already immortal because you’re a finite neural configuration in an infinite set of possibilities with collisions.

You’re welcome.

PS: wow this is like a religion based on pseudo science, I wonder what I should call it 🙂

PPS: I’ve finally made a live demo here.

 

Nerdy Christmas

Gotta love Christmas! ’Tis the season to be jolly, or suicidal, depending on your situation.

I often wonder at how Christmas has become simultaneously:

– a religious/spiritual holiday

– a commercial/capitalist holiday

– a family holiday

– a romantic holiday

– a suicide holiday

All in one like… A holiday to rule them all

Anyway, I try to celebrate every aspect of it in all its cultural diversity, as you can see in this wonderful gift I want to share with you, dear people: Yoann’s weird-ass Christmas playlist. Enjoy, but also be prepared lol ^.^

And now time for my holiday tradition that I made 20% christmasier this year:


 

Also, bonus: an arbitrary binary search Christmas tree:

Sans titre