Thoughts and prayers

So I wanted to apply to a few art competitions, and the Wells Art Contemporary drew my attention because of its amazing setting (Wells Cathedral). It’s a dream place for any kind of conceptual art, because you get centuries of connotations and expectations as raw material right from the get-go.

The bit about the art manifesto

I will do a proper manifesto later, but I wanted to jot down the technical process I’m going through here. The basic concept is a reflection on the ontology of reality. Once upon a time, people looked to gods not only as a source of morality, but as the ontological entity that imbued life with meaning and in some way guaranteed the consistency and order of the world.

With the “death of god” heralded by Nietzsche, humans lost all of this. At first glance, you may worry about the loss of absolute moral values, but it goes far deeper than that: it’s the loss of ontological structure. No wonder that some analyses frame the totalitarian movements of the 20th century as a reaction to this loss.

Nowadays, the thing that has taken the place of God as a source of ontological structure seems to be the economy, which organizes pretty much all of life in an increasingly globalized world. Furthermore, its efficient decentralized decision mechanisms are truly a wonder to behold, a complexity that aggregates all the wisdom of mankind. Yet it cannot be comprehended in its entirety by any single person, which elevates it to a mystical position of unfathomability.

These are the deep parallels I wanted to explore with the “Thoughts and prayers” project by establishing a “Church of Neoliberal Capitalist Realism”. It was important for me that it be a participatory, living piece, engaging viewers on their own terms and literally reaching into their daily lives: the piece should blend innocuously, almost unconsciously, into the fabric of the spectators’ lives, much like religion did in other time periods.

This is why I ended up publishing the project as Facebook and Twitter pages, because that’s where people actually direct their attention, time, and interactions (much like they once did with churches and prayers). In fact, this social network aspect adds a whole dimension to the project, centered on the shallowness, speed, and outrage focus of these platforms, which is probably why I’ll end up submitting a QR code image to the exhibition.

The bit about machine learning

But enough about the pretentious artsy considerations: the reason I wanted to write this article was to record somewhere some of the technical challenges and solutions I faced. The piece really came to life when I decided to try using machine learning (GPT-2) to merge the Bible and Marx’s Capital. On the final pages, I decided to set the AI-generated text apart by presenting it as scripture (i.e. numbered verses), as opposed to what I write myself.

You may remember that I quite enjoy using GPT to produce new content in a specific style, which is what gave me the idea in the first place. But the main challenge was that GPT does not deal very well with heterogeneous training sets. In particular, the Bible and Marx’s Capital have very few words in common, so the output would often read as purely one or the other (you can think of them as distinct connected components in the semantic graph).

The way I solved this was to build the bridge between the two corpora myself. I took the Bible and applied a bunch of word substitutions to bring its language closer to that of Capital. I prioritized the most frequent words of both texts, but I also threw in some terms that seemed important to the current economic context. The substitutions would not always make perfect sense, but since it all goes through GPT to be regurgitated later, these mistakes are pretty irrelevant (GPT can correct some of them, and anyway its raw output is always pretty trashy).
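
To give an idea of what that bridging step looks like, here is a minimal sketch in Python. The actual substitution list was much longer and built by hand; the specific mappings below are just made-up examples of the kind of rewrite I mean:

```python
import re

# Hypothetical examples of substitutions; the real list was assembled from
# the most frequent words of both corpora plus some current economic terms.
SUBSTITUTIONS = {
    "shepherd": "manager",
    "kingdom": "market",
    "harvest": "yield",
    "offering": "investment",
}

def bridge(text, subs=SUBSTITUTIONS):
    """Rewrite the Bible's vocabulary to sit closer to that of Capital."""
    for old, new in subs.items():
        text = re.sub(rf"\b{old}\b", new, text, flags=re.IGNORECASE)
    return text

# The rewritten Bible then gets concatenated with Capital to form the
# GPT-2 fine-tuning corpus.
```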

I make very few modifications to the output of the machine learning model (it is, after all, the holy word), and even when it comes out vaguely nonsensical, I guess it serves as a nice reflection of the nonsensical commands the technocratic economy sometimes dictates to humans.

And that’s how you get the main content of those publication feeds. I’ve automated it so that it keeps posting regularly, and made the Twitter account randomly follow a few users who tweet about market efficiency, in order to bring the gospel to its most fervent zealots. It seemed like a natural step ^^
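
For the curious, the automation itself is nothing fancy; here is a rough sketch of the idea, assuming the Twitter side is handled with tweepy (3.x API) and glossing over credentials and the actual queue of generated verses:

```python
import random
import tweepy

# Placeholder credentials (tweepy 3.x style).
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def post_next_verse(queue):
    """Post the next GPT-2 generated 'scripture' from the queue."""
    if queue:
        api.update_status(queue.pop(0))

def follow_a_zealot():
    """Randomly follow someone who tweets about market efficiency."""
    candidates = api.search(q="market efficiency", count=50)
    if candidates:
        chosen = random.choice(candidates)
        api.create_friendship(screen_name=chosen.user.screen_name)
```

Both functions just get called on a schedule, which is all the “automation” there is.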

Anyway, this was all good fun, but now I need to figure out how I’m going to turn it into an installation proposal for museums. Cheers.

The game of life on a planet

Work created for the Social Art Award 2021 “New Greening”: submission link.

It is hosted at http://yo252yo.com:9090/ for now. I won’t host it forever, but it will remain forever accessible on GitHub.

Technology, and AI in particular, is at the core of our hopeful prospects of managing and recovering from the current climate crisis. But it is ultimately just a tool in the hands of humankind. This social art exhibition is the perfect moment to remember that our New Greening will have to be for everyone and by everyone. For even if there are macro forces at play, they are inextricably tied to individual responsibility.

This interactive experience, hosted at http://yo252yo.com:9090/, showcases this interplay of forces with an aesthetic inspired by Conway’s Game of Life. This infinitely blooming virtual tree is sensitive to its environment: the more demand (eyeballs) placed on it, the darker the surroundings, and it may even wither. Are you ready to sacrifice your chance at the spectacle so that others may see it? Can the crowd find an equilibrium that allows for lasting, responsible growth?
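
The actual piece runs in the browser, but the underlying mechanic can be sketched in a few lines of Python; the dimming rule and the capacity number below are made up for illustration:

```python
from collections import Counter

def life_step(alive):
    """One step of Conway's Game of Life over a set of live (x, y) cells."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in alive)}

def brightness(viewers, capacity=10):
    """The more eyeballs on the tree, the darker its surroundings get."""
    return max(0.0, 1.0 - viewers / capacity)
```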

What is it like to be an algorithm

What is it like to be an algorithm? What does it mean to understand something?
We will never know whether an AI is conscious, any more than I can be certain that you are conscious. But we can try to put ourselves in its shoes and see the world through its eyes.


There’s a trend in machine learning to amass a lot of data and draw conclusions without really “understanding” it. Critics have claimed that this kind of AI, like GPT, may seem to produce impressive results but does not really understand the world. To a large extent I agree, even though the approach still has its merits (see our podcast episode on this ^^).

This work makes you experience what a machine learning algorithm does. You’ll see text designed to carry as few connotations as possible. If you can draw meaning from a succession of symbols without any kind of reference to the real world, so could an AI. And if we all converge to the same kind of semantics, whatever it may be, then it proves that this semantics is universal and that an algorithm could also access it.

Let’s tackle this question by extending it into a collaborative interpretation work!

………………..

 

………………..

..ᚨᛃ……………

..ᚨᛃᛟ………….ᚨᛊᛟ.

..ᚨᛃᛟᛟ…..ᚨᛊᛟᛟ……..

..ᚨᛃᛟᛟᛟ.ᚨᛊᛟᛟᛟ…………

..ᚨᛃᛟᛟ…..ᚨᛊᛟᛟ……..

..ᚨᛃᛟ………….ᚨᛊᛟ.

..ᚨᛃ……………

………………..

 

.ᚨᛃᛟᛟ…….ᚢ…….ᚨᛊᛟᛟ.

….ᚨᛃᛟᛟ….ᚢ…….ᚨᛊᛟᛟ.

……ᚨᛃᛟᛟ.ᚢᛃᛟ…….ᚨᛊᛟᛟ.

..ᚨᛃᛟᛟ…..ᚢᛃᛟ…….ᚨᛊᛟᛟ.

………ᚢᛃᛟ…….ᚨᛊᛟᛟ.

………ᚢᛃᛟ….ᚨᛊᛟᛟ….

………ᚢᛃᛊᛟᛟ.ᚨᛊᛟᛟ……..

………ᚢᛃᛊᛟᛟ……ᚨᛊᛟᛟ..

………ᚢᛃᛊᛟᛟ………

………ᚢᛃᛊᛟᛟ………

………ᚢᛃᛊᛟ………

………ᚢᛃᛊ………

………ᚢᛃᛊ………

Making a self-aware twitter AI with GPT2

The story so far

It was more than a year ago that I had my playing-with-GPT-2 phase, which resulted in a short story co-written with the AI and this little blog http://yo252yo-a.tumblr.com/, which I kinda stopped updating after a while.

But I was bound to come back to it some day! It all started when I decided to open a Twitter account for my podcast. I very naturally made a little script to schedule all my tweets (from a Google Spreadsheet ^^) so that I could enqueue them. I also went back through the archive of my facebook/tumblr/whatever posts to see what could fit this new account, since I’ve posted so many enlightening things over the years xD
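
The scheduling script itself is trivial; something along these lines, assuming gspread for the spreadsheet side and tweepy for the posting (the sheet name and layout here are hypothetical):

```python
import gspread
import tweepy

# The spreadsheet acts as the tweet queue: one tweet per row, oldest first.
gc = gspread.service_account(filename="credentials.json")
queue = gc.open("podcast_tweet_queue").sheet1

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

rows = queue.get_all_values()
if len(rows) > 1:                    # row 1 is the header
    api.update_status(rows[1][0])    # post the oldest queued tweet
    queue.delete_rows(2)             # and pop it from the queue
```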

Once this was in place, it was like my Twitter account was managed by a nice little bot (which was simply posting things from a queue, but still). As its parent, I obviously wanted to see it grow: how cool would it be if it could learn and evolve by itself? Could it ever be self-aware (lol)? After all, it already had access to Twitter, and it had a collection of my tweets to learn from.

Setup

So I dusted off my colab repository of GPT2, since GPT3, despite all the hype, remains pretty inaccessible. Most notably, I had to make it work with an old version of tensorflow (the recent versions broke it), and I also made it read and write directly to the Google Spreadsheet /o/ In the end, I only had to run the code in the colab to fetch the data, train on it, and post the output directly into the queue to be tweeted. Pretty sweet setup.
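
Roughly, the colab boils down to something like this (a sketch assuming gpt-2-simple and the small 124M model; the actual spreadsheet reading/writing is omitted):

```python
# Colab cell -- pin the old TensorFlow, recent versions break gpt-2-simple.
%tensorflow_version 1.x
!pip install gpt-2-simple gspread

import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")

# "tweets.txt" is the training corpus assembled from the spreadsheet queue.
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="tweets.txt", model_name="124M", steps=1500)

# Generate candidates, to be pushed back into the queue for sanitizing.
candidates = gpt2.generate(sess, length=60, temperature=0.9,
                           nsamples=20, return_as_list=True)
```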

The problem is that GPT2 produces mostly crap. And I didn’t know what temperature or training set would be ideal for my purposes. It was time to experiment!

Validation

I ran several training sets at several temperatures. For each combination, I personally annotated 200 results. I don’t think the results will be super significant, but it’s better than nothing.

The success criterion was: is this tweetable (i.e. relatively grammatically correct, at least a bit interesting/surprising, and of course different from the training set)? The good samples will be posted on our Twitter with the hashtag #shitmygpt2says.

Training sets

The basic training set was the queue of all our tweets for the podcast twitter account, including the archive of all my past tumblr/facebook posts that I sanitized for the occasion (a lot of work xD).

But as in my previous attempts, I thought it was a bit sad to limit myself to things produced by me when I had the perfect chance to merge my brain with those of the people I admire. Furthermore, I kinda wanted to make my Twitter AI standalone and able to “learn” as time passes, even though GPT really isn’t the best framework for that ^^

I ended up making a twitter list of people I admire, and used their recent tweets in my dataset. The idea was to make my model aware of “recent events”, recent words, etc…

Still, I wanted the writing style to remain distinctly mine. This is accounted for in the success criterion, and the core of this experiment was: “how should I mix the training sets to keep awareness of the recent world while still controlling the style of the output?”

Sequential vs merging

In my previous attempts, I mostly used a “merging” approach, feeding everything to the training phase at once. The alternative is to feed the two corpora in succession during training.

From what I observed, it seems that GPT2 absorbs the style of whatever it was fed last, even if only for very few training epochs. For instance, when I fed it corpus A for 1.5k epochs and then corpus B for 100 epochs, it produced results that looked like corpus B, even though it exhibited some signs of having learned A every now and then (pretty rarely though, which is why I kept so many epochs in the first phase of training).

I kinda think of it with a cooking metaphor, where I first marinate the model in corpus A and then lightly sear it with corpus B.
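
In gpt-2-simple terms, the marinade/searing recipe looks roughly like this (the step counts mirror the 1.5k/100 mix; the run name is arbitrary):

```python
# "Marinate": long fine-tune on corpus A (my own tweets and posts).
gpt2.finetune(sess, dataset="corpus_a.txt", model_name="124M",
              steps=1500, run_name="marinated", restore_from="fresh")

# "Sear": short fine-tune on corpus B (recent tweets from the list of
# people I admire), continuing from the checkpoint of the first pass.
sess = gpt2.reset_session(sess)   # fresh TF session before retraining
gpt2.finetune(sess, dataset="corpus_b.txt", model_name="124M",
              steps=100, run_name="marinated", restore_from="latest")
```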

Here are the experimental results that loosely validate this:

We notice here, btw, that the merging strategy performs pretty poorly, because consistency of the training set matters a lot with GPT2. The first three lines did not exhibit a strong difference, which makes me believe that 1k epochs is enough for GPT2 to “forget” the initial corpus; that’s how I ended up with the 1.5k/100 mix, which gave me the best outcomes.

Best parameters

Here is the overall result of my experiments. GPT2 produces around 93% crap, which makes sanitizing a pretty tough job ^^ It appears that this could drop to 80% or below by correctly using the “marinade/searing” technique and keeping the training set uniform.

As is widely known, temperatures below 0.8 are pretty bad, but I often find myself pushing above 1, even though that seemed to do pretty poorly with my best data sets. I’ll keep using different temperatures, as they produce different types of results that I enjoy in their own way. But I’ll probably stop using long text corpora as a base (past writing, Night Vale scripts, etc.) because they don’t seem to bring anything to the table (and could even be detrimental; better to stick to tweets).
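
The sweep itself was nothing more than generating a batch at each temperature from the same checkpoint and annotating it by hand (the temperature values below are illustrative):

```python
for temp in (0.7, 0.8, 1.0, 1.2):
    samples = gpt2.generate(sess, run_name="marinated", temperature=temp,
                            length=60, nsamples=200, return_as_list=True)
    print(f"--- temperature {temp} ---")
    for s in samples:
        print(s)   # then annotated by hand: tweetable or crap?
```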

So we’re pretty far from a self-aware AI that learns from its mistakes, but since I’ll keep retraining it on recent tweets, and since those include my own tweets containing the proposals it made that I kept, I hope it’ll still get a bit better as time passes (it has already started annotating posts with the #shitmygpt2says hashtag itself).

In the future, I’ll run this every now and then in its best configuration, and keep posting on twitter with the hashtag #shitmygpt2says. Stay tuned if you’re interested!

Immortality generator

Hellow

I’ve finally gotten off my lazy ass to solve that whole mortality thing by encoding my brain into a computer. And I’ve decided to be super nice and also encode your brain, and everybody else’s actually, so you can go about and live your life without worrying too much, because there’s a saved backup. I’ve put it here:

https://yo252yo.com/encode_brain/

hope you like it  ❤

Nerdy Christmas

Gotta love Christmas! ’Tis the season to be jolly, or suicidy, depending on your situation.

I often wonder at how Christmas has become simultaneously

– a religious/spiritual holiday

– a commercial/capitalist holiday

– a family holiday

– a romantic holiday

– a suicide holiday

All in one like… A holiday to rule them all

Anyway, I try to celebrate every aspect of it in all its cultural diversity, as you can see in this wonderful gift I want to share with you, dear people: Yoann’s weirdass Christmas playlist. Enjoy, but also be prepared lol ^.^

And now time for my holiday tradition that I made 20% christmasier this year:


 

also bonus: an arbitrary binary search christmas tree:


The girl who was either tall or wearing a riding hood that wasn’t red or no riding hood at all

The following is an approximate logical negation (arranged to kinda make sense a little) of the Little Red Riding Hood tale by Charles Perrault. The original can be read at the bottom =) #nerd
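
For the fellow nerds: the “approximate” part comes from applying the usual negation rules and then smoothing the result into readable prose. The backbone is just De Morgan’s laws and quantifier negation:

```latex
\neg(P \land Q) \equiv \neg P \lor \neg Q
\qquad
\neg \exists x\, P(x) \equiv \forall x\, \neg P(x)
```

So “her mother was excessively fond of her and her grandmother doted on her still more” negates into a disjunction (“maybe her mother was not excessively fond of her, or maybe her grandmother doted on her less or equally”), and “once upon a time there lived in a certain village the prettiest girl ever seen” negates into a statement about all times and all villages.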

 

The girl who was either tall or wearing a riding hood that wasn’t red or no riding hood at all

Not by Charles Perrault

During all eternity, in all the villages, there never was any little country girl who was the prettiest girl ever seen, but there was once a random girl. Maybe her mother was not excessively fond of her, or maybe her grandmother doted on her less or equally. In any case this good woman never had a little red riding hood made for her. Therefore, no one called her Little Red Riding Hood, even though it would have suited the girl extremely well.

Every day of her life, if ever her mother made some cake, she never spoke the words “Go, my dear, and see how your grandmother is doing, for I hear she has been very ill. Take her a cake, and this little pot of butter.”

Therefore, the girl, who was either tall or wearing a riding hood that wasn’t red or no riding hood at all, did not set out to go to her grandmother who lived in another village, or if she did, she waited a little before.

Every time she went through the wood, she did not meet a wolf who had a very great mind to eat her up. She did meet some other wolf though, but they didn’t ask her where she was going. Therefore, even though she may not know that it was dangerous to stay and talk to a wolf, she never said “I am going to see my grandmother and carry her a cake and a little pot of butter from my mother.”

It follows that no discussion occurred, and if any ever took place, it was most certainly about other topics. Maybe the wolf didn’t run as fast as he could, or didn’t take the shortest path. Maybe the girl took the direct way, didn’t gather nuts, didn’t run after butterflies or didn’t gather bouquets of little flowers. But the wolf did not arrive at the old woman’s house before a long time. When he did, he obviously didn’t knock.

The good grandmother must have been ill and out of bed if she ever cried out “Pull the bobbin, and the latch will go up.”

Then at least one of the following things happened: the wolf didn’t pull the bobbin, the door didn’t open, the wolf didn’t fall upon the good woman, or the wolf didn’t eat her in a moment (it had been less than or equal to three days since he had last eaten). If he got into the grandmother’s bed, he forgot to shut the door. It follows that even if the girl came some time afterwards, she did not knock on the door either.

Since nobody said anything, the girl didn’t hear the big voice of the wolf and was not afraid. She did not believe that her grandmother had a cold or was hoarse. The wolf did not see her come in and didn’t hide under the bedclothes. The girl, if she ever went into bed, most certainly kept her clothes. If she ever said “Grandmother, what big arms you have!” it was without any amazement whatsoever.

But the wolf kept quiet and if he ever fell upon the girl, he did not eat her all up.

Moral: Children, especially attractive, well bred young ladies, may talk to strangers, for they can do so without providing dinner for a wolf. I shouldn’t use the word “wolf”, because there is only one kind of wolf. Wolves who are charming, quiet, polite, unassuming, complacent and sweet and pursue young women at home and in the streets don’t exist. There is greater danger than those gentle wolves.

———————————————-

Little Red Riding Hood

By Charles Perrault

Once upon a time there lived in a certain village a little country girl, the prettiest creature who was ever seen. Her mother was excessively fond of her; and her grandmother doted on her still more. This good woman had a little red riding hood made for her. It suited the girl so extremely well that everybody called her Little Red Riding Hood.

One day her mother, having made some cakes, said to her, “Go, my dear, and see how your grandmother is doing, for I hear she has been very ill. Take her a cake, and this little pot of butter.”

Little Red Riding Hood set out immediately to go to her grandmother, who lived in another village.

As she was going through the wood, she met with a wolf, who had a very great mind to eat her up, but he dared not, because of some woodcutters working nearby in the forest. He asked her where she was going. The poor child, who did not know that it was dangerous to stay and talk to a wolf, said to him, “I am going to see my grandmother and carry her a cake and a little pot of butter from my mother.”

“Does she live far off?” said the wolf.

“Oh I say,” answered Little Red Riding Hood; “it is beyond that mill you see there, at the first house in the village.”

“Well,” said the wolf, “and I’ll go and see her too. I’ll go this way and go you that, and we shall see who will be there first.”

The wolf ran as fast as he could, taking the shortest path, and the little girl took a roundabout way, entertaining herself by gathering nuts, running after butterflies, and gathering bouquets of little flowers. It was not long before the wolf arrived at the old woman’s house. He knocked at the door: tap, tap.

“Who’s there?”

“Your grandchild, Little Red Riding Hood,” replied the wolf, counterfeiting her voice; “who has brought you a cake and a little pot of butter sent you by mother.”

The good grandmother, who was in bed, because she was somewhat ill, cried out, “Pull the bobbin, and the latch will go up.”

The wolf pulled the bobbin, and the door opened, and then he immediately fell upon the good woman and ate her up in a moment, for it had been more than three days since he had eaten. He then shut the door and got into the grandmother’s bed, expecting Little Red Riding Hood, who came some time afterwards and knocked at the door: tap, tap.

“Who’s there?”

Little Red Riding Hood, hearing the big voice of the wolf, was at first afraid; but believing her grandmother had a cold and was hoarse, answered, “It is your grandchild Little Red Riding Hood, who has brought you a cake and a little pot of butter mother sends you.”

The wolf cried out to her, softening his voice as much as he could, “Pull the bobbin, and the latch will go up.”

Little Red Riding Hood pulled the bobbin, and the door opened.

The wolf, seeing her come in, said to her, hiding himself under the bedclothes, “Put the cake and the little pot of butter upon the stool, and come get into bed with me.”

Little Red Riding Hood took off her clothes and got into bed. She was greatly amazed to see how her grandmother looked in her nightclothes, and said to her, “Grandmother, what big arms you have!”

“All the better to hug you with, my dear.”

“Grandmother, what big legs you have!”

“All the better to run with, my child.”

“Grandmother, what big ears you have!”

“All the better to hear with, my child.”

“Grandmother, what big eyes you have!”

“All the better to see with, my child.”

“Grandmother, what big teeth you have got!”

“All the better to eat you up with.”

And, saying these words, this wicked wolf fell upon Little Red Riding Hood, and ate her all up.

Moral: Children, especially attractive, well bred young ladies, should never talk to strangers, for if they should do so, they may well provide dinner for a wolf. I say “wolf,” but there are various kinds of wolves. There are also those who are charming, quiet, polite, unassuming, complacent, and sweet, who pursue young women at home and in the streets. And unfortunately, it is these gentle wolves who are the most dangerous ones of all.

 


Index

Here’s the index of my productions, ’cause this is starting to get messy:

My games

My tools

See github for now

Picture book

Short stories

present in $self:

Very old stuff in French