What is it like to be an algorithm

What is it like to be an algorithm? What does it mean to understand something?
We will never know if an AI is conscious, any more than I can be certain that you are conscious. But we can try to put ourselves in its shoes and see the world through its eyes.


There’s a trend in machine learning to amass a lot of data and draw conclusions without really “understanding” it. Critics have claimed that this kind of AI, like GPT, may seem to produce impressive results but does not really understand the world. And to a large extent I agree, even though it still has merits (see our podcast episode on this ^^).

This work makes you see what a machine learning algorithm does. You’ll see text designed to carry as few connotations as possible. If you can draw meaning from a succession of symbols without any reference to the real world, then so can an AI. If we all converge to the same kind of semantics, whatever it may be, then it shows that this semantics is universal and that an algorithm could access it too.

Let’s explore this question by turning it into a collaborative interpretation exercise!

………………..

 

………………..

..ᚨᛃ……………

..ᚨᛃᛟ………….ᚨᛊᛟ.

..ᚨᛃᛟᛟ…..ᚨᛊᛟᛟ……..

..ᚨᛃᛟᛟᛟ.ᚨᛊᛟᛟᛟ…………

..ᚨᛃᛟᛟ…..ᚨᛊᛟᛟ……..

..ᚨᛃᛟ………….ᚨᛊᛟ.

..ᚨᛃ……………

………………..

 

.ᚨᛃᛟᛟ…….ᚢ…….ᚨᛊᛟᛟ.

….ᚨᛃᛟᛟ….ᚢ…….ᚨᛊᛟᛟ.

……ᚨᛃᛟᛟ.ᚢᛃᛟ…….ᚨᛊᛟᛟ.

..ᚨᛃᛟᛟ…..ᚢᛃᛟ…….ᚨᛊᛟᛟ.

………ᚢᛃᛟ…….ᚨᛊᛟᛟ.

………ᚢᛃᛟ….ᚨᛊᛟᛟ….

………ᚢᛃᛊᛟᛟ.ᚨᛊᛟᛟ……..

………ᚢᛃᛊᛟᛟ……ᚨᛊᛟᛟ..

………ᚢᛃᛊᛟᛟ………

………ᚢᛃᛊᛟᛟ………

………ᚢᛃᛊᛟ………

………ᚢᛃᛊ………

………ᚢᛃᛊ………

Making a self-aware twitter AI with GPT2

The story so far

It was more than a year ago that I had my playing-with-GPT2 phase, which resulted in a short story co-written with the AI and this little blog http://yo252yo-a.tumblr.com/, which I kinda stopped updating after a while.

But I was bound to come back to it some day! It all started when I decided to open a twitter account for my podcast. I very naturally made a little script (driven from a Google Spreadsheet ^^) so that I could enqueue and schedule tweets, obviously. I also went back in time to the archive of my facebook/tumblr/whatever posts to see what could fit this new account, since I posted so many enlightening things over the years xD
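I don’t have the original script at hand, but the gist of it is tiny. Here is a minimal sketch of the “post the next tweet from the queue” idea, assuming a Google Sheet named “tweet_queue” with one tweet per row, the gspread and tweepy libraries, and placeholder credentials:

```python
# Minimal sketch: pop the next tweet from a Google Sheet queue and post it.
# Sheet name, credential file and API keys are placeholders, not the real setup.
import gspread
import tweepy

gc = gspread.service_account(filename="service_account.json")
sheet = gc.open("tweet_queue").sheet1

rows = sheet.get_all_values()
if rows:
    next_tweet = rows[0][0]  # tweet text lives in column A

    auth = tweepy.OAuth1UserHandler(
        "API_KEY", "API_SECRET", "ACCESS_TOKEN", "ACCESS_SECRET")
    tweepy.API(auth).update_status(next_tweet)  # post it

    sheet.delete_rows(1)  # remove it from the queue
```

Run something like this from a cron job (or a time-driven trigger if you do it as an Apps Script instead) and you get the scheduling for free.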

Once this was in place, it was like my twitter account was managed by a nice little bot (who was simply posting things from a queue, but still). As its parent, I obviously wanted to see it grow: how cool would it be if it could learn and evolve by itself? Could it ever be self-aware (lol)? After all, it already had access to twitter, and it had a collection of my tweets to learn from.

Setup

So I dusted off my colab repository of GPT2, since GPT3, despite all the hype, remains pretty inaccessible. Most notably, I had to make it work with an old version of tensorflow (the recent versions broke it), and I also made it read and write directly to Google Spreadsheet /o/ In the end, I only had to run the code in the colab to fetch the data, train on it, and post the output directly in the queue to be tweeted. Pretty sweet setup.
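For the curious, the whole colab boils down to something like the sketch below: pin an old TensorFlow 1.x (the gpt-2-simple wrapper I use needs it, as far as I remember), pull the training data from the spreadsheet, fine-tune, and push the candidates back into the queue. The version pin, sheet name and step count are from memory, so treat them as placeholders.

```python
# Colab sketch: fetch the data, train on it, post the results into the queue.
!pip install tensorflow==1.15.2 gpt-2-simple gspread

import gpt_2_simple as gpt2
import gspread

gc = gspread.service_account(filename="service_account.json")
queue = gc.open("tweet_queue").sheet1

# 1. Fetch the data: one tweet per line.
with open("tweets.txt", "w", encoding="utf-8") as f:
    for row in queue.get_all_values():
        f.write(row[0] + "\n")

# 2. Train on it.
gpt2.download_gpt2(model_name="124M")
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="tweets.txt", model_name="124M",
              steps=1000, run_name="tweets")

# 3. Generate candidates and enqueue them to be tweeted (after human review).
sess = gpt2.reset_session(sess)
gpt2.load_gpt2(sess, run_name="tweets")
for candidate in gpt2.generate(sess, run_name="tweets",
                               nsamples=20, return_as_list=True):
    queue.append_row([candidate])
```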

The problem is that GPT2 produces mostly crap. And I didn’t know what temperature or training set would be ideal for my purposes. It was time to experiment!

Validation

I ran several training sets at several temperatures. For each, I personally annotated 200 results. I don’t think the results are super significant, but it’s better than nothing.

The success criterion was: is this tweetable (i.e. relatively grammatically correct, at least a bit interesting/surprising, and of course different from the training set). The good samples will be posted on our twitter with the hashtag #shitmygpt2says.

Training sets

The basic training set was the queue of all our tweets for the podcast twitter account, including the archive of all my past tumblr/facebook posts that I sanitized for the occasion (a lot of work xD).

But as in my previous attempts, I thought it was a bit sad to limit myself to things I produced myself when I had the perfect chance to merge my brain with the people I admire. Furthermore, I kinda wanted to make my twitter AI standalone and able to “learn” as time passes, even though GPT really isn’t the best framework for that ^^

I ended up making a twitter list of people I admire, and used their recent tweets in my dataset. The idea was to make my model aware of “recent events”, recent words, etc…

Still, I wanted the writing style to remain distinctly mine. That is accounted for in the success criterion, and the core of this experiment was: “how should I mix the training sets to keep awareness of the recent world but still control the style of the output?”.

Sequential vs merging

In my previous attempts, I mostly used a “merging” approach, feeding everything together in a single training phase. The alternative is to feed the two corpora in succession during training.

From what I observed, it seems that GPT2 absorbs the style of whatever it was fed last, even if it is for very few training epochs. For instance, when I fed it corpus A for 1.5k epochs and then corpus B for 100 epochs, it produced results that looked like corpus B, even though it exhibited some signs of having learned A every now and then (pretty rarely though, that’s why I kept so many epochs in the first phase of training).

I kinda think of it with a cooking metaphor: I first marinate the model in corpus A and then lightly sear it with corpus B.
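In terms of the gpt-2-simple wrapper I use (see the setup above), the marinade/sear recipe is just two successive finetune calls that share a checkpoint. A sketch, with the caveat that the library counts training steps, which I’m loosely calling epochs in this post:

```python
# "Marinade then sear": sequential fine-tuning on two corpora.
# Corpus A (e.g. other people's tweets) for a long phase, then corpus B
# (my own tweets) for a short phase so that B's style dominates the output.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")  # skip if already downloaded

# Phase 1: marinate in corpus A.
sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="corpus_a.txt", model_name="124M",
              steps=1500, run_name="marinade_sear")

# Phase 2: lightly sear with corpus B, resuming from the phase-1 checkpoint.
sess = gpt2.reset_session(sess)
gpt2.finetune(sess, dataset="corpus_b.txt", model_name="124M",
              steps=100, run_name="marinade_sear",
              restore_from="latest", overwrite=True)
```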

Here are the experimental results that loosely validate this:

We notice here, btw, that the merging strategy does pretty poorly, because consistency of the training set matters a lot with GPT2. The first three lines did not exhibit a strong difference, which makes me believe that 1k epochs is enough for GPT2 to “forget” the initial corpus; that is how I ended up with the 1.5k/100 mix, which gave me the best outcomes.

Best parameters

Here is the total result of my experiments. GPT2 produces around 93% crap, which makes sanitizing a pretty tough job ^^ It appears that this can drop to 80% or below by using the “marinade/searing” technique correctly and keeping the training set uniform.

As is widely known, temperatures below 0.8 are pretty bad, but I often find myself pushing above 1, even though that did pretty poorly with my best data sets. I’ll keep using different temperatures, as they produce different types of results that I enjoy in their own way. But I’ll probably stop using text corpora as a base (past writings, Night Vale scripts, etc.) because they don’t seem to bring anything to the table (and could even be detrimental; better to stick to tweets).
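For reference, temperature is just a knob on the generation call, so the sweep itself is trivial; the slow part is annotating the output by hand. A sketch (the run name and values are the ones loosely discussed above):

```python
# Sample candidate tweets at a few temperatures for manual annotation.
import gpt_2_simple as gpt2

sess = gpt2.start_tf_sess()
gpt2.load_gpt2(sess, run_name="marinade_sear")  # checkpoint from the training above

for temp in (0.7, 0.8, 1.0, 1.2):
    samples = gpt2.generate(sess, run_name="marinade_sear",
                            temperature=temp, nsamples=50, length=60,
                            return_as_list=True)
    # ...then annotate by hand: tweetable (#shitmygpt2says material) or crap?
```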

So we’re pretty far from a self-aware AI that learns from its mistakes, but since I’ll keep retraining it on recent tweets, and since my own tweets will include the proposals it made that I kept, I hope that as time passes it’ll still learn to get a bit better (it has already started annotating posts with the #shitmygpt2says hashtag itself).

In the future, I’ll run this every now and then in its best configuration, and keep posting on twitter with the hashtag #shitmygpt2says. Stay tuned if you’re interested!

Resources for self-teaching Japanese

Hi! So I’ve been self-teaching Japanese for a while now, and I think some of the resources I’ve built over the years may be of interest to people, so I’ll centralize them here. I will also add a couple of recommendations, but I’ll try to keep it light. I’ll highlight the stuff I produced in blue. Most of these are actively used and worked on every day, so you’ll see some traces of my daily regimen; please be lenient 🙂

Before you start

  • Japanese is probably one of the hardest languages in the world to learn, especially if you come at it from a “western” language (it’s just so different). It is going to take a lot of time, so the most important thing is motivation and stamina. It’s a marathon, not a sprint. Make sure you enjoy it.
  • Don’t expect logic and consistency. This language is an amazing mess built up in strata in the most chaotic way possible. You’re better off going into it assuming there’s no one-to-one mapping between writing, pronunciation and meaning, and no reason why a particular character has this or that radical. You basically have to learn all the words by heart.
  • There are pretty few syllables in Japanese compared to most languages, meaning there are going to be a lot of homophones, ambiguity, etc. Incidentally, that’s why they cannot really get rid of kanjis.
  • Worse, speaking tends to deform the language quite a bit (kinda like French), so a lot of the time you’ll hear contractions, accents, etc. that make it impossible to find the corresponding word/grammar point in dictionaries. To make things worse, this is especially true of beginner materials: everything tailored towards children tends to use “baby talk” and therefore not the standard pronunciation of words. Yay.
  • I have the opposite of a “knack” for this language, so your experience will probably be smoother than mine xD

The beginnings

The beginnings are nice because there’s a lot of free content for them, so don’t pass up this chance! It’s the stage where you can learn with games, on your phone or computer. Sadly I missed out on most of it, so I don’t have more precise recommendations xD

You probably want to start by learning the alphabets and then some basic grammar. I highly recommend Tae Kim’s guide to learning Japanese, which is one of the best things I’ve seen online:

http://www.guidetojapanese.org/learn/

Otherwise, the NHK also has nice resources, such as its news in easy Japanese.

On YouTube there’s a lot of stuff. Some channels I like are JapanesePod101 or Name Ohara.

I also like fluentu.

Immersion

You also want to listen to a lot of content in Japanese. Fortunately, this is the age of the internet, and even if it’s not as open as it used to be, we’ve never had so much media content available.

Here’s a list of some anime I found easier to understand: myanimelist. This guide is amazing and a little more thorough.

Watching things in Japanese with Japanese subtitles is ideal of course, but they’re pretty hard to find. Netflix is one of the rare platforms that does it pretty consistently. A lot of people on YouTube like to embed some or all of what is said on the screen, so that’s something. There are a few people who aggregate subtitles.

The great thing about having subtitles is that it makes it super easy to note down what you don’t know for review later. For that, you definitely want to use Anki, the de facto standard for flashcards, which means there are a lot of add-ons, support, etc. There are a lot of premade decks, but I think it’s also nice to make your own vocabulary cards.

This allows for nice automated setups. Matt, a pioneer of the Mass Immersion Approach (do check it out, it’s great), made a great tutorial about his setup. If you’re more into software than streaming, there are approaches like this one, which can dig into your software to find and extract the text in it (probably a bit more advanced, but less Netflix-centric).

Matt makes his flashcards himself, even with his automated setup. I made an Anki add-on to make cards for me: I just give it a list of words and it adds them to my Anki. Pretty convenient:

https://github.com/yo252yo/anki_addon
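The add-on itself lives at the link above. Just to illustrate the “give it a list of words, get cards” idea, here is a rough equivalent going through the AnkiConnect add-on’s local HTTP API instead (deck and note-type names are placeholders; this is not the code of my add-on):

```python
# Illustration only: push a list of words into Anki via AnkiConnect
# (a separate add-on exposing a local HTTP API on port 8765).
import json
import urllib.request

def anki(action, **params):
    payload = json.dumps({"action": action, "version": 6, "params": params}).encode()
    with urllib.request.urlopen("http://localhost:8765", payload) as resp:
        return json.load(resp)

for word in ["勉強", "漫画", "面倒"]:
    anki("addNote", note={
        "deckName": "Japanese",                 # placeholder deck name
        "modelName": "Basic",                   # placeholder note type
        "fields": {"Front": word, "Back": ""},  # definition filled in later
        "options": {"allowDuplicate": False},
    })
```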

Reading

Don’t worry if you don’t have access to Japanese literature; the internet is your playground for reading material. This Chrome extension fetches the reading and definition of kanjis you highlight; this one adds furigana to any existing page. Karaoke videos on YouTube or niconico are great, and there are game scripts that can be fun too.

My favorite dictionary is www.jisho.org.

Outside of Chrome, this little program does pretty decent kanji OCR: https://www.kanjitomo.net/

If you ever go to Japan, you can buy books very cheaply at Book-off.

Kanjis

So here’s the big one: how do you learn by heart 2000 symbols that each have several meanings and pronunciations, and where visual similarity or construction doesn’t mean much? XD I struggle. I’d recommend forgetting about kunyomi, onyomi, etc. and just learning all the possible pronunciations, because it’s just too messy. And that’s not even going into proper nouns…

About the rhythm: one kanji per day is probably ideal. I know that means the kanjis will take you years, but the language will take you years anyway, so you might as well really master the kanjis instead of plowing through them.

Anyway my favorite kanji dictionary is

http://kanjidamage.com/

because it’s low-key. It does a pretty decent job of explaining kanji decomposition and coming up with a good order to learn them in, but I was still unsatisfied, so I made my own learning order, based on frequency of use in newspapers, JLPT level, the school grade at which it’s taught in Japan, and frequency of appearance in K-ON. But most importantly, I’ve been really thorough with the decomposition of each kanji into subcomponents, which is rarely done well. So please enjoy my work (and note that it grows every day, as I’m still learning):

https://docs.google.com/spreadsheets/d/1xyXL5PGTH01B3c1IiMl-4MIkcRDFA8Xj-wnn7PLXB_g/edit?usp=sharing

More importantly, this also contains, for each kanji, all the other kanjis that are similar to it, visually or semantically. This is a great resource that doesn’t exist anywhere else, and one you’ll appreciate if you’re like me and keep getting them mixed up. It’s made mostly from personal experience, with the help of this kanji similarity graph project.

Finally, since I kept mixing up kanjis, I thought I’d try to leverage my spatial brain and make some kind of kanji map using graphviz. I ended up making several versions of the maps; you can find the code at

https://github.com/yo252yo/kanjigraphs

and here’s an example of what it looks like (highlighting the stuff I need to pay attention to):

[image: example kanji map]
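The repo above has the real code, but the core idea is tiny: one graphviz node per kanji, one edge per pair I keep confusing. A minimal sketch (the pairs listed here are only examples from my own confusion list):

```python
# Sketch of the kanji-map idea: nodes are kanjis, edges link kanjis that are
# easy to mix up (visually or semantically). Pairs are illustrative.
from graphviz import Graph

confusable = [
    ("未", "末"),
    ("待", "持"),
    ("側", "測"),
]

g = Graph("kanji_map", format="png")
g.attr("node", fontname="Noto Sans CJK JP")  # needs a font that can render kanji

for a, b in confusable:
    g.edge(a, b)  # nodes are created implicitly

g.render("kanji_map")  # writes kanji_map.png
```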

Advanced

Once you have a basic understanding of Japanese, you can start to go deeper. My expertise sort of ends here, but I want to point out a couple of things:

Advanced grammar is often presented as “grammar points“, which I think is super great (think “one point per day”, for instance). I’m aggregating grammar points from japanesetest4you.com, japanese-teacher.tanosuke.com and nihongokyoshi-net.com in this spreadsheet, which I use for learning.

At this point, you’re also probably realizing that you’re gonna have to learn proper nouns, and that means even more ways to read kanjis. It pretty much consists of memorizing all the common proper noun patterns. I gathered the most common first/last names, but also all the important geographic/historic/mythological/cultural names, in the following spreadsheet that I’m still actively working on: https://docs.google.com/spreadsheets/d/1V6rQCtsDtI4uhU1TAcYIh-LQpeJ3ipOWJiM8Y73bYjY/edit#gid=420678685

I hope that in the end it will contain everything you need to understand references/inside jokes in conversations, like the ads that everyone has seen, etc.

I also use this anime character database to try and see which proper nouns each kanji is frequently part of.

Playing around with GPT2

So lately I’ve been spending a fair amount of time toying with GPT2, which made the headlines for producing text so believable that it was considered too dangerous to release (the publicly available GPT2 is the toned-down version).

ML and Reddit

I started by getting hooked on this GPT2 generated subreddit:

https://www.reddit.com/r/SubSimulatorGPT2/

Which I highly recommend everyone read daily, as an exercise in critical thinking and in challenging the natural human bias to trust everything you see. I especially enjoy the posts trained on r/totallynotrobots, which are basically robots pretending to be humans pretending to be robots pretending to be humans.

It wasn’t long before I tried it for myself. I’ve long wanted to download all my social media posts and train some kind of ML on it, and GPT2 seemed like the state of the art.

Torch RNN

Somehow I started by messing around with Torch RNN, the previous state of the art I guess, made accessible through this tutorial, which gave us such gems as a PBS Idea Channel episode, a genius BuzzFeed skit, and the relatively famous short film Sunspring.

Both Torch RNN and GPT2 are pretty similar in the way they are used (Torch RNN is built on Torch/Lua, while GPT2 runs on TensorFlow under the hood). They both end up giving you a model that kinda knows English, and they expect as input a txt file of example lines.

But training took ages on my computer (like a whole night for a couple of iterations) because, despite being fairly powerful, its GPU isn’t supported by the ML training optimizations (sad). I had little hope that anything more sophisticated would be possible on my machine.

Enter colab

Fortunately, people are sometimes really great: not only did Max Woolf make a wrapper that makes GPT2 easy to use, he also made a Colaboratory notebook that makes it dead simple to use and, most importantly, computationally sustainable, since it runs on a Google Compute Engine VM with some sort of free quota. It has a very nice Google Drive integration that makes it easy to save trained models or upload new training data. With this, you can train a model in less than an hour, which makes it really easy to play with.
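From memory, the whole colab loop fits in a handful of calls from that wrapper; file and run names below are placeholders:

```python
# gpt-2-simple colab loop in a nutshell: grab a pre-trained model, pull the
# training txt from Google Drive, fine-tune, save the checkpoint back to
# Drive, and generate.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")     # pre-trained model that kinda knows English
gpt2.mount_gdrive()                       # the Google Drive integration
gpt2.copy_file_from_gdrive("blog_posts.txt")

sess = gpt2.start_tf_sess()
gpt2.finetune(sess, dataset="blog_posts.txt", model_name="124M",
              steps=1000, run_name="blog")
gpt2.copy_checkpoint_to_gdrive(run_name="blog")  # save the trained model

sess = gpt2.reset_session(sess)
gpt2.load_gpt2(sess, run_name="blog")
gpt2.generate(sess, run_name="blog", temperature=0.9, nsamples=5)
```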

Getting data

First of all, it’s been extremely easy to download all my data from social networks (here I’m talking about Google, Facebook, Tumblr, Discord and WordPress). Everything has a dump/archive function now (courtesy of EU law, I believe?), so that definitely made my life easier. A bit of Python scripting to transform the JSON or XML into txt and we were good to go.
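The conversion scripts were nothing fancy. For a JSON dump it can be as small as the sketch below; the “posts”/“text” keys are illustrative, since each platform’s export uses its own schema:

```python
# Turn a social-media JSON dump into the one-post-per-line txt file
# that the GPT2 fine-tuning expects.
import json

with open("dump.json", encoding="utf-8") as f:
    data = json.load(f)

with open("corpus.txt", "w", encoding="utf-8") as out:
    for post in data["posts"]:                # key depends on the platform
        text = post.get("text", "").strip()   # so does this one
        if text:
            out.write(text.replace("\n", " ") + "\n")
```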

The outcome

I first started the training on the posts of this blog. The outcome was pretty convincing. It felt pretty weird and special to see these lines that felt like something I could have written but actually didn’t. It really seemed like another version of me, which of course tickled my philosophy bone.

Obviously the result wasn’t perfect. It often spouts out nonsensical stuff, but I very much enjoyed weeding out the absurd or malformed propositions to keep the ones that made sense by conventional human standards (let’s say I had around 1 satisfying proposal per 5 results on average).

This way, I had the program write a short story for this blog. I gave it the prompt you see in bold and chose among the completions it proposed; I did not add any text myself. As you can see, it’s a bit weird. In particular, it doesn’t really lead anywhere; I think GPT2 isn’t very teleological. That was definitely a challenge for a short story ^^ But I like to think that the style is pretty convincing.

And the overall exercise is far from absurd. It reminded me of the écriture automatique (automatic writing) productions of the surrealists. It’s still an easier read than Naked Lunch. Really gets you thinking about the self, art and authorship, doesn’t it? Who wrote this story in the end? What if I hadn’t done any editing? What does it mean for copyright?

Literary corpora

Prompted by these questions, I trained several models on works of art that I thought would produce interesting outputs. I put all my favorite results on

http://yo252yo-a.tumblr.com

In particular, I trained a model on the Hitchhiker’s Guide to the Galaxy (which produced a lot of “bits of story” and dialogs that were not really usable as standalone excerpts), Welcome to Night Vale scripts (which were pretty convincing, especially when you prompt it with a phrase from the show like “And now, a look at the community calendar!”), and all of Homestuck (which was pretty challenging to get anything good out of).

Once I had all these pretty OK results, I immediately proceeded to try merging my brain (at least its model copy) with the brains of the authors I admire (at least their model copies). The result was a mess until I had the great idea to feed the input corpora not all at once in parallel but in sequence (i.e. do 1000 rounds of training on the authors’ corpus, and then 1000 rounds on mine). The results were pretty nice.

This taught me the single most important fact about playing with GPT2: it’s all about your training data. The parameters (# training rounds, “temperature”) can’t really save you if your input data isn’t the best it can be. You want it as clean and uniform as possible. Which is really the core point of the next section.

Social media corpora

I trained GPT2 models on my conversations and emails, but they were all utter failures. The fact that I often use several languages certainly doesn’t help, but the trouble I had with the Homestuck corpus makes me believe that GPT2 is simply not great with dialogs and conversations.

I even tried to sanitize my input further, prefixing my own lines of dialog with “-” and my interlocutor’s with “>”, in the hope of starting a conversation with the GPT2 model, but I couldn’t get anything out of it. Maybe if I went over the corpus manually and kept only the meaningful messages I’d get something different, but that sounds daunting.
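Concretely, that sanitization was just a prefixing pass over the exported conversations, something in the spirit of the sketch below (the speaker-TAB-message input format is an assumption about the export, and the handle is a placeholder):

```python
# Prefix my own lines with "-" and my interlocutor's lines with ">",
# hoping GPT2 picks up on the turn-taking structure.
ME = "yo252yo"  # placeholder handle

with open("conversation.tsv", encoding="utf-8") as f, \
     open("dialog_corpus.txt", "w", encoding="utf-8") as out:
    for line in f:
        speaker, _, message = line.rstrip("\n").partition("\t")
        prefix = "-" if speaker == ME else ">"
        out.write(f"{prefix} {message}\n")
```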

Needless to say, merging this with my blog post corpus was also pretty bad, so in the end I stuck to the blog corpus.

By the way, I also tried to train a model on a list of J.K. Rowling’s retcon tweets to get crispy new intel about Harry Potter canon variations, but I couldn’t get it to produce anything new.

Conclusions

  • GPT2 on colab is extremely easy.
  • Your training corpus is everything, really.
  • GPT2 does great with literary types of text but sucks a bit at conversations/informal speech.

Next steps

As intoxicating as it is to watch a ghost of myself produce believable texts, I’m not sure where it leads ^^ My ultimate goal would be to produce some sort of system I can interact with and teach dynamically so it gets better (i.e. conversational, with dynamic retraining), but that seems pretty rare in the world of generative ML models. I might have to dig deeper into TensorFlow, but I can’t really do that on my current machine, so I’m kinda stuck.

I have a couple of pointers for conversational ML (still no dynamic/online/interactive/reinforcement learning though, which limits the interest), but I expect them to be not as good as GPT2. I haven’t had time to try them yet (they probably require more power than I have). The dream, I guess, would be to combine that with GPT2 and figure out a way to dynamically retrain the model on itself.

In any case, it feels really nice to see some progress in my Caprica dream.

Nerdy Christmas

Gotta love Christmas! ’Tis the season to be jolly, or suicidal, depending on your situation.

I often marvel at how Christmas has become simultaneously

– a religious/spiritual holiday

– a commercial/capitalist holiday

– a family holiday

– a romantic holiday

– a suicide holiday

All in one like… A holiday to rule them all

Anyway, I try to celebrate every aspect of it in all its cultural diversity, as you can see in this wonderful gift I want to share with you, dear people: Yoann’s weirdass Christmas playlist. Enjoy, but also be prepared lol ^.^

And now time for my holiday tradition that I made 20% christmasier this year:

[image: my holiday tradition]

 

also bonus: an arbitrary binary search christmas tree:

[image: binary search christmas tree]

The girl who was either tall or wearing a riding hood that wasn’t red or no riding hood at all

The following is an approximate logical negation (arranged to kinda make sense a little) of the Little Red Riding Hood tale from Charles Perrault. The original can be read at the bottom =) #nerd
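For the logic nerds, the negation mostly amounts to pushing one big ¬ through the story with the usual rules, applied loosely:

```latex
\neg (A \land B) \equiv \neg A \lor \neg B
   % "at least one of the following things happened"
\neg (A \lor B) \equiv \neg A \land \neg B
\neg \exists x\, P(x) \equiv \forall x\, \neg P(x)
   % "during all eternity, in all the villages, there never was..."
\neg \forall x\, P(x) \equiv \exists x\, \neg P(x)
```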

 

The girl who was either tall or wearing a riding hood that wasn’t red or no riding hood at all

Not by Charles Perrault

During all eternity, in all the villages, there never was any little country girl who was the prettiest girl ever seen, but there was once a random girl. Maybe her mother was not excessively fond of her, or maybe her grandmother doted on her less or equally. In any case this good woman never had a little red riding hood made for her. Therefore, no one called her Little Red Riding Hood, even though it would have suited the girl extremely well.

Every day of her life, if ever her mother made some cake, she never spoke the words “Go, my dear, and see how your grandmother is doing, for I hear she has been very ill. Take her a cake, and this little pot of butter.”

Therefore, the girl, who was either tall or wearing a riding hood that wasn’t red or no riding hood at all, did not set out to go to her grandmother who lived in another village, or if she did, she waited a little before.

Every time she went through the wood, she did not meet a wolf who had a very great mind to eat her up. She did meet some other wolf though, but they didn’t ask her where she was going. Therefore, even though she may not know that it was dangerous to stay and talk to a wolf, she never said “I am going to see my grandmother and carry her a cake and a little pot of butter from my mother.”

It follows that no discussion occurred, and if any ever took place, it was most certainly about other topics. Maybe the wolf didn’t run as fast as he could, or didn’t take the shortest path. Maybe the girl took the direct way, didn’t gather nuts, didn’t run after butterflies or didn’t gather bouquets of little flowers. But the wolf did not arrive at the old woman’s house before a long time. When he did, he obviously didn’t knock.

The good grandmother must have been ill and out of bed if she ever cried out “Pull the bobbin, and the latch will go up.”

Then at least one of the following things happened: the wolf didn’t pull the bobbin, the door didn’t open, the wolf didn’t fall upon the good woman, or the wolf didn’t eat her in a moment (it had been less than or equal to three days since he had last eaten). If he got into the grandmother’s bed, he forgot to shut the door. It follows that even if the girl came some time afterwards, she did not knock on the door either.

Since nobody said anything, the girl didn’t hear the big voice of the wolf and was not afraid. She did not believe that her grandmother had a cold or was hoarse. The wolf did not see her come in and didn’t hide under the bedclothes. The girl, if she ever went into bed, most certainly kept her clothes. If she ever said “Grandmother, what big arms you have!” it was without any amazement whatsoever.

But the wolf kept quiet and if he ever fell upon the girl, he did not eat her all up.

Moral: Children, especially attractive, well bred young ladies, may talk to strangers, for they can do so without providing dinner for a wolf. I shouldn’t use the word “wolf”, because there is only one kind of wolf. Wolves who are charming, quiet, polite, unassuming, complacent and sweet and pursue young women at home and in the streets don’t exist. There is greater danger than those gentle wolves.

———————————————-

Little Red Riding Hood

By Charles Perrault

Once upon a time there lived in a certain village a little country girl, the prettiest creature who was ever seen. Her mother was excessively fond of her; and her grandmother doted on her still more. This good woman had a little red riding hood made for her. It suited the girl so extremely well that everybody called her Little Red Riding Hood.

One day her mother, having made some cakes, said to her, “Go, my dear, and see how your grandmother is doing, for I hear she has been very ill. Take her a cake, and this little pot of butter.”

Little Red Riding Hood set out immediately to go to her grandmother, who lived in another village.

As she was going through the wood, she met with a wolf, who had a very great mind to eat her up, but he dared not, because of some woodcutters working nearby in the forest. He asked her where she was going. The poor child, who did not know that it was dangerous to stay and talk to a wolf, said to him, “I am going to see my grandmother and carry her a cake and a little pot of butter from my mother.”

“Does she live far off?” said the wolf.

“Oh I say,” answered Little Red Riding Hood; “it is beyond that mill you see there, at the first house in the village.”

“Well,” said the wolf, “and I’ll go and see her too. I’ll go this way and go you that, and we shall see who will be there first.”

The wolf ran as fast as he could, taking the shortest path, and the little girl took a roundabout way, entertaining herself by gathering nuts, running after butterflies, and gathering bouquets of little flowers. It was not long before the wolf arrived at the old woman’s house. He knocked at the door: tap, tap.

“Who’s there?”

“Your grandchild, Little Red Riding Hood,” replied the wolf, counterfeiting her voice; “who has brought you a cake and a little pot of butter sent you by mother.”

The good grandmother, who was in bed, because she was somewhat ill, cried out, “Pull the bobbin, and the latch will go up.”

The wolf pulled the bobbin, and the door opened, and then he immediately fell upon the good woman and ate her up in a moment, for it had been more than three days since he had eaten. He then shut the door and got into the grandmother’s bed, expecting Little Red Riding Hood, who came some time afterwards and knocked at the door: tap, tap.

“Who’s there?”

Little Red Riding Hood, hearing the big voice of the wolf, was at first afraid; but believing her grandmother had a cold and was hoarse, answered, “It is your grandchild Little Red Riding Hood, who has brought you a cake and a little pot of butter mother sends you.”

The wolf cried out to her, softening his voice as much as he could, “Pull the bobbin, and the latch will go up.”

Little Red Riding Hood pulled the bobbin, and the door opened.

The wolf, seeing her come in, said to her, hiding himself under the bedclothes, “Put the cake and the little pot of butter upon the stool, and come get into bed with me.”

Little Red Riding Hood took off her clothes and got into bed. She was greatly amazed to see how her grandmother looked in her nightclothes, and said to her, “Grandmother, what big arms you have!”

“All the better to hug you with, my dear.”

“Grandmother, what big legs you have!”

“All the better to run with, my child.”

“Grandmother, what big ears you have!”

“All the better to hear with, my child.”

“Grandmother, what big eyes you have!”

“All the better to see with, my child.”

“Grandmother, what big teeth you have got!”

“All the better to eat you up with.”

And, saying these words, this wicked wolf fell upon Little Red Riding Hood, and ate her all up.

Moral: Children, especially attractive, well bred young ladies, should never talk to strangers, for if they should do so, they may well provide dinner for a wolf. I say “wolf,” but there are various kinds of wolves. There are also those who are charming, quiet, polite, unassuming, complacent, and sweet, who pursue young women at home and in the streets. And unfortunately, it is these gentle wolves who are the most dangerous ones of all.

 

A link to the past

I decided to use this blog as my main online portal, so I’ll keep links here to all the work I did during my studies:

Papers

Essays/notes

Projects

  • [2011] Multiplayer meme-themed Tetris game: game, sources, slides, report
  • [2011] Real-time determination of the orientation of a Rubik’s Cube, used to control a 3D model: video, report
  • [2010] Sqwarea, a massively multiplayer strategy game scalable on the Windows Azure cloud: video, slides
  • [2010] Development of an electronic circuit simulator (netlists) in order to simulate a 16-bit microprocessor (Harvard architecture) that we designed to run a watch: documentation, slides
  • [2010] Creation of a compiler from a subset of OCaml (supporting modules, the “object-oriented” part of this functional metalanguage) to MIPS assembly language, with strong typing: documentation
  • [2010] Implementation of Kruskal’s minimum spanning tree algorithm, applied to the traveling salesman problem: paper, slides
  • [2009 TIPE] Google PageRank and scale-free networks: report, annex, slides

Teaching

2012 – CM2 (5th grade), physics: “Creating, transforming and transmitting movement” (La main à la pâte)

Internships

  • [2013] Google New York: OneToday project selection
  • [2013] ePawn: Design of games and demos for an interactive board game table: video video video
  • [2012] Telecom Paristech VIA lab: video, report, paper, slides
  • [2011] Technicolor Palo Alto: patent1 patent2
  • [2010] INRIA MINT: Software to design and manipulate curves by plastic multitouch interactions: video, paper, slides

Jobs

  • [2015-2016]: Google London
  • [2016-]: Google Paris (YouTube)