Day 73: Code GANeration

I’ve had constant meetings over the past few work days, so I haven’t been too active in blogging, but I think today I’ll have a bit of time as I’m mostly just working on my presentation for tomorrow about GANs (Generative Adversarial Networks). It’s a topic that I’m quite interested in and wouldn’t mind exploring further, but probably not for the time being, as I doubt my masterpiece will be using a GAN model.

Warning: Spoilers for my talk tomorrow

To prepare for my talk tomorrow, I wanted to write about what I’m actually going to cover, so that I can drill it into my brain and not forget. Well, some of what I’ll be covering.

Okay, firstly, what is a GAN? In short, it’s a generative model and a discriminative model pitted against each other. While the generative model (G), well, generates some type of data (e.g. images, text), the discriminative model (D) attempts to determine whether that data is real (from the training set) or fake (generated by G). G is given feedback from D and adjusts accordingly in order to produce more realistic output. This process repeats until G makes data so realistic that D thinks it’s real. And that’s it.
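Since I’m drilling this into my brain anyway, here’s roughly what that back-and-forth looks like in code. This is only a minimal sketch in PyTorch with made-up layer sizes and a toy data dimension (not the actual model from my project); the point is just the alternation between training D on real versus generated batches and training G on D’s feedback.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; the sizes here are invented for illustration.
latent_dim, data_dim = 16, 64

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # 1) Train D: label real samples as "real" and G's samples as "fake".
    z = torch.randn(batch_size, latent_dim)
    fake_batch = G(z).detach()  # detach so this step doesn't update G
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train G: the "feedback from D" is literally the gradient of how
    #    real D thought the generated batch looked.
    z = torch.randn(batch_size, latent_dim)
    g_loss = bce(D(G(z)), real_labels)  # G wants D to answer "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```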

It’s a cool concept, no? It was proposed by Ian Goodfellow in 2014 (I’ve read so many of his papers) and when I read about it last December, I thought it was so great. I’ve read about how it can generate quite realistic images from a dataset, which is actually semi-terrifying when you think about it but also quite… great?

An example is the generation of celebrities from a public dataset of photos of celebrities’ faces.

Source: Karras et al.

In terms of textual data, GANs are actually not as effective, but work is ongoing, since text is discrete and therefore a bit harder to generate than images. Though I will say there are some hilarious texts that have been generated by GANs. For example, here are some generated by VGAN, which combines a GAN with a variational autoencoder. What a VAE is isn’t that important. Well, it is, but it’s not necessary to understand it here.

  • you just ate in first , but that is the best thing .
  • but that did give me a much healthier and healthy .
  • you are not sure why loves these cookies . i will be ordering these again

Our goal is similar to text generation, since we are also generating text, except that text is code. The grammar is therefore quite different and, since we’re using Brainfuck, so is the vocabulary: rather than 36 or so characters, it’s just 8 commands. You’d think that would make things easier… but it really doesn’t. Difficulties include the possibility of constructing infinite loops, forgetting the “.” command (which prints the current cell as a character), producing invalid ASCII codes, and the fact that there are many ways to represent a single letter depending on the location of the pointer. There’s probably more that I’m forgetting.
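To make the infinite-loop and “.” problems concrete, here’s a tiny Brainfuck interpreter sketch in Python. The step cap and tape size are arbitrary numbers I picked for illustration, not whatever our actual evaluation used; the example at the bottom also shows two different programs that print the same letter, which is part of why the output space is so awkward.

```python
def run_bf(code, max_steps=10_000):
    """Interpret Brainfuck `code`; bail out after max_steps so a generated
    program with a runaway loop can't hang the whole evaluation."""
    tape, ptr, out = [0] * 30_000, 0, []

    # Pre-match brackets; unmatched ones make the program invalid.
    stack, jumps = [], {}
    for i, c in enumerate(code):
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                return None  # stray ']' -> invalid program
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    if stack:
        return None  # unclosed '[' -> invalid program

    pc = steps = 0
    while pc < len(code) and steps < max_steps:
        c = code[pc]
        if c == '>':   ptr = (ptr + 1) % len(tape)
        elif c == '<': ptr = (ptr - 1) % len(tape)
        elif c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.': out.append(chr(tape[ptr]))  # without '.', nothing is ever printed
        elif c == ',': pass                        # generated programs get no input
        elif c == '[' and tape[ptr] == 0: pc = jumps[pc]
        elif c == ']' and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
        steps += 1
    return ''.join(out)

# Two different ways to print "A" (ASCII 65):
print(run_bf('+' * 65 + '.'))               # naive: 65 increments, then print
print(run_bf('++++++[>+++++++++++<-]>-.'))  # loop: 6 * 11 - 1 = 65, then print
```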

I won’t write the results here as I want to reveal it all tomorrow (it’s not that interesting, but I guess it kind of is, depending on your interests); also I’m too tired to create tables for the results. I would say the results were quite surprising in a way, though in another way I wasn’t sure what to expect. Either way, I’m quite satisfied with everything, especially since we only had two and a half months for it.

That’s all for today. I spent roughly four hours finishing my presentation because I also wanted to make some visualisations.

TL;DR

GANs!!! Scary stuff, yet very awesome.

Daily Deutsch

Ich kann nicht heute. Ich habe schon so viel Deutsch gesprochen. (I can’t today. I’ve already spoken so much German.)

Written by: Deirdre Bringas

Deirdre is a self-proclaimed canine specialist by day and software engineer by night. She is a Boston Celtics fan despite living in San Diego most of her life, and listens to Earth, Wind, and Fire and Bruce Springsteen every day.