

GANs are an interesting idea that were first introduced in 2014 by a group of researchers at the University of Montreal led by Ian Goodfellow (now at OpenAI). The main idea behind a GAN is to have two competing neural network models. One takes noise as input and generates samples (and so is called the generator). The other model (called the discriminator) receives samples from both the generator and the training data, and has to be able to distinguish between the two sources.
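To make the two-network setup concrete, here is a minimal NumPy sketch of the forward passes and losses (the post's actual example uses TensorFlow). The network sizes, the toy "training data" distribution, and all parameter values here are illustrative assumptions, not the post's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny one-hidden-layer networks (sizes chosen for illustration).
G_W1, G_b1 = rng.normal(size=(1, 8)) * 0.1, np.zeros(8)
G_W2, G_b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)
D_W1, D_b1 = rng.normal(size=(1, 8)) * 0.1, np.zeros(8)
D_W2, D_b2 = rng.normal(size=(8, 1)) * 0.1, np.zeros(1)

def generator(z):
    # Maps input noise z to a point in data space.
    h = np.tanh(z @ G_W1 + G_b1)
    return h @ G_W2 + G_b2

def discriminator(x):
    # Outputs the estimated probability that x came from the training data.
    h = np.tanh(x @ D_W1 + D_b1)
    return 1.0 / (1.0 + np.exp(-(h @ D_W2 + D_b2)))

batch = 16
real = rng.normal(loc=4.0, scale=0.5, size=(batch, 1))  # stand-in training data
noise = rng.uniform(-1, 1, size=(batch, 1))
fake = generator(noise)

# Discriminator objective: label real samples 1 and generated samples 0.
d_loss = -np.mean(np.log(discriminator(real)) + np.log(1 - discriminator(fake)))
# Generator objective: make the discriminator output 1 on generated samples.
g_loss = -np.mean(np.log(discriminator(fake)))
```

Training alternates between updating the discriminator to shrink `d_loss` and updating the generator to shrink `g_loss`, which is the competing-objectives game described above.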

In probabilistic terms, they directly learn the conditional distribution P(y|x). A generative model, by contrast, tries to learn the joint probability of the input data and labels simultaneously, i.e. P(x, y). This can be converted to P(y|x) for classification via Bayes' rule, but the generative ability could be used for something else as well, such as creating likely new (x, y) samples. Both types of models are useful, but generative models have one interesting advantage over discriminative models - they have the potential to understand and explain the underlying structure of the input data even when there are no labels. This is very desirable when working on data modelling problems in the real world, as unlabelled data is of course abundant, but getting labelled data is often expensive at best and impractical at worst.
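The joint-vs-conditional distinction can be shown with a tiny numerical example. The joint table below is a made-up toy distribution, not anything from the post; it simply illustrates the Bayes-rule conversion P(y|x) = P(x, y) / P(x) and why only the joint supports sampling new (x, y) pairs:

```python
import numpy as np

# Hypothetical joint distribution P(x, y) over 3 input values and 2 labels,
# as a generative model might learn it (rows index x, columns index y).
joint = np.array([[0.10, 0.20],
                  [0.30, 0.05],
                  [0.15, 0.20]])
assert np.isclose(joint.sum(), 1.0)

# Bayes' rule: P(y|x) = P(x, y) / P(x), where P(x) = sum over y of P(x, y).
p_x = joint.sum(axis=1, keepdims=True)
cond = joint / p_x          # each row now sums to 1, giving P(y|x)

# Sampling a likely new (x, y) pair uses the joint directly -- something a
# discriminative model that only knows P(y|x) cannot do on its own.
rng = np.random.default_rng(0)
flat_idx = rng.choice(joint.size, p=joint.ravel())
x, y = divmod(flat_idx, joint.shape[1])
```

For classification, only the per-row conditional `cond` is needed; the generative model pays the cost of modelling P(x) as well, which is exactly what lets it exploit unlabelled data.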

There has been a large resurgence of interest in generative models recently (see this blog post by OpenAI for example). These are models that can learn to create data that is similar to data that we give them. The intuition behind this is that if we can get a model to write high-quality news articles for example, then it must have also learned a lot about news articles in general. Or in other words, the model should also have a good internal representation of news articles. We can then hopefully use this representation to help us with other related tasks, such as classifying news articles by topic. Actually training models to create data like this is not easy, but in recent years a number of methods have started to work quite well. One such promising approach is using Generative Adversarial Networks (GANs). The prominent deep learning researcher and director of AI research at Facebook, Yann LeCun, recently cited GANs as being one of the most important new developments in deep learning: "There are many interesting recent developments in deep learning. The most important one, in my opinion, is adversarial training (also called GAN for Generative Adversarial Networks). This, and the variations that are now being proposed, is the most interesting idea in the last 10 years in ML, in my opinion." - Yann LeCun. The rest of this post will describe the GAN formulation in a bit more detail, and provide a brief example (with code in TensorFlow) of using a GAN to solve a toy problem.
