Apologies for the absence - it's been an odd couple of weeks. In other news, I'll be at NIPS tomorrow!
A really good article about all the recent hype around machines beating doctors. The author is a medical doctor with a solid machine learning background, so he can speak credibly on both fields. The key takeaway is to be skeptical of such headlines: the tasks being highlighted often aren't tasks that doctors actually spend their time on, and the results often aren't as novel or certain as they first look. He does end the article with a twist, though: Google just published a paper in a medical journal that passes all his tests. Dammit, Google!
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation - Google Research
by Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean
Another paper from Google on Neural Machine Translation. Instead of translating from one language to another (and learning a new model for each language pair), this time they're building a sort of "universal" translator: a single model that allows arbitrary translations between any pair of languages it was trained on. Calling it "zero-shot" translation is a bit of an overstatement, but it's still pretty impressive - it means the model is able to translate between language pairs it hasn't explicitly been trained on, e.g. translating English→Spanish having only been trained on English→French and French→Spanish corpora.
I found the way they handled the multi-language translation a little odd. The target language is specified by a "token" shown before the start of the source sentence, which is reasonable, but what's weird is that nothing tells the model what language it's translating from - it's just supposed to figure that out on its own. The authors claim this makes for a simpler model and allows better "code-switching", but that still sounds a little odd to me. In any case, I'm not all that familiar with the benchmark test sets for translation models, so I'll leave this up to the experts.
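The target-token trick is simple enough to sketch. This is a hypothetical illustration (the token format and function name are my own, not from the paper): the desired output language is marked by an artificial token prepended to the source text, and the source language is never specified anywhere.

```python
# Sketch of the target-language-token mechanism (hypothetical token format):
# the model sees only the source text plus a token naming the OUTPUT language;
# it must infer the input language on its own.

def prepare_example(source_sentence: str, target_lang: str) -> str:
    """Prepend a target-language token (e.g. '<2es>') to the source text."""
    return f"<2{target_lang}> {source_sentence}"

# The same single model can then be asked for any target language it saw in training:
print(prepare_example("Hello, how are you?", "es"))   # -> "<2es> Hello, how are you?"
print(prepare_example("Bonjour tout le monde", "en"))  # -> "<2en> Bonjour tout le monde"
```

Since the input carries no source-language marker, mixed-language input ("code-switching") needs no special handling, which is presumably the simplification the authors have in mind.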
ICLR 2017 Discoveries
by Amund Tveit
Remember how in the last update I was saying that I didn't have enough time to look at ICLR 2017 papers? Turns out this guy went ahead and did it for you (me). He's nicely categorized a whole set of ICLR papers by topic, so this will be a useful quick index to peruse if you're (I'm) interested in any one topic. He even has a Deep Learning newsletter! How dare he.
Google rolled out a handful of small, fun AI/machine learning projects for the Internet to play with! One pretty fun one is Quick, Draw!, where you're asked to draw something and an AI tries to guess what you drew. To be honest, I'm not entirely sure what this collection is supposed to be: it's an odd mix between a showcase and a grab bag of random fun projects. Some of them are interactive on the page, while others require you to download and run code. Weird, but worth checking out.
New Google research on generating super-resolution images, i.e. upscaling images without losing detail. There have been quite a number of works using neural networks to improve the fine-grained quality of super-resolution images (my favorite being this funny one here), but I guess this is Google's implementation with a bunch of enhancements. Note that the paper has been on arXiv for several months. The general idea is that since image-recognition ConvNets already learn a "vocabulary" of very low-level image features, it's intuitive to reuse that information to fill in fine detail when expanding an image. The research blog doesn't say where it's being used, but I wouldn't be surprised if it shows up in something like PhotoScan, a Google application that uses your phone to scan photos at high resolution.
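To make that idea concrete: one common pattern in neural super-resolution (a generic sketch, not necessarily Google's exact architecture) is to upscale coarsely first and then have a ConvNet predict only the missing high-frequency detail on top. The function and the stand-in "network" below are my own illustration.

```python
import numpy as np

def upscale_with_residual(img: np.ndarray, predict_residual) -> np.ndarray:
    """Blocky 2x nearest-neighbor upscale, then add a network-predicted detail residual."""
    coarse = img.repeat(2, axis=0).repeat(2, axis=1)  # naive 2x upscale, no new detail
    return coarse + predict_residual(coarse)          # ConvNet fills in fine detail

# Stand-in for a trained ConvNet: here it predicts zero detail everywhere,
# so the output is just the blocky upscale.
dummy_net = lambda x: np.zeros_like(x)

img = np.arange(4.0).reshape(2, 2)
out = upscale_with_residual(img, dummy_net)
print(out.shape)  # (4, 4)
```

The appeal of the residual formulation is that the network only has to learn the texture/edge detail, not reproduce the whole image, which is where those learned low-level feature "vocabularies" pay off.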
Deep Learning Book
by Ian Goodfellow and Yoshua Bengio and Aaron Courville
The Deep Learning Book is out! Ian Goodfellow and Yoshua Bengio are huge names in the Deep Learning space, and this is probably the first comprehensive academic treatment of the field. The contents of the book will remain available for free online (and have been for over a year now). I would've loved to have had it autographed at NIPS...