MLML - NIPS 2016 Update #2

Jason Phang, Thu 08 December 2016, Machine learning mailing list

deep-learning, machine-learning, mlml

We're down to day 4 of NIPS and it's been a wild, wild ride. There's been a wealth of new ideas and proposals, in both the core NIPS tracks and the poster sessions. Unfortunately, I haven't had a chance to dig into any of the new papers, but there's been plenty to talk about anyway.


Founding Uber AI Labs
     by Uber
Uber acquired Geometric Intelligence to create Uber AI Labs. To be honest, I'm not familiar with the work done by Geometric Intelligence and I'm a little surprised that Uber didn't already have an AI lab (they're also notably absent at NIPS), but this is a natural and obvious move for them. I would expect them to significantly expand this team in the coming months.

TorchCraft - arXiv
     by Facebook
I actually missed this from Monday - this was the third big announcement of reinforcement learning environments. Unfortunately, it's kind of hard to stand out on the same day that Universe was announced (funny tweet). In this case, we have Facebook releasing a library that ties together StarCraft and Torch, Facebook's primary deep learning library. There should be no shortage now of environments for reinforcement learning. It's also interesting to see how Blizzard's push to establish the StarCraft games as the reinforcement learning environment is playing out across the competing research labs - StarCraft may yet become the ImageNet of reinforcement learning. (There may also be something to be said about training our nascent AIs on what are effectively war simulations.)

Apple to Start Publishing AI Research to Hasten Deep Learning
     by Apple
Apple's finally publishing AI research! They've been conspicuously missing among the big tech firms in showing off their AI research, but now they've finally joined the club. I've mentioned before that we're all incredibly fortunate that very early on, the academic Machine Learning community decided to democratize all their research, and we've seen that play out very nicely with all the heavyweight tech firms. In the case of Apple, I'm guessing most of their research will have to do with Siri.

Amazon Go
     by Amazon
Amazon announced Amazon Go, a new kind of retail store with no checkout - you simply walk out with whatever you want to purchase, and a combination of image recognition, machine learning algorithms, and various sensors determines what you just bought. There don't seem to be any further details about how exactly any of this works, but I suspect we'll be seeing more technical specifics in time.

"Major Advancements in Deep Learning in 2016"
     by Tryolabs
Here's a highly personal opinion: I don't like this article. Unfortunately it's getting sent around, so I thought I should address it. It looks like a fairly arbitrary selection of "things that look cool". Somehow Generative Adversarial Networks, an idea published in 2014, get a good chunk of the article devoted just to explaining them, but there's nothing about reinforcement learning. The newer applications of GANs are undoubtedly interesting and novel, but were clearly chosen for their pretty pictures. There's also nothing about meta-learning, WaveNet, differentiable neural computers, Layer Normalization, Value Iteration Networks, external memory, Swapout, and so on. The remaining examples are just as arbitrarily chosen, with no justification as to why they are more major than other very notable omissions. arXiv was launched in 1991, and GitXiv was launched in 2015. Of the listed Deep Learning toolkits, only CNTK was released in 2016.

I don't mean to disparage the writers of the article - I just don't think the article is representative of the "Major Advancements in Deep Learning in 2016".

Google's Embedding Projector: a tool for visualizing high dimensional data - Blog - arXiv - Documentation
     by Google
Google's hosting and open-sourcing a tool for projecting high-dimensional data into 2- or 3-dimensional embeddings. Strictly speaking, the algorithms here aren't new (PCA and t-SNE), but it is nice that they're hosting the tool, so a user wouldn't have to install TensorFlow or scikit-learn to do some funky low-dimensional projection of their data. Also, it looks really cool.
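For reference, the PCA half of what the tool does can be sketched locally in a few lines of numpy - a minimal sketch under my own naming (`pca_project` is not part of Google's tool), using SVD on centered data:

```python
import numpy as np

def pca_project(X, n_components=2):
    # Center the data so principal components pass through the mean
    Xc = X - X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions,
    # sorted by decreasing explained variance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Project onto the top components to get the low-dimensional embedding
    return Xc @ Vt[:n_components].T

# Example: embed 100 points from 50 dimensions down to 2
X = np.random.RandomState(0).randn(100, 50)
emb = pca_project(X)
print(emb.shape)  # (100, 2)
```

t-SNE, the other projection the tool offers, is considerably more involved (it optimizes a non-convex objective over pairwise similarities), which is part of why a hosted, interactive version is handy.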

Seen at NIPS

Keynote Address: Predictive Learning
     by Yann LeCun
We're still waiting for the actual videos of the NIPS talks to be uploaded, but in the meantime we're starting to see slides stream in. This is the NIPS keynote address by Yann LeCun, founding director of Facebook AI Research and one of the holy trinity of Deep Learning.

Learning to Learn by Gradient Descent by Gradient Descent - GitHub
     by Marcin Andrychowicz, Misha Denil, Sergio Gómez, Matthew W. Hoffman, David Pfau, Tom Schaul, Nando de Freitas (DeepMind)
Probably the winner of Best-Named Paper at NIPS, this is another work among many in "meta-learning", with deep learning algorithms learning to improve their own learning capabilities. Notably, DeepMind also released usable code for this algorithm with the paper - usually there is at least some lag time between the release of a paper and a clean version of the code for public consumption. Nando de Freitas also gave a really nice talk on Thursday about the need for learned learning algorithms, rather than hand-engineered learning/descent algorithms - I'll be looking out for the slides for that one as well.
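The core idea can be caricatured in a few lines - this is only a toy sketch, not the paper's method (there, an LSTM produces the update from the gradient, and its parameters are trained on the optimizee's loss): replace the hand-designed update rule with a parameterized one whose parameters `phi` (my hypothetical stand-in) could themselves be learned by gradient descent - hence "gradient descent by gradient descent".

```python
import numpy as np

# Hand-engineered update rule: plain SGD with a fixed step size
def sgd_step(theta, grad, lr=0.1):
    return theta - lr * grad

# Toy "learned" update rule: the step is a function with its own
# parameters phi (here just a per-coordinate step size), which in
# principle could be trained on the optimizee's loss
def learned_step(theta, grad, phi):
    return theta - phi * grad

# Optimizee: f(theta) = ||theta||^2, so grad f(theta) = 2 * theta
theta = np.array([3.0, -2.0])
phi = np.array([0.4, 0.4])  # stand-in for learned optimizer parameters
for _ in range(20):
    theta = learned_step(theta, 2 * theta, phi)
# theta has been driven close to the minimum at the origin
```

The real contribution, of course, is that the update function is a recurrent network with state, so it can implement behaviors like momentum or per-problem step-size adaptation without anyone hand-coding them.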

Research from Big Tech

Finally, I want to give a special shout-out to the tech firms that are sponsoring and actively, openly publishing their latest research. Each of the big players has pushed out a ton of research for NIPS alone, most notably DeepMind, which is embarrassing everyone else with the sheer volume of its contributions. Definitely at least open the following pages to take a look at what these firms are doing to push our understanding of machine learning forward.

     Google DeepMind: Part 1 - Part 2 - Part 3