Machine Learning Mailing List - Issue 1

Jason Phang, Tue 01 November 2016, Machine learning mailing list

deep-learning, machine-learning, mlml

Here're some cool things I saw over the last week:

Image Synthesis from Yahoo's open_nsfw
by Gabriel Goh
Like many big websites, Yahoo! has developed an algorithm for classifying NSFW image content on its pages. This project flips it around and uses that network to generate NSFW images, kind of like DeepDream. Fascinating and also vaguely horrifying. NSFW, of course.

Generative Visual Manipulation on the Natural Image Manifold - Github - 2MP
by Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, Alexei A. Efros (UC Berkeley and Adobe)
Really cool research by people at Berkeley and Adobe. They use a combination of ConvNets and GANs to create an "intelligent" image-editing tool. The motivation is that image-editing tools are powerful, but a novice user will usually make terrible edits, moving the image outside the "space" of reasonable-looking pictures. Since neural networks are pretty good at image recognition and have some knowledge of low-level image features, we can use them to inform our image edits. They exploit this with a GAN: one network modifies the picture subject to user-supplied constraints (the desired edit), while another discriminates between realistic-looking and unrealistic-looking pictures. Pushing the image back and forth between these two networks leads to fairly realistic-looking results. Also a shout-out to Two-Minute Papers, a YouTube channel that condenses papers into two-minute (actually often more like five-minute) videos.
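The core idea, that an edit should keep the image on the generator's learned "natural image manifold", can be caricatured as an optimization over the latent code. This is a toy numpy sketch of my reading, not the authors' code: `generator` and `edit_loss` are placeholder callables, and a finite-difference gradient stands in for backprop.

```python
import numpy as np

def project_edit(z, generator, edit_loss, steps=100, lr=0.05, eps=1e-4):
    """Nudge latent code z so that generator(z) satisfies the user's edit.

    Because z only ever passes through the generator, the output stays on
    the generator's manifold of plausible images (the paper's key point).
    """
    for _ in range(steps):
        base = edit_loss(generator(z))
        grad = np.zeros_like(z)
        for i in range(len(z)):  # crude forward-difference gradient w.r.t. z
            zp = z.copy()
            zp[i] += eps
            grad[i] = (edit_loss(generator(zp)) - base) / eps
        z = z - lr * grad  # gradient step toward satisfying the edit
    return z
```

In the real system the constraint comes from brush strokes and the realism pressure from a trained discriminator; here both collapse into a single `edit_loss` for illustration.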

Which whale is it, anyway? Face recognition for right whales using deep learning
This is a write-up on a winning Kaggle entry for identifying right whales from images. The constraints are fairly challenging, with a dataset of only 4000+ images spread over hundreds of classes. (Some whale classes have only a single observation!) They walk through their multi-step approach: first training a network to locate and orient the whale's head, and then training another to classify the whale. A good example of how research approaches may not always be clean and simple, and may require multiple levels of feature engineering.
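The multi-stage structure they describe can be sketched as a simple chain of callables. The function names below are hypothetical, just to make the pipeline shape concrete; this is not the winners' actual code.

```python
def identify_whale(image, head_locator, aligner, classifier):
    """Two-stage pipeline as described in the write-up (illustrative names).

    Stage 1 localizes and normalizes the head so that Stage 2 sees
    consistent inputs, which matters when some classes have one example.
    """
    bbox = head_locator(image)       # stage 1a: find the whale's head
    aligned = aligner(image, bbox)   # stage 1b: crop and orient the head
    return classifier(aligned)       # stage 2: predict the whale's identity
```

The point of splitting it up is that each stage can be trained (and debugged) separately, which is the "multiple levels of feature engineering" mentioned above.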

How to Use t-SNE Effectively
Informative, visual, and highly interactive article from researchers at Google on how to effectively use t-SNE, a dimension-reduction technique that projects high-dimensional data into 2 or 3 dimensions for visualization.
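For reference, running t-SNE is a one-liner with scikit-learn's `TSNE`; the article's point is that knobs like `perplexity` (roughly, the effective number of neighbors) can completely change what the plot looks like, so a sketch like this should always be re-run at several settings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.RandomState(0)
# Two well-separated Gaussian clusters in 50 dimensions.
X = np.vstack([rng.normal(0, 1, (100, 50)),
               rng.normal(5, 1, (100, 50))])

# Project to 2D; try several perplexity values before trusting the picture.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
print(embedding.shape)  # (200, 2)
```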

TWiML&AI: Talk #8 - Diogo Almeida
by TWiML&AI
I thought this week's TWiML&AI episode was particularly good. Diogo Almeida, a professor at NYU Abu Dhabi, walks us through his experience getting 1st place in a Kaggle competition, and his in-the-trenches experience with Deep Learning: balancing theoretical and empirical issues, overfitting to canonical data sets such as CIFAR-10 and ImageNet, and machine learning tooling. He also doesn't shy away from technical terms in this interview, so the level of discussion is higher than usual. Definitely worth a listen.

Deep Learning with Differential Privacy
by Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, Li Zhang (OpenAI and Google)
Differential privacy is the concept of preserving the privacy of individuals in a dataset while still allowing a researcher to run informative queries on that data. This has become increasingly important given the "record-everything" big-data paradigm, the centralization of data in a few big tech companies, and heightened concerns about individual privacy. One simple approach is to add noise to your query / aggregation function proportional to the sensitivity of that function (how much the output changes for a single change in input). This paper considers how differential privacy can be applied to Deep Learning. The main change is to clip per-observation gradients and add noise at the parameter-update (in this case, stochastic gradient descent) step, while also keeping track of the cumulative loss of privacy via a "privacy accountant" (since the learning algorithm is iterative). Performance takes a smaller hit than you might expect given that we are adding noise to the learning process, though the paper notes greater difficulty with convolutional networks than with regular feed-forward neural networks.
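The modified update step (clip each per-example gradient, average, add Gaussian noise) can be sketched in a few lines of numpy. This is a hedged reading of the mechanism, not the paper's implementation; the names `C` (clipping norm), `sigma` (noise multiplier), and `lr` are illustrative.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, C=1.0, sigma=1.1, lr=0.1,
                rng=np.random.RandomState(0)):
    """One noisy SGD step in the spirit of the paper's DP-SGD.

    Each example's gradient is clipped to L2 norm at most C, so no single
    observation can move the parameters too far; Gaussian noise scaled to
    C then masks any remaining individual contribution.
    """
    clipped = []
    for g in per_example_grads:
        norm = max(np.linalg.norm(g), 1e-12)
        clipped.append(g * min(1.0, C / norm))  # clip to L2 norm <= C
    g_bar = np.mean(clipped, axis=0)
    # Noise on the averaged gradient: std sigma * C / (lot size).
    noise = rng.normal(0.0, sigma * C / len(per_example_grads),
                       size=g_bar.shape)
    return params - lr * (g_bar + noise)
```

The "privacy accountant" the paper describes is separate bookkeeping that tracks how much privacy budget each such noisy step spends; it is not sketched here.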