Deep Learning How I Did It: Merck 1st place interview

Posted on November 1, 2012 by George Dahl

What was your background prior to entering this challenge?

We are a team of computer science and statistics academics. Ruslan Salakhutdinov and Geoff Hinton are professors at the University of Toronto. George Dahl and Navdeep Jaitly are Ph.D. students working with Professor Hinton. Christopher “Gomez” Jordan-Squire is in the mathematics Ph.D. program at the University of Washington, studying (constrained) optimization applied to statistics and machine learning.

With the exception of Chris, whose research interests are somewhat different, we are highly active researchers in the burgeoning subfield of machine learning known as deep learning, an area revived by Professor Hinton in 2006. George and Navdeep, along with collaborators in academia and industry, brought deep learning techniques to automatic speech recognition. Systems using these techniques are being commercialized by companies around the world, including Microsoft, IBM, and Google.

What made you decide to enter?

We wanted to show the Kaggle community the effectiveness of neural networks that use the latest techniques from the academic machine learning community, even when used on problems with relatively scarce data, such as the one from this competition. Neural nets similar to the ones we used have recently demonstrated a lot of success in computer vision, speech recognition, and other application domains.

What preprocessing and supervised learning methods did you use?

Since our goal was to demonstrate the power of our models, we did no feature engineering and only minimal preprocessing. The only preprocessing we did was occasionally, for some models, to log-transform each individual input feature/covariate. Whenever possible, we prefer to learn features rather than engineer them. This preference probably gives us a disadvantage relative to other Kaggle competitors who have more practice doing effective feature engineering. In this case, however, it worked out well. We probably should have explored more feature engineering and preprocessing possibilities since they might have given us a better solution.
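For concreteness, here is a minimal sketch of the kind of per-feature log transform described above; the use of log1p (which keeps zero-valued features at zero) and the synthetic count data are assumptions, not the team's exact code.

```python
# Minimal sketch of per-feature log transformation, assuming non-negative,
# count-like input features; log1p keeps zeros at zero. Illustrative only.
import numpy as np

def log_transform_features(X):
    """Apply log(1 + x) independently to every input feature/covariate."""
    return np.log1p(X)

# Hypothetical usage on a random non-negative feature matrix.
X = np.random.poisson(lam=3.0, size=(100, 20)).astype(float)
X_logged = log_transform_features(X)
```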

As far as supervised learning goes, our solution had three essential components: single-task neural networks, multi-task neural networks, and Gaussian process regression. The neural nets typically had multiple hidden layers, used rectified linear hidden units, and used “dropout” to prevent overfitting. No random forests were harmed (or used) in the creation of our solution. We used simple, greedy, equally-weighted averaging of these three basic model types. At the very end we began experimenting with gradient boosted decision-tree ensembles to hedge our solution against what we believed other competitors would be using and improve our averages a bit. We didn’t have a lot of time to explore these models, but they seemed to make very different predictions from our other models and were thus more useful in our averages than their often weaker individual performances would suggest. For similar reasons, we suspect that averaging our models with the models from other top teams could improve performance quite a bit.
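As an illustration of the "simple, greedy, equally-weighted averaging" mentioned above, the sketch below greedily adds to the blend whichever model most reduces validation RMSE. The selection criterion and the stopping rule are assumptions, not the team's exact procedure.

```python
# Sketch of greedy, equally-weighted model averaging. The RMSE criterion and
# the "stop when no model improves the blend" rule are assumptions.
import numpy as np

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def greedy_equal_weight_average(preds, y_val):
    """preds: dict mapping model name -> validation predictions (1-D arrays).

    Returns the list of model names whose equally-weighted average achieves
    the lowest validation RMSE found by greedy forward selection.
    """
    chosen, best_err = [], np.inf
    while True:
        best_candidate = None
        for name, p in preds.items():
            if name in chosen:
                continue
            blend = np.mean([preds[c] for c in chosen] + [p], axis=0)
            err = rmse(y_val, blend)
            if err < best_err:
                best_err, best_candidate = err, name
        if best_candidate is None:
            return chosen
        chosen.append(best_candidate)

# Hypothetical usage (model names are illustrative):
# preds = {"single_task_net": p1, "multi_task_net": p2, "gp": p3}
# selected = greedy_equal_weight_average(preds, y_val)
# final_prediction = np.mean([test_preds[name] for name in selected], axis=0)
```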

What was your most important insight into the data?

Our single most important insight was that the similarity between the fifteen tasks could be exploited by a single neural network trained on the inputs from all tasks, with an output layer containing fifteen different output units. This architecture allows the network to reuse features it has learned in multiple tasks and share statistical strength between tasks. Since Merck is presumably interested in more than just the fifteen molecular targets in the competition data, combining additional targets should yield even greater benefits.
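A minimal numpy sketch of that multi-task architecture is shown below: a shared rectified-linear hidden layer feeds fifteen task-specific output units, and each task's squared error is computed only on the molecules that have a label for it. The layer sizes, the single hidden layer, and the masked loss are illustrative assumptions rather than the team's exact setup.

```python
# Sketch of a multi-task net: shared ReLU hidden layer, fifteen output units.
# Sizes and the masked squared-error loss are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_hidden, n_tasks = 1000, 500, 15

W1 = rng.normal(0.0, 0.01, (n_inputs, n_hidden))  # shared hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, n_tasks))   # one output unit per task
b2 = np.zeros(n_tasks)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)              # rectified linear units
    return h @ W2 + b2                            # fifteen regression outputs

def masked_squared_error(Y_pred, Y_true, mask):
    """Only molecules with a measurement for a given task contribute to it."""
    return np.sum(mask * (Y_pred - Y_true) ** 2) / np.sum(mask)
```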

Were you surprised by any of your insights?

We were somewhat surprised that using ridge regression for model averaging did not provide any detectable improvement over simple equally-weighted averaging.
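The comparison can be sketched as follows: fit ridge regression weights on held-out predictions from the base models and compare against the plain equally-weighted mean. The synthetic predictions and the regularization strength below are assumptions, and in practice the two blends would be scored on a separate held-out fold.

```python
# Sketch comparing ridge-learned blend weights to an equal-weighted average.
# The synthetic validation data and alpha=1.0 are assumptions; a real
# comparison would score both blends on a separate held-out fold.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
y_val = rng.normal(size=200)
# Each column is one base model's predictions on the validation set.
P = np.column_stack([y_val + rng.normal(scale=0.3, size=200) for _ in range(3)])

def rmse(y, p):
    return np.sqrt(np.mean((y - p) ** 2))

equal_blend = P.mean(axis=1)                      # equally-weighted average
ridge_blend = Ridge(alpha=1.0).fit(P, y_val).predict(P)

print("equal-weight RMSE:", rmse(y_val, equal_blend))
print("ridge-blend  RMSE:", rmse(y_val, ridge_blend))
```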

Which tools did you use?

We used Matlab code released by Carl Rasmussen and Chris Williams to accompany their Gaussian processes book. For the neural nets we used a lot of our own research code (in Python) and wrote some new neural net code specifically for the competition. Our research code is designed to run on GPUs using CUDA. The GPU component uses Tijmen Tieleman’s gnumpy library. Gnumpy runs on top of Volodymyr Mnih’s cudamat library. We also used scikits.learn for a variety of utility functions, our last minute experiments with gradient boosted decision trees, and our ill-fated attempts at more sophisticated model averaging.
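For the gradient boosted decision tree experiments mentioned above, here is a hedged sketch using scikit-learn's current GradientBoostingRegressor API; the hyperparameters and synthetic data are assumptions, not the values the team actually tried.

```python
# Hedged sketch of a gradient boosted decision tree regressor; hyperparameters
# and the synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 50))
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500)

gbdt = GradientBoostingRegressor(
    n_estimators=500,    # number of boosting stages
    learning_rate=0.05,  # shrinkage applied to each tree's contribution
    max_depth=4,         # depth of the individual regression trees
    subsample=0.8,       # stochastic gradient boosting
)
gbdt.fit(X, y)
gbdt_predictions = gbdt.predict(X)
```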

What have you taken away from this competition?

Our experience has confirmed our opinion that training procedures for deep neural networks have now reached a stage where they can outperform other methods on a variety of tasks, not just speech and vision. In the Netflix competition, the Toronto group publicized their novel use of restricted Boltzmann machines for collaborative filtering, and the winners used this method to create several of the models that were averaged to produce the winning solution. In this competition we decided not to share our neural network methods before the close of the competition, which may have helped us win.
