Deep learning with Tony Jebara, Director of Machine Learning Research at Netflix

Published June 23, 2016 | arvindl

Tony Jebara is a Professor of Computer Science at Columbia University and Director of Machine Learning Research at Netflix. His research intersects computer science and statistics to develop new frameworks for learning from data with applications in social networks, spatio-temporal data, vision and text.
Deep learning at Netflix

At the Deep Learning Summit in Boston in April 2016, Tony presented ‘Double-Cover Inference in Deep Belief Networks’. I caught up with him to hear more about his work at Netflix and his thoughts on recent advancements in deep learning.
Tell us more about your work as Director of Machine Learning Research at Netflix.
At Netflix we are inventing the future of Internet television and helping members across the world find videos to watch and enjoy. We help them make a selection from a catalog of thousands of titles. But we need to tailor recommendations to each user and each session within seconds and within a menu of 10 to 20 visible options. Achieving this relies on our recommendation system, which is really an ecosystem of many machine learning algorithms operating together. We are constantly working on improving these algorithms. We are also leveraging machine learning across all parts of Netflix: from deciding which new titles to add to our catalog to finding ways to stream videos more efficiently across the Internet.
What do you feel are the leading factors enabling recent advancements and uptake of deep learning?
The recent adoption of deep learning has been enabled by the confluence of several factors. Of course, bigger datasets and more computational power have been essential. But they triggered something more important: the freedom to try bigger and deeper models. Recent theoretical research by Choromanska et al. shows that models with more layers and more neurons per layer become resilient to the local optima that plague the optimization landscape. Bigger models find more reliable solutions during learning while small ones get stuck in bad solutions. So it’s been a chain reaction: bigger computation led to bigger data, then to bigger models, then to better optima, and finally to better performance.
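The intuition that bigger models escape bad local optima can be illustrated with a small experiment. The sketch below is my own illustration (in PyTorch, an assumption; the interview names no framework, and the task and function names here are hypothetical): it trains a narrow and a wide multilayer perceptron on the same synthetic regression problem from several random initializations and compares how the final training losses scatter across restarts.

```python
import torch
import torch.nn as nn

# Synthetic regression task: a fixed random "teacher" network provides targets.
torch.manual_seed(0)
X = torch.randn(512, 10)
teacher = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
with torch.no_grad():
    y = teacher(X)

def final_loss(width, seed, steps=2000):
    """Train a two-hidden-layer MLP from one random init; return final MSE."""
    torch.manual_seed(seed)
    model = nn.Sequential(
        nn.Linear(10, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 1),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

# Compare the spread of final losses across restarts: narrow vs. wide model.
for width in (2, 100):
    losses = [final_loss(width, seed) for seed in range(5)]
    print(f"width={width:>3}  min={min(losses):.4f}  max={max(losses):.4f}")
```

In this toy setting, the wide network’s final losses typically cluster near a low value across restarts, while the narrow network’s vary much more from run to run. This only illustrates the intuition; it is not a reproduction of the Choromanska et al. analysis.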
What are your thoughts on the recent surge of media interest surrounding deep learning?
The media interest certainly adds to the excitement. The surge in press coverage has often revolved around deep learning shattering AI milestones in areas such as game playing, computer vision and so on. But more practical progress is happening on business and commercial fronts, where deep learning and machine learning are permeating almost every component of the workplace. So, beyond exciting milestones in the media (such as beating a Go grandmaster), we are seeing a sustained groundswell in deep learning at companies all over the world.
How can larger corporations working on deep learning ensure that their work benefits others within this field?
Corporations are increasingly part of the conversation and now have a much stronger presence at leading conferences in machine learning and deep learning. They are not only providing funding and exhibit booths at the events but are also contributing papers and organizing workshops. Some are even releasing open-source software and systems (such as Google’s TensorFlow) which broaden the reach of deep learning to anyone in the world who wants to get involved.
What present or potential future applications of deep learning excite you most?
I’m excited to see how deep learning can help in recommendation, personalization and search. So far, we’ve seen deep learning solve tasks that humans are already good at, such as vision, speech recognition, gaming or natural language processing. But humans are notoriously bad at recommending content or items to their friends. We think aspirationally rather than realistically; we recommend a high-brow documentary that sounds intellectual rather than what our friends would really rather watch. I’m excited to see how deep learning can anticipate these biases and preferences and help us each optimize our entertainment, our disposable time and our everyday life in general.
What developments can we expect to see in deep learning in the next 5 years?
I expect the science behind deep learning to become much more rigorous from a theoretical perspective. Right now, we are building complicated models and getting great results without quite knowing why the systems work so well. The field is permeated with black-art intuitions and cryptic engineering know-how. However, in the next 5 years, we will have a better mathematical explanation of why deep learning works so well and what its limitations are. This theoretical understanding will be critical to enabling the next generation of breakthroughs in machine learning.

This interview originally appeared here. Republished with permission.