Author Archives: Sebastien Bubeck

2020

My latest post on this blog was on December 30th, 2019. It seems like a lifetime ago. The rate at which paradigm-shifting events have been happening in 2020 is staggering. And it might very well be that the worst … Continue reading

Posted in Uncategorized | Leave a comment

A decade of fun and learning

I started out this decade with the project of writing a survey of the multi-armed bandit literature, which I had read thoroughly during the graduate studies that I was about to finish. At the time we resisted the temptation to … Continue reading

Posted in Uncategorized | 5 Comments

Convex body chasing, Steiner point, Sellke point, and SODA 2020 best papers

Big congratulations to my former intern Mark Sellke, and to the CMU team (C. J. Argue, Anupam Gupta, Guru Guruganesh, and Ziye Tang) for jointly winning the best paper award at SODA 2020 (as well as the best student paper … Continue reading

Posted in Theoretical Computer Science | 2 Comments

Guest post by Julien Mairal: A Kernel Point of View on Convolutional Neural Networks, part II

This is a continuation of Julien Mairal‘s guest post on CNNs, see part I here. Stability to deformations of convolutional neural networks In their ICML paper Zhang et al. introduce a functional space for CNNs with one layer, by noticing … Continue reading

Posted in Machine learning | Leave a comment

Guest post by Julien Mairal: A Kernel Point of View on Convolutional Neural Networks, part I

I (n.b., Julien Mairal) have been interested in drawing links between neural networks and kernel methods for some time, and I am grateful to Sebastien for giving me the opportunity to say a few words about it on … Continue reading

Posted in Machine learning | 1 Comment

Optimal bound for stochastic bandits with corruption

Guest post by Mark Sellke. In the comments of the previous blog post we asked if the new viewpoint on best of both worlds can be used to get clean “interpolation” results. The context is as follows: in a STOC … Continue reading

Posted in Machine learning, Optimization, Theoretical Computer Science | Leave a comment

Amazing progress in adversarially robust stochastic multi-armed bandits

In this post I briefly discuss some recent stunning progress on robust bandits (for more background on bandits, see these two posts, part 1 and part 2; in particular, what is described below gives a solution to Open Problem 3 … Continue reading

Posted in Machine learning, Optimization, Theoretical Computer Science | 6 Comments

Nemirovski’s acceleration

I will describe here the very first (to my knowledge) acceleration algorithm for smooth convex optimization, which is due to Arkadi Nemirovski (dating back to the end of the 70’s). The algorithm relies on a 2-dimensional plane-search subroutine (which, in … Continue reading

Posted in Optimization | 8 Comments

A short proof for Nesterov’s momentum

Yesterday I posted the following picture on Twitter and it quickly became my most visible tweet ever (by far). I thought this would be a good opportunity to revisit the proof of Nesterov’s momentum, especially since as it … Continue reading

Posted in Optimization | 11 Comments

Remembering Michael

It has been a year since the tragic event of September 2017. We now know what happened, and it is a tremendously sad story of undiagnosed type 1 diabetes. This summer at MSR Michael was still very present in our … Continue reading

Posted in Uncategorized | Leave a comment