Big Learning Workshop: Algorithms, Systems, and Tools for Learning at Scale at NIPS 2011

Invited Talk: Randomized Smoothing for (Parallel) Stochastic Optimization by John Duchi

John Duchi is a PhD candidate in computer science at Berkeley, where he started in the fall of 2008. He works in the Statistical Artificial Intelligence Lab (SAIL) under the joint supervision of Mike Jordan and Martin Wainwright. John is currently supported by an NDSEG fellowship; starting next year he will be supported by a Facebook Fellowship, which Facebook has generously awarded him.

Abstract: By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates for stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for non-smooth optimization. Combining our techniques with recent work on decentralized optimization yields order-optimal parallel stochastic optimization algorithms. We give applications of our results to statistical machine learning problems, providing experimental results that demonstrate the effectiveness of our algorithms.
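To give a concrete feel for the smoothing idea behind the talk, here is a minimal sketch (not the authors' actual accelerated or parallel algorithm): a non-smooth objective f is replaced by its Gaussian smoothing f_mu(x) = E[f(x + mu*Z)] with Z ~ N(0, I), whose gradient can be estimated by averaging stochastic subgradients of f at randomly perturbed points; those estimates then drive an ordinary first-order method. The hinge-loss objective, the sample sizes, and the plain SGD loop below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def hinge_loss(w, X, y):
    """Non-smooth objective: average hinge loss on a mini-batch (illustrative choice)."""
    return np.maximum(0.0, 1.0 - y * (X @ w)).mean()

def hinge_subgrad(w, X, y):
    """One subgradient of the average hinge loss."""
    active = (y * (X @ w) < 1.0).astype(float)
    return -(active * y) @ X / len(y)

def smoothed_grad(w, X, y, mu=0.1, num_samples=10, rng=None):
    """Monte Carlo estimate of grad f_mu(w): average subgradients of f
    evaluated at Gaussian-perturbed points w + mu * Z."""
    rng = rng or np.random.default_rng(0)
    g = np.zeros_like(w)
    for _ in range(num_samples):
        z = rng.standard_normal(w.shape)
        g += hinge_subgrad(w + mu * z, X, y)
    return g / num_samples

# Toy usage: plain SGD on the smoothed objective (the talk instead uses an
# accelerated gradient method to get variance-optimal rates).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
w_true = rng.standard_normal(20)
y = np.sign(X @ w_true)

w = np.zeros(20)
for t in range(1, 201):
    idx = rng.integers(0, len(y), size=32)            # mini-batch
    g = smoothed_grad(w, X[idx], y[idx], mu=0.1)      # smoothed gradient estimate
    w -= (1.0 / np.sqrt(t)) * g                       # decaying step size
```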