A Google TechTalk, presented by Martin Jaggi, 2021/11/8

ABSTRACT: Federated Learning with Strange Gradients. Collaborative learning methods such as federated learning are enabling many promising new applications for machine learning while respecting users' privacy. In this talk, we discuss recent gradient-based methods, specifically in cases where the exchanged gradients violate the common unbiasedness assumption and differ from the gradients of our target objective. We address three applications: 1) federated learning in the realistic setting of heterogeneous data, 2) personalization of collaboratively learned models to each participant, and 3) learning with malicious or unreliable participants, in the sense of Byzantine-robust training. For these applications, we demonstrate that algorithms with rigorous convergence guarantees can still be obtained and are practically feasible.

About the Speaker: Martin Jaggi, EPFL
Martin Jaggi is a Tenure Track Assistant Professor at EPFL, heading the Machine Learning and Optimization Laboratory. Before that, he was a postdoctoral researcher at ETH Zurich, at the Simons Institute in Berkeley, and at École Polytechnique in Paris. He earned his PhD in Machine Learning and Optimization from ETH Zurich in 2011, and an MSc in Mathematics, also from ETH Zurich.

For more information about the workshop: https://events.withgoogle.com/2021-workshop-on-federated-learning-and-analytics/#content
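The abstract gives no algorithmic detail, but as a rough illustration of the Byzantine-robust setting it mentions, the sketch below contrasts plain averaging of client updates with a coordinate-wise median aggregator, one standard robust choice in the literature and not necessarily the method discussed in the talk. All data and client counts here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def mean_aggregate(updates):
    # FedAvg-style aggregation: average all client updates.
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    # Coordinate-wise median: a classic Byzantine-robust aggregator.
    return np.median(updates, axis=0)

# Simulate one round: 8 honest clients send noisy versions of the true
# gradient; 2 Byzantine clients send arbitrary (here: large, flipped) vectors.
true_grad = np.array([1.0, -2.0, 0.5])
honest = [true_grad + 0.1 * rng.standard_normal(3) for _ in range(8)]
byzantine = [-100.0 * true_grad for _ in range(2)]
updates = np.stack(honest + byzantine)

print("mean  :", mean_aggregate(updates))    # dragged far from true_grad
print("median:", median_aggregate(updates))  # stays close to true_grad

The mean can be moved arbitrarily by even a single malicious client, while the coordinate-wise median stays near the honest gradient as long as honest clients form a majority, which is the intuition behind Byzantine-robust training.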