A Google TechTalk, presented by Jonathan Ullman, 2021/06/18

ABSTRACT: Differential Privacy for ML Series 2021. In many statistical problems, incorporating priors can significantly improve performance. However, the use of prior knowledge in differentially private query release has remained underexplored, despite such priors commonly being available in the form of public datasets, such as previous US Census releases. With the goal of releasing statistics about a private dataset, we present PMW^Pub, which, unlike existing baselines, leverages public data drawn from a related distribution as prior information. We provide a theoretical analysis and an empirical evaluation on the American Community Survey (ACS) and ADULT datasets, showing that our method outperforms state-of-the-art methods. Furthermore, PMW^Pub scales well to high-dimensional data domains where running many existing methods would be computationally infeasible. We also undertake a theoretical study of synthetic data generation with public and private data, and show that, for many classes of queries, making a small fraction of the dataset public can provably improve accuracy by an arbitrarily large amount.

This talk is based on joint work with Raef Bassily, Albert Cheu, Terrance Liu, Shay Moran, Aleksandar Nikolov, Thomas Steinke, Giuseppe Vietri, and Zhiwei Steven Wu.

About the speaker: Jonathan Ullman is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University, and a member of the Cybersecurity & Privacy Institute. His research is about how to use data robustly, reliably, and responsibly, with a focus on data privacy and preventing false discovery in the empirical sciences. He earned his BSE from Princeton University in 2008 and his PhD from Harvard University in 2013. His work has been recognized with an NSF CAREER Award and a Google Research Award, and his teaching has been recognized with the Ruth and Joel Spira Outstanding Teacher Award.
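The abstract describes PMW^Pub as a private-multiplicative-weights-style mechanism whose prior comes from a public dataset. As a rough illustration of that idea (not the authors' exact algorithm), the sketch below maintains a weighted distribution over the rows of a public dataset, privately selects a high-error query with the exponential mechanism, measures it with Laplace noise, and applies a multiplicative-weights update. The function name, query representation (per-row 0/1 functions), and the naive privacy-budget split across rounds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pmw_pub_sketch(private_data, public_data, queries, epsilon, T):
    """Illustrative PMW-style mechanism whose synthetic distribution is
    supported on the rows of a public dataset. Details (budget split,
    update rule) are a hedged sketch, not the talk's exact method."""
    n = len(private_data)
    m = len(public_data)
    weights = np.ones(m) / m          # uniform prior over public rows
    eps_round = epsilon / (2 * T)     # naive composition across T rounds
    # True (sensitive) answers on the private data.
    true_ans = np.array([q(private_data).mean() for q in queries])
    for _ in range(T):
        # Answers of the current weighted public support.
        syn_ans = np.array([np.dot(weights, q(public_data)) for q in queries])
        errors = np.abs(true_ans - syn_ans)
        # Exponential mechanism: privately pick a high-error query.
        scores = eps_round * n * errors / 2
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        i = rng.choice(len(queries), p=probs)
        # Noisy measurement of the chosen query (Laplace mechanism).
        noisy = true_ans[i] + rng.laplace(scale=1.0 / (eps_round * n))
        # Multiplicative-weights update over the public support only.
        direction = np.sign(noisy - np.dot(weights, queries[i](public_data)))
        weights *= np.exp(direction * queries[i](public_data) / 2)
        weights /= weights.sum()
    return weights  # weighted public rows serve as the synthetic dataset
```

Restricting the support to public rows is what lets this style of method scale to high-dimensional domains: the update touches only `m` public rows rather than an exponentially large data universe.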