Differential privacy (DP) has been hailed as the gold standard of privacy-preserving data analysis: it provides strong privacy guarantees while still enabling the use of potentially sensitive data. Formally, DP gives a mathematically rigorous worst-case bound on the maximum amount of information that can be learned about an individual's data from the output of a computation. In the past two decades, the privacy community has developed DP algorithms that satisfy this privacy guarantee and allow for accurate data analysis across a wide variety of computational problems and application domains. We have also begun to see a number of high-profile deployments of DP systems in practice, both at large technology companies and at government agencies. Despite the promise and success of DP thus far, a number of critical challenges remain before DP can be easily deployed in practice, including: mapping the mathematical privacy guarantees onto protection against real-world threats, developing explanations of its guarantees and tradeoffs for non-technical users, integration with other privacy and security tools, preventing misuse, and more.
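To make the worst-case guarantee concrete, here is a minimal sketch of the Laplace mechanism, one of the canonical DP algorithms (this example is illustrative and not drawn from any specific deployment mentioned above): to release a count while satisfying epsilon-DP, add noise drawn from a Laplace distribution whose scale is the query's sensitivity divided by epsilon, so that changing any one individual's data shifts the output distribution by a factor of at most e^epsilon.

```python
import math
import random

def laplace_mechanism(true_count, epsilon, sensitivity=1.0):
    """Release a noisy count satisfying epsilon-differential privacy.

    Laplace noise with scale sensitivity/epsilon ensures that adding or
    removing any single individual's record changes the probability of
    any output by a multiplicative factor of at most e^epsilon.
    """
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) via the inverse-CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Smaller values of epsilon give stronger privacy but noisier answers; this accuracy-privacy tradeoff is exactly the kind of parameter choice that the challenges above (explaining guarantees to non-technical users, mapping them onto real threats) make difficult in practice.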