
Low Cost High Power Membership Inference Attacks

About this talk

A Google TechTalk, presented by Reza Shokri, 2024-04-17

ABSTRACT: Membership inference attacks (MIA) aim to detect whether a particular data point was used to train a machine learning model. Recent strong attacks have high computational costs and inconsistent performance under varying conditions, rendering them unreliable for practical privacy risk assessment. I will present RMIA, a novel, efficient, and robust membership inference attack algorithm that accurately differentiates between population data and the training data of a model, with minimal computational overhead. We achieve this through a robust statistical test that effectively leverages both reference models and reference data samples from the population. Our algorithm exhibits superior test power (true-positive rate) compared to all prior methods, even at extremely low false-positive rates (as low as 0). Under computation constraints, where only a limited number of pre-trained reference models (as few as 1) are available, and when we vary other elements of the attack, our method performs exceptionally well, unlike some prior attacks whose performance approaches random guessing. I will argue that MIA tests, as privacy auditing tools, must be stress-tested under a low computation budget, with few available reference models, and under changes to the data and models. A strong test is one that outperforms others in all of these practical scenarios, not only in ideal cases. RMIA lays the groundwork for practical yet accurate and reliable data privacy risk analysis of machine learning.

Reza Shokri (National University of Singapore)
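
The abstract describes the test only at a high level, so the following is a minimal sketch of what a pairwise test leveraging reference models and population samples could look like. It assumes model outputs are summarized as the probability assigned to each sample's true label and that a sample's marginal probability is estimated by averaging over reference models; the function name rmia_score, the gamma threshold, and these estimation choices are illustrative assumptions, not the speaker's exact algorithm.

```python
import numpy as np

def rmia_score(p_x_target, p_x_refs, p_z_target, p_z_refs, gamma=2.0):
    """Membership score for a candidate point x against population samples z.

    p_x_target : float          probability the target model assigns to x's true label
    p_x_refs   : (m,) array     probabilities m reference models assign to x
    p_z_target : (n,) array     target-model probabilities for n population samples z
    p_z_refs   : (m, n) array   reference-model probabilities for the same z
    gamma      : float          pairwise likelihood-ratio threshold (assumed hyperparameter)
    """
    # Estimate the marginal probability of x and of each z by averaging over
    # the reference models (one simple way to combine reference models and
    # population data, as the abstract suggests).
    pr_x = np.mean(p_x_refs)
    pr_z = np.mean(p_z_refs, axis=0)

    # Pairwise likelihood ratio of x against each population sample z:
    # how much more the target model "prefers" x over z, relative to the
    # reference models.
    lr = (p_x_target / pr_x) / (p_z_target / pr_z)

    # Score = fraction of population samples that x dominates; higher scores
    # indicate x is more likely to be a training-set member.
    return float(np.mean(lr > gamma))
```

A point would then be flagged as a member when its score exceeds a decision threshold, which sets the trade-off between the true-positive and false-positive rates discussed in the abstract.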
