A Google TechTalk, 2020/7/29, presented by Tom Goldstein, University of Maryland

ABSTRACT: Dataset poisoning is a security vulnerability in which a bad actor modifies the training data of a machine learning system in a way that lets them control its test-time behavior. In this talk, I discuss our recent work on "clean-label" data poisoning methods, in which poison images appear normal to a human and are labeled correctly. I present several ways to craft such poisoning attacks, and show that they can be made effective against black-box industrial systems, including Google AutoML.
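One well-known way to build a clean-label poison, from this same line of work, is feature collision: a correctly labeled base image is perturbed until its feature-space representation matches that of a chosen target image, so a model fine-tuned on the poisoned data misclassifies the target while human labelers see nothing wrong. The sketch below is illustrative only and not the talk's exact recipe; the feature extractor f, the tensors target and base, and the hyperparameters beta, lr, and steps are all assumptions.

```python
# Minimal sketch of a feature-collision ("clean-label") poison crafting loop.
# Assumptions (not from the talk): f is a frozen pretrained feature extractor
# (a torch.nn.Module), target/base are image tensors in [0, 1], and the
# hyperparameters below are placeholders.
import torch

def craft_poison(f, target, base, beta=0.1, lr=0.01, steps=500):
    """Perturb `base` so its features collide with `target`'s, while a
    pixel-space penalty keeps it visually close to `base`."""
    with torch.no_grad():
        target_feats = f(target)  # fixed embedding of the target image
    poison = base.clone().requires_grad_(True)
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Feature-space collision term + pixel-space fidelity term.
        loss = ((f(poison) - target_feats) ** 2).sum() \
               + beta * ((poison - base) ** 2).sum()
        loss.backward()
        opt.step()
        poison.data.clamp_(0, 1)  # keep a valid image
    return poison.detach()
```

The beta-weighted fidelity term is what makes the attack "clean-label": the crafted image stays close enough to the base that it is still labeled with the base class, yet its internal features pull the decision boundary toward the target.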