Machine learning is increasingly used in security-critical applications such as malware detection, face recognition, and autonomous driving. But can we trust machine learning? Unfortunately, the answer is no: learning methods are vulnerable to different types of attacks that thwart their secure application. Yet most research so far has focused on attacks in the feature space of machine learning.
In my talk, we will learn why we need to think beyond the feature space when reasoning about the security of machine learning. First, we should consider the problem space with real-world objects such as PDF files or malicious code; attacks there are realistic but require specialized techniques. Second, the mapping from problem space to feature space can introduce a considerable vulnerability into learning-based systems. Using the example of image scaling, we will examine how an adversary can precisely control the input to a learning algorithm. Third, we will learn that the feature space also has an inherent connection to the media space of digital watermarking.
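To make the second point concrete, here is a minimal, self-contained sketch of the idea behind an image-scaling attack. It is an illustration under simplifying assumptions, not the method presented in the talk: it uses a naive nearest-neighbour downscaler defined in the sketch itself (nn_downscale, scaling_attack, and the toy images are all hypothetical names introduced here), whereas practical attacks target the resampling routines of real imaging libraries.

```python
# Illustrative sketch (assumed setup, not the talk's exact method): a naive
# nearest-neighbour downscaler reads only a sparse grid of source pixels,
# so overwriting just those pixels swaps the downscaled result entirely
# while leaving the full-resolution image almost untouched.
import numpy as np

def nn_downscale(img, out_h, out_w):
    """Naive nearest-neighbour downscaling: sample one pixel per output cell."""
    h, w = img.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    return img[np.ix_(ys, xs)]

def scaling_attack(source, target_small):
    """Overwrite exactly the pixels that nn_downscale will read."""
    out_h, out_w = target_small.shape[:2]
    h, w = source.shape[:2]
    ys = np.arange(out_h) * h // out_h
    xs = np.arange(out_w) * w // out_w
    attacked = source.copy()
    attacked[np.ix_(ys, xs)] = target_small
    return attacked

# Demo: a white 512x512 "source" whose downscaled version becomes a
# black 32x32 "target" chosen by the attacker.
source = np.full((512, 512), 255, dtype=np.uint8)
target = np.zeros((32, 32), dtype=np.uint8)
attacked = scaling_attack(source, target)

assert np.array_equal(nn_downscale(attacked, 32, 32), target)
print((attacked != source).mean())  # -> 0.00390625, i.e. ~0.4% of pixels changed
```

Because the downscaler reads only about 0.4% of the source pixels here, the attacked image looks unchanged at full resolution, yet the learning algorithm, which sees only the downscaled version, receives an input entirely chosen by the attacker.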
Erwin Quiring is a postdoctoral researcher at the Ruhr University Bochum as part of Germany's Excellence Cluster CASA. His main research focus lies at the intersection of machine learning and security, with topics such as malware detection, deepfake detection, and adversarial learning.