Humans are susceptible to social engineering. Machines are susceptible to tampering. Machine learning is vulnerable to adversarial attacks. Researchers have successfully attacked deep learning malware classifiers, completely flipping their predictions while accessing only the output label the model returns for attacker-supplied samples. Moreover, we've seen attackers attempt to poison the training data for our ML models by sending fake telemetry, trying to fool the classifier into believing that a given set of malware samples is actually benign. How do we detect and protect against such attacks? Is there a way to make our models more robust to future attacks?
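The label-flipping poisoning described above can be illustrated with a toy experiment. This is a minimal sketch using scikit-learn on synthetic data; the dataset, model, and 30% flip rate are illustrative assumptions, not details of any production pipeline:

```python
# Hypothetical sketch of training-data poisoning via label flipping.
# The "telemetry" and classifier here are toy stand-ins (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy telemetry: feature vectors labeled benign (0) or malware (1).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker submits fake telemetry: 30% of malware samples relabeled benign.
y_poison = y_train.copy()
malware_idx = np.flatnonzero(y_train == 1)
flipped = rng.choice(malware_idx, size=int(0.3 * len(malware_idx)), replace=False)
y_poison[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poison)

# Recall on true malware tends to drop as the model absorbs the fake labels.
malware_mask = y_test == 1
print("clean malware recall:   ", clean.score(X_test[malware_mask], y_test[malware_mask]))
print("poisoned malware recall:", poisoned.score(X_test[malware_mask], y_test[malware_mask]))
```

Even a modest fraction of flipped labels shifts the decision boundary toward calling malware benign, which is exactly the outcome the attacker wants.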
We'll discuss several strategies for making machine learning models more tamper resilient, and compare the difficulty of tampering with cloud-based versus client-based models. We'll present research showing how single models are susceptible to tampering, and how techniques like stacked ensemble models can make them more resilient. We'll also cover the importance of diversity among base ML models and the technical details of optimizing them for different threat scenarios. Lastly, we'll describe suspected tampering activity we've witnessed in protection telemetry from over half a billion computers, and whether our mitigations worked.
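A stacked ensemble of diverse base models can be sketched in a few lines. This is an illustrative example, assuming scikit-learn's `StackingClassifier` on synthetic data; the specific base learners and meta-learner are assumptions, chosen only to show the pattern of combining dissimilar model families:

```python
# Hedged sketch of a stacked ensemble with diverse base models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Diverse base learners: an input crafted to fool one model family is
# less likely to fool all of them at once.
base_models = [
    ("forest", RandomForestClassifier(n_estimators=100, random_state=1)),
    ("svm", LinearSVC(dual=False)),
    ("nb", GaussianNB()),
]

# A meta-learner combines out-of-fold base predictions (cv=5), so no
# single base model's output dictates the final verdict.
stack = StackingClassifier(
    estimators=base_models,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_train, y_train)
print("stacked ensemble accuracy:", stack.score(X_test, y_test))
```

The design intuition is that an attacker probing the combined system's output label gets much weaker feedback about any individual base model, raising the cost of the black-box attacks described above.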
Holly has been in the security industry since 1997. She has held roles across many disciplines, including product and program management, incident response, communications, and, for the past few years, data science. She started at Microsoft in 2010 and is currently a Principal Research Manager on the Windows Defender Antivirus Research team. Her team of researchers and data scientists applies machine learning, automation, and other next-generation capabilities to malware detection.
Jugal Parikh has worked in security and machine learning for seven years. He enjoys solving complex security problems such as targeted attack detection, static and behavioral file- and script-based detection, and detecting adversarial attacks using machine learning. He is currently a Senior Data Scientist on Microsoft's Windows Defender Research team.
Randy Treit has worked in the antimalware and security fields since 2003. He is currently a Senior Security Researcher on the Windows Defender Research team.