OpenSource Name: yenchenlin/awesome-adversarial-machine-learning
OpenSource URL: https://github.com/yenchenlin/awesome-adversarial-machine-learning
⚠️ Deprecated
I no longer keep this list up to date with new papers, but it is still a good reference for getting started.
Awesome Adversarial Machine Learning
A curated list of awesome adversarial machine learning resources, inspired by awesome-computer-vision.
Blogs
- Breaking Linear Classifiers on ImageNet, A. Karpathy et al.
- Breaking things is easy, N. Papernot & I. Goodfellow et al.
- Attacking Machine Learning with Adversarial Examples, N. Papernot, I. Goodfellow, S. Huang, Y. Duan, P. Abbeel, J. Clark.
- Robust Adversarial Examples, A. Athalye.
- A Brief Introduction to Adversarial Examples, A. Madry et al.
- Training Robust Classifiers (Part 1), A. Madry et al.
- Adversarial Machine Learning Reading List, N. Carlini
- Recommendations for Evaluating Adversarial Example Defenses, N. Carlini
Papers
General
Attack
Image Classification
- DeepFool: a simple and accurate method to fool deep neural networks, S. Moosavi-Dezfooli et al., CVPR 2016
- The Limitations of Deep Learning in Adversarial Settings, N. Papernot et al., EuroS&P 2016
- Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, N. Papernot et al., arXiv 2016
- Adversarial Examples In The Physical World, A. Kurakin et al., ICLR workshop 2017
- Delving into Transferable Adversarial Examples and Black-box Attacks, Y. Liu et al., ICLR 2017
- Towards Evaluating the Robustness of Neural Networks, N. Carlini et al., IEEE S&P 2017
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, N. Papernot et al., AsiaCCS 2017
- Privacy and machine learning: two unexpected allies?, I. Goodfellow et al.
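Most of the attacks listed above perturb an input along the gradient of the model's loss with respect to the pixels. As a minimal sketch of this family (illustrative only, not the exact method of any single paper above), here is the one-step fast gradient sign method in PyTorch; `model`, `x`, and `y` are placeholders for any differentiable classifier, an image batch scaled to [0, 1], and its true labels.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One-step fast gradient sign attack (sketch).

    Perturbs each pixel of x by +/- eps in the direction that
    increases the cross-entropy loss on the true labels y.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + eps * x_adv.grad.sign()  # L-infinity step along the loss gradient
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in the valid [0, 1] range
```

Iterative and black-box attacks in the papers above refine this basic idea with multiple smaller steps or with gradients estimated through a substitute model.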
Reinforcement Learning
Segmentation & Object Detection
VAE-GAN
Speech Recognition
Question Answering System
Defence
Adversarial Training
Defensive Distillation
Generative Model
Regularization
Others
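Of the defence categories above, adversarial training is the easiest to sketch in code: the model is trained on adversarial examples crafted on the fly from its current parameters. The fragment below reuses the hypothetical `fgsm` helper from the attack section; the single-step attack and plain cross-entropy loss are simplifying assumptions (stronger variants, e.g. the multi-step PGD training of Madry et al., follow the same loop).

```python
def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One mini-batch of adversarial training (sketch):
    craft perturbed inputs against the current model,
    then take an ordinary gradient step on them."""
    model.train()
    x_adv = fgsm(model, x, y, eps)   # attack the current parameters
    optimizer.zero_grad()            # clear any stale gradients from the attack
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```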
Talks
- Do Statistical Models Understand the World?, I. Goodfellow, 2015
- Classifiers under Attack, D. Evans, 2017
- Adversarial Examples in Machine Learning, N. Papernot, 2017
- Poisoning Behavioral Malware Clustering, B. Biggio, K. Rieck, D. Ariu, C. Wressnegger, I. Corona, G. Giacinto, F. Roli, 2014
- Is Data Clustering in Adversarial Settings Secure?, B. Biggio, I. Pillai, S. Rota Bulò, D. Ariu, M. Pelillo, F. Roli, 2015
- Poisoning complete-linkage hierarchical clustering, B. Biggio, S. Rota Bulò, I. Pillai, M. Mura, E. Zemene Mequanint, M. Pelillo, F. Roli, 2014
- Is Feature Selection Secure against Training Data Poisoning?, H. Xiao, B. Biggio, G. Brown, G. Fumera, C. Eckert, F. Roli, 2015
- Adversarial Feature Selection Against Evasion Attacks, F. Zhang, P. P. K. Chan, B. Biggio, D. S. Yeung, F. Roli, 2016
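The Biggio et al. entries above concern training-time poisoning, where the attacker corrupts the training data rather than test inputs. Those papers optimize the poison points carefully; as a deliberately crude stand-in, the scikit-learn sketch below just flips a random fraction of training labels to show the effect being defended against (toy data; all names are illustrative).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Flip a random fraction of training labels and watch test accuracy degrade.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
for rate in (0.0, 0.1, 0.3):
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(rate * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: flip 0 <-> 1
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    print(f"flip rate {rate:.0%}: test accuracy {clf.score(X_te, y_te):.3f}")
```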
License
To the extent possible under law, Yen-Chen Lin has waived all copyright and related or neighboring rights to this work.