Open-source project name: cchio/deep-pwning
Open-source project URL: https://github.com/cchio/deep-pwning
Open-source language: Python (100.0%)

Deep-pwning is a lightweight framework for experimenting with machine learning models, with the goal of evaluating their robustness against a motivated adversary.

Note that deep-pwning in its current state is nowhere close to maturity or completion. It is meant to be experimented with, expanded upon, and extended by you. Only then can we help it truly become the go-to penetration testing toolkit for statistical machine learning models.

## Background

Researchers have found that it is surprisingly trivial to trick a machine learning model (classifier, clusterer, regressor, etc.) into making objectively wrong decisions. This field of research is called Adversarial Machine Learning. It is not hyperbole to claim that any motivated attacker can bypass any machine learning system, given enough information and time. However, this issue is often overlooked when architects and engineers design and build machine learning systems. The consequences are worrying when these systems are put into use in critical scenarios, such as in the medical, transportation, financial, or security-related fields. Hence, when one is evaluating the efficacy of applications using machine learning, their malleability in an adversarial setting should be measured alongside the system's precision and recall.

This tool was released at DEF CON 24 in Las Vegas, August 2016, during a talk titled Machine Duping 101: Pwning Deep Learning Systems.

## Structure

This framework is built on top of Tensorflow, and many of the included examples in this repository are modified Tensorflow examples obtained from the Tensorflow GitHub repository. All of the included examples and code implement deep neural networks, but they can be used to generate adversarial images for similarly tasked classifiers that are not implemented with deep neural networks. This is because of the phenomenon of 'transferability' in machine learning, which Papernot et al. expounded upon expertly in this paper. It means that adversarial samples crafted with a DNN model A may be able to fool another distinctly structured DNN model B, as well as some other SVM model C (a toy sketch illustrating this appears after the component overview below). This figure, taken from the aforementioned paper (Papernot et al.), shows the percentage of successful adversarial misclassifications for a source model (used to generate the adversarial samples) on a target model (upon which the adversarial samples are tested).

## Components

Deep-pwning is modularized into several components to minimize code repetition. Because of the vastly different nature of potential classification tasks, the current iteration of the code is optimized for classifying images and phrases (using word vectors). These are the code modules that make up the current iteration of Deep-pwning:
These are the resource directories relevant to the application:
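As referenced in the Structure section above, here is a minimal, self-contained sketch of transferability. It is not part of deep-pwning and uses scikit-learn and NumPy rather than Tensorflow; the choice of models, the binary task, and the epsilon value are illustrative assumptions, not the repository's own method. Adversarial samples are crafted against a logistic regression "model A" with the fast gradient sign method (FGSM, Goodfellow et al.) and then replayed against an independently trained linear SVM "model C":

```python
# Toy transferability demo (not part of deep-pwning): attack a logistic
# regression source model with FGSM, then test the same adversarial samples
# against a separately trained linear SVM target model.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

digits = load_digits()
X = digits.data / 16.0                    # scale pixel values to [0, 1]
y = (digits.target == 0).astype(int)      # binary task: "is this digit a 0?"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model_a = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # source model
model_c = LinearSVC(max_iter=10000).fit(X_tr, y_tr)          # target model

# FGSM for logistic regression: with p = sigmoid(w.x + b), the gradient of the
# log-loss w.r.t. the input x is (p - y) * w, so each pixel is stepped by
# epsilon in the direction of sign of that gradient to maximize the loss.
eps = 0.2
p = model_a.predict_proba(X_te)[:, 1]
grad = (p - y_te)[:, None] * model_a.coef_            # d(loss)/dx per sample
X_adv = np.clip(X_te + eps * np.sign(grad), 0.0, 1.0)  # stay in valid range

print("model A accuracy, clean vs adversarial: %.3f / %.3f"
      % (model_a.score(X_te, y_te), model_a.score(X_adv, y_te)))
print("model C accuracy, clean vs adversarial: %.3f / %.3f"
      % (model_c.score(X_te, y_te), model_c.score(X_adv, y_te)))
```

How much the attack transfers varies with the models and epsilon; the point of the print statements is simply to show that samples crafted against model A also degrade model C, which was never consulted during the attack.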
## Getting Started

### Installation

Please follow the directions for installing Tensorflow found at https://www.tensorflow.org/versions/r0.8/get_started/os_setup.html, which will allow you to pick the Tensorflow binary to install. Then install the remaining dependencies:

```
$ pip install -r requirements.txt
```

### Execution Example (with the MNIST driver)

To restore from a previously trained checkpoint (configuration in config/mnist.conf):

```
$ cd dpwn
$ python mnist_driver.py --restore_checkpoint
```

To train from scratch (note that any previous checkpoint(s) located in the folder specified in the configuration will be overwritten):

```
$ cd dpwn
$ python mnist_driver.py
```

## Task list
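For readers curious what the restore path does, here is a hypothetical sketch of Tensorflow 0.8-era checkpoint handling; the variable, directory, and flag names are illustrative assumptions and are not taken from mnist_driver.py:

```python
# Hypothetical sketch of the checkpoint handling behind --restore_checkpoint
# (Tensorflow 0.8-era API); all names and paths here are illustrative only.
import tensorflow as tf

# Stand-in for the model's parameters; the real graph is built elsewhere.
weights = tf.Variable(tf.zeros([784, 10]), name="weights")

saver = tf.train.Saver()
restore_checkpoint = True  # in the driver, this would come from the CLI flag

with tf.Session() as sess:
    # The checkpoint directory would come from config/mnist.conf.
    ckpt = tf.train.get_checkpoint_state("checkpoints/mnist")
    if restore_checkpoint and ckpt and ckpt.model_checkpoint_path:
        saver.restore(sess, ckpt.model_checkpoint_path)  # resume trained weights
    else:
        sess.run(tf.initialize_all_variables())          # train from scratch
        # ... training loop would run here, periodically saving (and thereby
        # overwriting) checkpoints in the configured folder:
        saver.save(sess, "checkpoints/mnist/model.ckpt")
```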
## Requirements

Note that dpwn requires Tensorflow 0.8.0. Tensorflow 0.9.0 introduces some breaking changes that dpwn does not yet support (a quick version check is sketched after the contributing note below).

## Contributing

(borrowed from the amazing Requests repository by kennethreitz)
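Since the framework is pinned to Tensorflow 0.8.0, it can help to confirm the installed version before running the drivers. A quick sanity check (not part of the repo itself):

```python
# Verify that the Tensorflow version pinned by deep-pwning is installed.
import tensorflow as tf

assert tf.__version__.startswith("0.8."), \
    "deep-pwning expects Tensorflow 0.8.0, found %s" % tf.__version__
print("Tensorflow %s OK" % tf.__version__)
```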
## Acknowledgements

There is so much impressive work from so many machine learning and security researchers that directly or indirectly contributed to this project and inspired this framework. This is a non-exhaustive list of resources that were used or referenced in one way or another:

### Papers
### Code

### Datasets