Cybersecurity References
Revision as of 15:57, 2 July 2018
______________________________________________________
Arulkumaran, K., Deisenroth, M., Brundage, M., and Bharath, A. A Brief Survey of Deep Reinforcement Learning, 28 Sep 2017
Akhtar et al. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 26 Feb 2018
Brundage et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Feb 2018
______________________________________________________
Abadi, M., Chu, A., Goodfellow, I., McMahan, H., Mironov, I., Talwar, K., and Zhang, L. Deep Learning with Differential Privacy, 24 Oct 2016
Abhijith Introduction to Artificial Intelligence for Security Professionals, 12 Aug 2017
Abramson, Myriam Toward Adversarial Online Learning and the Science of Deceptive Machines, 13 Sep 2017
Al-Dujaili et al. Adversarial Deep Learning for Robust Detection of Binary Encoded Malware, 25 Mar 2018
Allen, G. and Chan, T. Artificial Intelligence and National Security - Belfer Center Study, Jul 2017
American Technology Council (ATC), U.S. Government Report to the President on IT Modernization, 2017
Amodei, D. and Olah, C. et al. Concrete Problems in AI Safety, 25 Jul 2016
Anderson, H.S., Kharkar, A., Filar, B., Evans, D., and Roth, P. Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning, 26 Jan 2018
Anderson, H., Woodbridge, J., and Filar, B. DeepDGA: Adversarially-Tuned Domain Generation and Detection, 6 Oct 2016
Army Cyber Institute at West Point and Arizona State University The New Dogs of War: The Future of Weaponized Artificial Intelligence, 2017
Barreno et al. Can Machine Learning Be Secure?, 21 Mar 2006
Bastani, O., Kim, C., and Bastani, H. Interpreting Blackbox Models via Model Extraction, 22 May 2018
Biggio, B., Nelson, B., and Laskov, P. Poisoning Attacks against Support Vector Machines, 25 Mar 2013
Biggio et al. Support Vector Machines Under Adversarial Label Noise, 2011
Bulo et al. Randomized Prediction Games for Adversarial Machine Learning, 11 Nov 2017
Carbon Black [http://www.carbonblack.com/wp-content/uploads/2017/03/Carbon_Black_Research_Report_NonMalwareAttacks_ArtificialIntelligence_MachineLearning_BeyondtheHype.pdf Beyond the Hype: Security Experts Weigh in on Artificial Intelligence, Machine Learning, and Non-Malware Attacks], 2017
Carlini et al. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 5 Jan 2018
Carlini, N., Mishra, P., Vaidya, T., Zhang, Y., Sherr, M., Shields, C., Wagner, D., and Zhou, W. Hidden Voice Commands, 2016
Chen, H. and Wang, F.Y. Artificial Intelligence for Homeland Security, Jan 2005
Chen et al. Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach, 31 Oct 2017
Chen et al. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples, 10 Feb 2018
Chen et al. Hardening Malware Detection Systems Against Cyber Maneuvers: An Adversarial Machine Learning Approach, 13 Oct 2017
Chen et al. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 15 Dec 2017
Conroy, N., Rubin, V., and Chen, Y. Automatic Deception Detection: Methods for Finding Fake News, Aug 2017
Crawford, K. and Calo, R. There is a blind spot in AI research, 20 Oct 2016
D’Avino, D., Cozzolino, D., Poggi, G., and Verdoliva, L. Autoencoder with recurrent neural networks for video forgery detection, 29 Aug 2017
Defense Science Board Terms of Reference - Defense Science Board Task Force on Counter Autonomy, 18 Jun 2018
Demontis et al. Infinity-Norm Support Vector Machines Against Adversarial Label Contamination, 2017
Dowlin, N., Gilad-Bachrach, R., Laine, K., Lauter, K., Naehrig, M., and Wernsing, J. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy, 24 Feb 2016
Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., Rahmati, A., and Song, D. [http://arxiv.org/pdf/1707.08945.pdf Robust Physical-World Attacks on Deep Learning Visual Classification], 2017
Fredrikson et al. [http://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures], 12 Oct 2015
Goodfellow et al. [http://arxiv.org/pdf/1607.02533.pdf Adversarial examples in the physical world], 11 Feb 2017
Goodfellow et al. Adversarial Examples that Fool both Human and Computer Vision, 22 May 2018
Goodfellow et al. Attacking Machine Learning with Adversarial Examples, 24 Feb 2017
Goodfellow et al. Explaining and Harnessing Adversarial Examples, 20 Mar 2015
Goodfellow et al. Generative Adversarial Nets, 2014
Goodfellow et al. The Space of Transferable Adversarial Examples, 23 May 2017
Goodfellow et al. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 24 May 2016
Grosse et al. Adversarial Examples for Malware Detection, 12 Aug 2017
Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 16 Jun 2016
Gu, T., Dolan-Gavitt, B., and Garg, S. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 22 Aug 2017
Hitawala, S. Comparative Study on Generative Adversarial Networks, 12 Jan 2018
Hosseini et al. Blocking Transferability of Adversarial Examples in Black-Box Learning Systems, 13 Mar 2017
Hosseini, H., Xiao, B., and Poovendran, R. Google’s Cloud Vision API Is Not Robust To Noise, 20 Jul 2017
Hu et al. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN, 20 Feb 2017
Huang et al. Adversarial Machine Learning, Oct 2011
Jin et al. Robust Convolutional Neural Networks under Adversarial Noise, 25 Feb 2016
Kantarcioglu et al. Adversarial Data Mining for Cyber Security, 28 Oct 2016
Kantchelian et al. Evasion and Hardening of Tree Ensemble Classifiers, 27 May 2016
Kantchelian Taming Evasions in Machine Learning Based Detection, 12 Aug 2016
Keshet et al. Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection, 13 Feb 2018
Kolosnjaji et al. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables, 12 Mar 2018
Kreuk et al. Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples, 13 May 2018
Laskov et al. Machine Learning in Adversarial Environments, 28 Jun 2010
Luo et al. Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks, 15 Jan 2018
Madry et al. Towards Deep Learning Models Resistant to Adversarial Attacks, 19 Jun 2017
Mayer, M. Norwegian Institute for Defence Studies, Oslo IFS Insights, Apr 2018
Miller et al. Adversarial Learning: A Critical Review and Active Learning Study, 27 May 2017
Muñoz-González et al. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 29 Aug 2017
Naveiro et al. Adversarial classification: An adversarial risk analysis approach, 21 Feb 2018
Nguyen, A., Yosinski, J., and Clune, J. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2 Apr 2015 (video)
North Atlantic Treaty Organization: Joint Air Power Competence Centre NATO Joint Air Power and Offensive Cyber Operations, Nov 2017
Norton et al. Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning, 1 Aug 2017
Ororbia II et al. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization, 29 Jul 2016
Papernot et al. Adversarial Examples in Machine Learning, 1 Feb 2017
Papernot et al. Adversarial Perturbations Against Deep Neural Networks, 16 Jun 2016
Papernot et al. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, 14 Nov 2015
Papernot et al. The Limitations of Deep Learning in Adversarial Settings, 24 Nov 2015
Papernot et al. On the (Statistical) Detection of Adversarial Examples, 21 Feb 2017
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., and Swami, A. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 19 Feb 2016
Papernot et al. Practical Black-Box Attacks against Machine Learning, 8 Feb 2016
Papernot, N., McDaniel, P., Sinha, A., and Wellman, M. Towards the Science of Security and Privacy in Machine Learning, 11 Nov 2016
Paudice et al. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 8 Feb 2018
Radford, A., Metz, L. and Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 7 Jan 2016
Raghunathan et al. Certified Defenses against Adversarial Examples, 29 Jan 2018
Rahman, M., Azimpourkivi, M., Topkara, U., Carbunar, B. Video Liveness for Citizen Journalism: Attacks and Defenses, Apr 2017
Rouhani et al. CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning, 1 Apr 2018
Rouhani, B., Riazi, M., and Koushanfar, F. DeepSecure: Scalable Provably-Secure Deep Learning, 24 May 2017
Schneier, B. The Internet of Things is Wildly Insecure--and Often Unpatchable, 2014
Schneier, B. Security and the Internet of Things, 2017
Shen et al. AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems, 5 Dec 2016
Shokri, R., Stronati, M., and Shmatikov, V. Membership Inference Attacks Against Machine Learning Models, 31 Mar 2017
Stevens, R., Suciu, O., Ruef, A., Hong, S., Hicks, M., Dumitras, T. Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning, 17 Jan 2017
Stokes et al. Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models, 16 Dec 2017
Stoica, I., Song, D., Popa, R., Patterson, D., Mahoney, M., Katz, R., Joseph, A., Jordan, M., Hellerstein, J., Gonzalez, J., Goldberg, K., Ghodsi, A., Culler, D., and Abbeel, P. A Berkeley View of Systems Challenges for AI, 15 Dec 2017
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. and Fergus, R. Intriguing properties of neural networks, 19 Feb 2014
Uesato et al. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 12 Jun 2018
U.S. Department of Defense Law of War Manual, Chapter XVI - Cyber Operations, 2015
U.S. Department of Defense: US Air Force Artificial Intelligence and National Security, 26 Apr 2018
U.S. Department of Homeland Security Artificial Intelligence White Paper, Science and Technology Advisory Committee (HSSTAC): Quadrennial Homeland Security Review Subcommittee, 10 Mar 2017
Uther et al. Adversarial Reinforcement Learning, Jan 2003
Wang C. et al. Adversary Resistant Deep Neural Networks with an Application to Malware Detection, 27 Apr 2017
Wang C. Evolutionary Generative Adversarial Networks, 1 Mar 2018
White House 2018 White House Summit on Artificial Intelligence for American Industry, 10 May 2018
Xiao et al. Adversarial and Secure Machine Learning, 27 Oct 2016
Xu et al. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers, Feb 2016
Xu et al. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 5 Dec 2017
Yuan et al. Adversarial Examples: Attacks and Defenses for Deep Learning, 2018