Cybersecurity References
 
__________________________________________________________


Akhtar, N. and Mian, A. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, 26 Feb 2018

Brundage et al. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, Feb 2018

__________________________________________________________

Abadi, M., Chu, A., Goodfellow, I., McMahan, H., Mironov, I., Talwar, K., and Zhang, L. Deep Learning with Differential Privacy, 24 Oct 2016

Abhijith, Wallace, B., Akhavan-Masouleh, S., Davis, A., Wojnowicz, M., and Brook, J. Introduction to Artificial Intelligence for Security Professionals, 12 Aug 2017

Abramson, M. Toward Adversarial Online Learning and the Science of Deceptive Machines, 13 Sep 2017

Al-Dujaili, A., Huang, A., Hemberg, E., and O'Reilly, U. Adversarial Deep Learning for Robust Detection of Binary Encoded Malware, 25 Mar 2018

Allen, G. and Chan, T. Artificial Intelligence and National Security - Belfer Center Study, Jul 2017

Amodei, D. and Olah, C. Concrete Problems in AI Safety, 25 Jul 2016

Anderson, H.S., Kharkar, A., and Filar, B. Evading Machine Learning Malware Detection, 27 Jul 2017

Anderson, H.S., Kharkar, A., Filar, B., Evans, D., and Roth, P. Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning, 26 Jan 2018

Anderson, H.S., Woodbridge, J., and Filar, B. DeepDGA: Adversarially-Tuned Domain Generation and Detection, 6 Oct 2016

Arulkumaran, K., Deisenroth, M., Brundage, M., and Bharath, A. A Brief Survey of Deep Reinforcement Learning, 28 Sep 2017

Barreno, M., Nelson, B., Sears, R., Joseph, A., and Tygar, J.D. Can Machine Learning Be Secure?, 21 Mar 2006

Bastani, O., Kim, C., and Bastani, H. Interpreting Blackbox Models via Model Extraction, 22 May 2018

Biggio, B., Nelson, B., and Laskov, P. Poisoning Attacks against Support Vector Machines, 25 Mar 2013

Biggio, B., Nelson, B., and Laskov, P. Support Vector Machines Under Adversarial Label Noise, 2011

Bulò, S., Biggio, B., Pillai, I., Pelillo, M., and Roli, F. Randomized Prediction Games for Adversarial Machine Learning, 11 Nov 2017

Carbon Black Beyond the Hype: Security Experts Weigh in on Artificial Intelligence, Machine Learning, and Non-Malware Attacks, 2017

Carlini, N. and Wagner, D. Audio Adversarial Examples: Targeted Attacks on Speech-to-Text, 5 Jan 2018

Carlini, N., Mishra, P., Vaidya, T., Zhang, Y., Sherr, M., Shields, C., Wagner, D., and Zhou, W. Hidden Voice Commands, 2016

Carlini, N. and Wagner, D. MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples, 22 Nov 2017

Chen, H. and Wang, F.Y. Artificial Intelligence for Homeland Security, Jan 2005

Chen, P., Sharma, Y., Zhang, H., Yi, J., and Hsieh, C. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples, 10 Feb 2018

Chen, S., Xue, M., Fan, L., Hao, S., Xu, L., Zhu, H., and Li, B. Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach, 31 Oct 2017

Chen, S., Xue, M., Fan, L., and Zhu, H. Hardening Malware Detection Systems Against Cyber Maneuvers: An Adversarial Machine Learning Approach, 13 Oct 2017

Chen, X., Liu, C., Li, B., Lu, K., and Song, D. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning, 15 Dec 2017

Conroy, N., Rubin, V., and Chen, Y. Automatic Deception Detection: Methods for Finding Fake News, Aug 2017

Crawford, K. and Calo, R. There is a blind spot in AI research, 20 Oct 2016

D’Avino, D., Cozzolino, D., Poggi, G., and Verdoliva, L. Autoencoder with recurrent neural networks for video forgery detection, 29 Aug 2017

Demontis, A., Biggio, B., Fumera, G., Giacinto, G., and Roli, F. Infinity-Norm Support Vector Machines Against Adversarial Label Contamination, 2017

Dowlin, N., Gilad-Bachrach, R., Laine, K., Lauter, K., Naehrig, M., and Wernsing, J. CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy, 24 Feb 2016

Elsayed, G., Shankar, S., Cheung, B., Papernot, N., Kurakin, A., Goodfellow, I., and Sohl-Dickstein, J. Adversarial Examples that Fool both Human and Computer Vision, 22 May 2018

Everitt, T., Krakovna, V., Orseau, L., Hutter, M., and Legg, S. Reinforcement Learning with a Corrupted Reward Channel, 19 Aug 2017

Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., Rahmati, A., and Song, D. Robust Physical-World Attacks on Deep Learning Visual Classification, 27 Jul 2017

Fredrikson, M., Jha, S., and Ristenpart, T. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures, 12 Oct 2015

Goodfellow, I., Papernot, N., Huang, S., Duan, Y., Abbeel, P., and Clark, J. Attacking Machine Learning with Adversarial Examples, 24 Feb 2017

Goodfellow, I., Shlens, J., and Szegedy, C. Explaining and Harnessing Adversarial Examples, 20 Mar 2015

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative Adversarial Nets, 10 Jun 2014

Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. Adversarial Examples for Malware Detection, 12 Aug 2017

Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. Adversarial Perturbations Against Deep Neural Networks for Malware Classification, 16 Jun 2016

Grosse, K., Manoharan, P., Papernot, N., Backes, M., and McDaniel, P. On the (Statistical) Detection of Adversarial Examples, 21 Feb 2017

Gu, T., Dolan-Gavitt, B., and Garg, S. BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain, 22 Aug 2017

Hicks, K., Hunter, A.P., Samp, L.S., and Coll, G. Assessing the Third Offset Strategy, 2017

Hitawala, S. Comparative Study on Generative Adversarial Networks, 12 Jan 2018

Hosseini, H., Chen, Y., Kannan, S., Zhang, B., and Poovendran, R. Blocking Transferability of Adversarial Examples in Black-Box Learning Systems, 13 Mar 2017

Hosseini, H., Xiao, B., and Poovendran, R. Google’s Cloud Vision API Is Not Robust To Noise, 20 Jul 2017

Hu, W. and Tan, Y. Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN (MalGAN), 20 Feb 2017

Huang, L., Joseph, A., Nelson, B., Rubinstein, B., and Tygar, J.D. Adversarial Machine Learning, Oct 2011

Jin, J., Dundar, A., and Culurciello, E. Robust Convolutional Neural Networks under Adversarial Noise, 25 Feb 2016

Kantarcioglu, M. and Xi, B. Adversarial Data Mining for Cyber Security, 28 Oct 2016

Kantchelian, A., Tygar, J.D., and Joseph, A. Evasion and Hardening of Tree Ensemble Classifiers, 27 May 2016

Kantchelian, A. Taming Evasions in Machine Learning Based Detection, 12 Aug 2016

Kashyap, A., Parmar, R., Agarwal, M., and Gupta, H. An Evaluation of Digital Image Forgery Detection Approaches, 30 Mar 2017

Kolosnjaji, B., Demontis, A., Biggio, B., Maiorca, D., Giacinto, G., Eckert, C., and Roli, F. Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables, 12 Mar 2018

Kreuk, F., Barak, A., Aviv-Reuven, S., Baruch, M., Pinkas, B., and Keshet, J. Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection, 13 Feb 2018

Kreuk, F., Barak, A., Aviv-Reuven, S., Baruch, M., Pinkas, B., and Keshet, J. Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples, 13 May 2018

Kurakin, A., Goodfellow, I., and Bengio, S. Adversarial examples in the physical world, 11 Feb 2017

Laskov, P. and Lippmann, R. Machine Learning in Adversarial Environments, 28 Jun 2010

Lewis, L. Insights for the Third Offset: Addressing Challenges of Autonomy and Artificial Intelligence in Military Operations, Sep 2017

Lu, P., Chen, P., Chen, K., and Yu, C. On the Limitation of MagNet Defense against L1-based Adversarial Examples, 9 May 2018

Luo, B., Liu, Y., Wei, L., and Xu, Q. Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks, 15 Jan 2018

Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks, 19 Jun 2017

Mayer, M., Norwegian Institute for Defence Studies, Oslo. IFS Insights, Apr 2018

Meng, D. and Chen, H. MagNet: a Two-Pronged Defense against Adversarial Examples, 11 Sep 2017

Miller, D., Hu, X., Qiu, Z., and Kesidis, G. Adversarial Learning: A Critical Review and Active Learning Study, 27 May 2017

Muñoz-González, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V., Lupu, E., and Roli, F. Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization, 29 Aug 2017

Nataraj, L. A Signal Processing Approach To Malware Analysis, Dec 2015

Naveiro, R., Redondo, A., Insua, D., and Ruggeri, F. Adversarial classification: An adversarial risk analysis approach, 21 Feb 2018

Nguyen, A., Yosinski, J., and Clune, J. Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, 2 Apr 2015

North Atlantic Treaty Organization (NATO): Cooperative Cyber Defence Centre of Excellence; Minárik, T., Jakschis, R., and Lindström, L. 10th International Conference on Cyber Conflict CyCon X: Maximising Effects, 30 May 2018

North Atlantic Treaty Organization (NATO): Joint Air Power Competence Centre NATO Joint Air Power and Offensive Cyber Operations, Nov 2017

Norton, A. and Qi, Y. Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning, 1 Aug 2017

Ororbia II, A., Giles, C., and Kifer, D. Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization, 29 Jul 2016

Papernot, N., Goodfellow, I., Erlingsson, U., and McDaniel, P. Adversarial Examples in Machine Learning, 1 Feb 2017

Papernot, N., Goodfellow, I., Sheatsley, R., Feinman, R., and McDaniel, P. Cleverhans v.1.0.0: an adversarial machine learning library, 14 Dec 2016

Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks, 14 Nov 2015

Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. The Limitations of Deep Learning in Adversarial Settings, 24 Nov 2015

Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B. and Swami, A. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 19 Feb 2016

Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B. and Swami, A. Practical Black-Box Attacks against Machine Learning, 8 Feb 2016

Papernot, N., McDaniel, P., Sinha, A., and Wellman M. Towards the Science of Security and Privacy in Machine Learning, 11 Nov 2016

Papernot, N., McDaniel, P., and Goodfellow, I. Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples, 24 May 2016

Paudice, A., Muñoz-González, L., Gyorgy, A., and Lupu, E. Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection, 8 Feb 2018

Radford, A., Metz, L. and Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks, 7 Jan 2016

Raghunathan, A., Steinhardt, J., and Liang, P. Certified Defenses against Adversarial Examples, 29 Jan 2018

Rahman, M., Azimpourkivi, M., Topkara, U., and Carbunar, B. Video Liveness for Citizen Journalism: Attacks and Defenses, Apr 2017

Rouhani, B., Riazi, M., and Koushanfar, F. CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning, 1 Apr 2018

Rouhani, B., Riazi, M., and Koushanfar, F. DeepSecure: Scalable Provably-Secure Deep Learning, 24 May 2017

Rubinstein, B., Nelson, B., Huang, L., Joseph, A., Lau, S., Rao, S., Taft, N., and Tygar, J.D. ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors, 2009

Schneier, B. The Internet of Things is Wildly Insecure--and Often Unpatchable, 2014

Schneier, B. Security and the Internet of Things, 2017

Shen, S., Tople, S., and Saxena, P. AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems, 5 Dec 2016

Shokri, R., Stronati, M., and Shmatikov, V. Membership Inference Attacks Against Machine Learning Models, 31 Mar 2017

Šrndic, N. and Laskov, P. Practical Evasion of a Learning-Based Classifier: A Case Study, 2014

Stevens, R., Suciu, O., Ruef, A., Hong, S., Hicks, M., and Dumitras, T. Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning, 17 Jan 2017

Stokes, J., Wang, D., Marinescu, M., Marino, M., and Bussone, B. Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models, 16 Dec 2017

Stoica, I., Song, D., Popa, R., Patterson, D., Mahoney, M., Katz, R., Joseph, A., Jordan, M., Hellerstein, J., Gonzalez, J., Goldberg, K., Ghodsi, A., Culler, D., and Abbeel, P. A Berkeley View of Systems Challenges for AI, 15 Dec 2017

Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. and Fergus, R. Intriguing properties of neural networks, 19 Feb 2014

Tramèr, F., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. The Space of Transferable Adversarial Examples, 23 May 2017

Uesato, J., O'Donoghue, B., Oord, A., and Kohli, P. Adversarial Risk and the Dangers of Evaluating Against Weak Attacks, 12 Jun 2018

U.S. Department of Defense Law of War Manual, Chapter XVI | Cyber Operations, 2015

U.S. Department of Defense: US Air Force Artificial Intelligence and National Security, 26 Apr 2018

U.S. Department of Defense: Army Cyber Institute at West Point and Arizona State University The New Dogs of War: The Future of Weaponized Artificial Intelligence, 2017

U.S. Department of Defense: U.S. Defense Science Board Report of the Defense Science Board (DSB) Task Force on Cyber Supply Chain, Apr 2017

U.S. Department of Defense: U.S. Defense Science Board Terms of Reference | Defense Science Board (DSB) Task Force on Counter Autonomy, 18 Jun 2018

U.S. Government: American Technology Council (ATC) Report to the President on IT Modernization, 2017

U.S. Government: White House 2018 White House Summit on Artificial Intelligence for American Industry, 10 May 2018

U.S. Department of Homeland Security Artificial Intelligence White Paper | Science and Technology Advisory Committee (HSSTAC): Quadrennial Homeland Security Review Subcommittee, 10 Mar 2017

U.S. Department of Homeland Security Narrative Analysis: Artificial Intelligence | National Protection and Programs Directorate - Office of Cyber and Infrastructure Analysis, July 2017

Uther, W. and Veloso, M. Adversarial Reinforcement Learning, Jan 2003

Waltzmann, R. The Weaponization of Information: The Need for Cognitive Security, testimony presented before the Senate Armed Services Committee, Subcommittee on Cybersecurity, 27 Apr 2017

Wang, Q., Guo, W., Zhang, K., Ororbia II, A., Xing, X., Giles, C., and Liu, X. Adversary Resistant Deep Neural Networks with an Application to Malware Detection, 27 Apr 2017

Wang, C. Evolutionary Generative Adversarial Networks, 1 Mar 2018

Xiao, H. Adversarial and Secure Machine Learning, 27 Oct 2016

Xu, W., Qi, Y., and Evans, D. Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers, Feb 2016

Xu, W., Evans, D., and Qi, Y. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks, 5 Dec 2017

Yampolskiy, R., and Spellchecker, M.S. Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures, Oct 2016

Yan, J., Qi, Y., and Rao, Q. Detecting Malware with an Ensemble Method Based on Deep Neural Network, 18 Aug 2017

Yuan, X., He, P., Zhu, Q., Bhat, R., and Li, X. Adversarial Examples: Attacks and Defenses for Deep Learning, 2018

Zane, C. and Markel, A. Machine Learning Malware Detection, 2015

Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization, 26 Feb 2017