Cybersecurity References

 
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
[https://www.google.com/search?ei=3-M0W6f4C8O45gLx7onoCQ&q=cyber+security+%7Eadversary+artificial+intelligence+attack+threat+%7Edefense&oq=cyber+security+%7Eadversary+artificial+intelligence+attack+threat+%7Edefense Google search...]
  
 
* [[Case Studies]]
* [[Offense - Adversarial Threats/Attacks]]
* [[Capabilities]]
* [https://www.csiac.org/resources/cybersecurity-related-websites/ Cybersecurity-related Websites | Cyber Security and Information Systems Information Analysis Center (CSIAC)]
* [[Government Services]] ...for other related papers
* [[Cybersecurity: National Institute of Standards and Technology (NIST) & U.S. Department of Homeland Security (DHS)]]
* [https://www.arxiv-sanity.com/ Arxiv Sanity Preserver] to accelerate research
* [https://whitepapers.virtualprivatelibrary.net/Scholar.pdf Academic and Scholar Search Engines and Sources | Marcus P. Zillman - Virtual Private Library]
  
 
__________________________________________________________
  
  
Abadi, M., Chu, A., Goodfellow, I., McMahan, H., Mironov, I., Talwar, K., and Zhang, L. [https://arxiv.org/pdf/1607.00133.pdf Deep Learning with Differential Privacy], 24 Oct 2016
  
Abhijith, Wallace, B., Akhavan-Masouleh, S., Davis, A., Wojnowicz, M., and Brook, J. [https://www.eurotablets.eu/book/1292451282/download-introduction-to-artificial-intelligence-for-security-professionals-the-cylance-press.pdf Introduction to Artificial Intelligence for Security Professionals], 12 Aug 2017
  
Abramson, M. [https://pdfs.semanticscholar.org/b2f7/69ddcf8cae594f39e839aa29b27b98f403ca.pdf Toward Adversarial Online Learning and the Science of Deceptive Machines], 13 Sep 2017
  
Agarap, A. and Pepito, F. [https://arxiv.org/pdf/1801.00318.pdf Towards Building an Intelligent Anti-Malware System: A Deep Learning Approach using Support Vector Machine (SVM) for Malware Classification], 31 Dec 2017
  
Akhtar, N. and Mian, A. [https://arxiv.org/pdf/1801.00553.pdf Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey], 26 Feb 2018
  
Al-Dujaili, A., Huang, A., Hemberg, E., and O'Reilly, U. [https://arxiv.org/pdf/1801.02950.pdf Adversarial Deep Learning for Robust Detection of Binary Encoded Malware], 25 Mar 2018
  
Alkasassbeh, M. [https://arxiv.org/ftp/arxiv/papers/1712/1712.09623.pdf An empirical evaluation for the intrusion detection features based on machine learning and feature selection methods], 27 Dec 2017
  
Allen, G. and Chan, T. [https://www.belfercenter.org/sites/default/files/files/publication/AI%20NatSec%20-%20final.pdf Artificial Intelligence and National Security - Belfer Center Study], Jul 2017
  
Alzantot, M., Balaji, B., and Srivastava, M. [https://arxiv.org/pdf/1801.00554.pdf Did you hear that? Adversarial Examples Against Automatic] [[Speech Recognition]], 2 Jan 2018
  
Amodei, D. and Olah, C. [https://arxiv.org/pdf/1606.06565.pdf Concrete Problems in AI Safety], 25 Jul 2016
  
Anderson, H.S., Kharkar, A., and Filar, B. [https://www.blackhat.com/docs/us-17/thursday/us-17-Anderson-Bot-Vs-Bot-Evading-Machine-Learning-Malware-Detection-wp.pdf Evading Machine Learning Malware Detection], 27 Jul 2017
  
Anderson, H.S., Kharkar, A., Filar, B., Evans, D., and Roth, P. [https://arxiv.org/pdf/1801.08917.pdf Learning to Evade Static PE Machine Learning Malware Models via Reinforcement Learning], 26 Jan 2018
  
Anderson, H.S., Woodbridge, J., and Filar, B. [https://arxiv.org/pdf/1610.01969.pdf DeepDGA: Adversarially-Tuned Domain Generation and Detection], 6 Oct 2016
  
Arulkumaran, K., Deisenroth, M., Brundage, M., and Bharath, A. [https://arxiv.org/pdf/1708.05866.pdf A Brief Survey of Deep Reinforcement Learning], 28 Sep 2017
  
Athalye, A. and Carlini, N. [https://arxiv.org/pdf/1804.03286.pdf On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses], 10 Apr 2018
  
Bao, R., Liang, S., and Wang, Q. [https://arxiv.org/pdf/1805.07862.pdf Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference], 21 May 2018
  
Barreno, M., Nelson, B., Sears, R., Joseph, A., and Tygar, J.D. [https://bnrg.cs.berkeley.edu/~adj/publications/paper-files/asiaccs06.pdf Can Machine Learning Be Secure?], 21 Mar 2006
  
Bastani, O., Kim, C., and Bastani, H. [https://arxiv.org/pdf/1705.08504.pdf Interpreting Blackbox Models via Model Extraction], 22 May 2018
  
Bauer, H., Burkacky, O., and Knochenhauer, C. [https://www.mckinsey.com/industries/semiconductors/our-insights/security-in-the-internet-of-things Security in the Internet of Things], May 2017
  
Biggio, B., Nelson, B., and Laskov, P. [https://arxiv.org/pdf/1206.6389.pdf Poisoning Attacks against Support Vector Machines], 25 Mar 2013
  
Biggio, B., Nelson, B., and Laskov, P. [https://proceedings.mlr.press/v20/biggio11/biggio11.pdf Support Vector Machines Under Adversarial Label Noise], 2011
  
Brundage et al. [https://arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation], Feb 2018
  
Bulò, S., Biggio, B., Pillai, I., Pelillo, M., and Roli, F. [https://pralab.diee.unica.it/sites/default/files/bulo16-tnnls.pdf Randomized Prediction Games for Adversarial Machine Learning], 11 Nov 2017
  
Cao, X. and Gong, N.Z. [https://arxiv.org/pdf/1709.05583.pdf Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification], 11 Jan 2018
  
Carbon Black [https://www.carbonblack.com/wp-content/uploads/2017/03/Carbon_Black_Research_Report_NonMalwareAttacks_ArtificialIntelligence_MachineLearning_BeyondtheHype.pdf Beyond the Hype: Security Experts Weigh in on Artificial Intelligence, Machine Learning, and Non-Malware Attacks], 2017
  
Carlini, N. and Wagner, D. [https://arxiv.org/pdf/1705.07263.pdf Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods], 1 Nov 2017
  
Carlini, N. and Wagner, D. [https://arxiv.org/pdf/1801.01944.pdf Audio Adversarial Examples: Targeted Attacks on Speech-to-Text], 5 Jan 2018
  
Carlini, N., Mishra, P., Vaidya, T., Zhang, Y., Sherr, M., Shields, C., Wagner, D., and Zhou, W. [https://nicholas.carlini.com/papers/2016_usenix_hiddenvoicecommands.pdf Hidden Voice Commands], 2016
  
Carlini, N. and Wagner, D. [https://arxiv.org/pdf/1711.08478.pdf MagNet and "Efficient Defenses Against Adversarial Attacks" are Not Robust to Adversarial Examples], 22 Nov 2017
  
Carlini, N. and Wagner, D. [https://arxiv.org/pdf/1608.04644.pdf Towards Evaluating the Robustness of Neural Networks], 22 Mar 2017
  
Chen, H. and Wang, F.Y. [https://www.researchgate.net/publication/242517767_Artificial_Intelligence_for_Homeland_Security Artificial Intelligence for Homeland Security], Jan 2005
  
Chen, P., Sharma, Y., Zhang, H., Yi, J., and Hsieh, C. [https://arxiv.org/pdf/1709.04114.pdf EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples], 10 Feb 2018
  
Chen, S., Xue, M., Fan, L., Hao, S., Xu, L., Zhu, H., and Li, B. [https://arxiv.org/pdf/1706.04146.pdf Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach], 31 Oct 2017
  
Chen, S., Xue, M., Fan, L., and Zhu, H. [https://www.researchgate.net/publication/317576889_Hardening_Malware_Detection_Systems_Against_Cyber_Maneuvers_An_Adversarial_Machine_Learning_Approach Hardening Malware Detection Systems Against Cyber Maneuvers: An Adversarial Machine Learning Approach], 13 Oct 2017
  
Chen, X., Liu, C., Li, B., Lu, K., and Song, D. [https://arxiv.org/pdf/1712.05526.pdf Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning], 15 Dec 2017
  
Conroy, N., Rubin, V., and Chen, Y. [https://www.asist.org/files/meetings/am15/proceedings/submissions/posters/193poster.pdf Automatic Deception Detection: Methods for Finding Fake News], Aug 2017
  
Crawford, K. and Calo, R. [https://www.nature.com/polopoly_fs/1.20805!/menu/main/topColumns/topLeftColumn/pdf/538311a.pdf There is a blind spot in AI research], 20 Oct 2016
  
Das, N., Shanbhogue, M., Chen, S., Chen, L., Kounavis, M., and Chau, D. [https://arxiv.org/pdf/1805.11852.pdf ADAGIO: Interactive Experimentation with Adversarial Attack and Defense for Audio], 30 May 2018
  
Das, N., Shanbhogue, M., Chen, S., Hohman, F., Chen, L., Kounavis, M., and Chau, D. [https://arxiv.org/pdf/1705.02900.pdf Keeping the Bad Guys Out: Protecting and Vaccinating Deep Learning with JPEG Compression], 8 May 2017
  
Das, N., Shanbhogue, M., Chen, S., Hohman, F., Li, S., Chen, L., Kounavis, M., and Chau, D. [https://arxiv.org/pdf/1802.06816.pdf Shield: Fast, Practical Defense and Vaccination for Deep Learning using JPEG Compression], 19 Feb 2018
  
D’Avino, D., Cozzolino, D., Poggi, G., and Verdoliva, L. [https://arxiv.org/pdf/1708.08754.pdf Autoencoder with recurrent neural networks for video forgery detection], 29 Aug 2017
  
Demontis, A., Biggio, B., Fumera, G., Giacinto, G., and Roli, F. [https://ceur-ws.org/Vol-1816/paper-11.pdf Infinity-Norm Support Vector Machines Against Adversarial Label Contamination], 2017
  
Dowlin, N., Gilad-Bachrach, R., Laine, K., Lauter, K., Naehrig, M., and Wernsing, J. [https://www.microsoft.com/en-us/research/wp-content/uploads/2016/04/CryptonetsTechReport.pdf CryptoNets: Applying Neural Networks to Encrypted Data with High Throughput and Accuracy], 24 Feb 2016
  
Elsayed, G., Shankar, S., Cheung, B., Papernot, N., Kurakin, A., Goodfellow, I., and Sohl-Dickstein, J. [https://arxiv.org/pdf/1802.08195.pdf Adversarial Examples that Fool both Human and Computer Vision], 22 May 2018
  
Elsayed, G., Goodfellow, I., and Sohl-Dickstein, J. [https://arxiv.org/pdf/1806.11146.pdf Adversarial Reprogramming of Neural Networks], 28 Jun 2018
  
Everitt, T., Krakovna, V., Orseau, L., Hutter, M., and Legg, S. [https://static.ijcai.org/proceedings-2017/0656.pdf Reinforcement Learning with a Corrupted Reward Channel], 19 Aug 2017
  
Evtimov, I., Eykholt, K., Fernandes, E., Kohno, T., Li, B., Prakash, A., Rahmati, A., and Song, D. [https://arxiv.org/pdf/1707.08945.pdf Robust Physical-World Attacks on Deep Learning Visual Classification], 27 Jul 2017
  
Fawzi, A., Fawzi, H., and Fawzi, O. [https://arxiv.org/pdf/1802.08686.pdf Adversarial vulnerability for any classifier], 23 Feb 2018
  
Folz, J., Palacio, S., Hees, J., Borth, D., and Dengel, A. [https://arxiv.org/pdf/1803.07994.pdf Adversarial Defense based on Structure-to-Signal Autoencoders], 21 Mar 2018
  
Fredrikson, M., Jha, S., and Ristenpart, T. [https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures], 12 Oct 2015
  
Goodfellow, I., Papernot, N., Huang, S., Duan, Y., Abbeel, P., and Clark, J. [https://blog.openai.com/adversarial-example-research/ Attacking Machine Learning with Adversarial Examples], 24 Feb 2017
  
Goodfellow, I., Shlens, J., and Szegedy, C. [https://arxiv.org/pdf/1412.6572.pdf Explaining and Harnessing Adversarial Examples], 20 Mar 2015
  
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. [https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf Generative Adversarial Nets], 10 Jun 2014
  
Gopinath, D., Katz, G., Pasareanu, C., and Barrett, C. [https://arxiv.org/pdf/1710.00486.pdf DeepSafe: A Data-driven Approach for Checking Adversarial Robustness in Neural Networks], 2 Oct 2017
  
Goswami, G., Ratha, N., Agarwal, A., Singh, R., and Vatsa, M. [https://arxiv.org/pdf/1803.00401.pdf Unravelling Robustness of Deep Learning based Face Recognition Against Adversarial Attacks], 22 Feb 2018
  
Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. [https://www.patrickmcdaniel.org/pubs/esorics17.pdf Adversarial Examples for Malware Detection], 12 Aug 2017
  
Grosse, K., Papernot, N., Manoharan, P., Backes, M., and McDaniel, P. [https://arxiv.org/pdf/1606.04435.pdf Adversarial Perturbations Against Deep Neural Networks for Malware Classification], 16 Jun 2016
  
Grosse, K., Pfaff, D., Smith, M.T., and Backes, M. [https://arxiv.org/pdf/1711.06598.pdf How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models], 16 Feb 2018
  
Grosse, K., Smith, M.T., and Backes, M. [https://arxiv.org/pdf/1806.02032.pdf Killing Three Birds with one Gaussian Process: Analyzing Attack Vectors on Classification], 6 Jun 2018
  
Grosse, K., Manoharan, P., Papernot, N., Backes, M., and McDaniel, P. [https://arxiv.org/pdf/1702.06280.pdf On the (Statistical) Detection of Adversarial Examples], 21 Feb 2017
  
Gu, T., Dolan-Gavitt, B., and Garg, S. [https://machine-learning-and-security.github.io/papers/mlsec17_paper_51.pdf BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain], 22 Aug 2017
  
Hendrycks, D. and Dietterich, T. [https://arxiv.org/pdf/1807.01697.pdf Benchmarking Neural Network Robustness to Common Corruptions and Surface Variations], 4 Jul 2018
  
 
Hicks, K., Hunter, A.P., Samp, L.S., and Coll, G. [https://csis-prod.s3.amazonaws.com/s3fs-public/publication/170302_Ellman_ThirdOffsetStrategySummary_Web.pdf?EXO1GwjFU22_Bkd5A.nx.fJXTKRDKbVR Assessing the Third Offset Strategy], 2017
  
Hitawala, S. [https://arxiv.org/pdf/1801.04271.pdf Comparative Study on Generative Adversarial Networks], 12 Jan 2018
  
Homoliak, I., Toffalini, F., Guarnizo, J., Elovici, Y., and Ochoa, M. [https://arxiv.org/pdf/1805.01612.pdf Insight into Insiders: A Survey of Insider Threat Taxonomies, Analysis, Modeling, and Countermeasures], 4 May 2018
  
Hosseini, H., Chen, Y., Kannan, S., Zhang, B., and Poovendran, R. [https://arxiv.org/pdf/1703.04318.pdf Blocking Transferability of Adversarial Examples in Black-Box Learning Systems], 13 Mar 2017
  
Hosseini, H. and Poovendran, R. [https://arxiv.org/pdf/1804.00499.pdf Semantic Adversarial Examples], 16 Mar 2018
  
Hosseini, H., Xiao, B. and Poovendran, R. [https://arxiv.org/pdf/1704.05051.pdf Google’s Cloud Vision API Is Not Robust To Noise], 20 Jul 2017
  
Jakubovitz, D. and Giryes, R. [https://arxiv.org/pdf/1803.08680.pdf Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization], 23 Mar 2018
  
Lu, P., Chen, P., and Yu, C. [https://arxiv.org/pdf/1803.09638.pdf On the Limitation of Local Intrinsic Dimensionality for Characterizing the Subspaces of Adversarial Examples], 26 Mar 2018
  
Hu, W. and Tan, Y. [https://arxiv.org/pdf/1702.05983.pdf Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN] (MalGAN), 20 Feb 2017
  
Huang, L., Joseph, A., Nelson, B., Rubinstein, B., and Tygar, J.D. [https://people.eecs.berkeley.edu/~tygar/papers/SML2/Adversarial_AISEC.pdf Adversarial Machine Learning], Oct 2011
  
Ilyas, A., Engstrom, L., Athalye, A., and Lin, J. [https://arxiv.org/pdf/1804.08598.pdf Black-box Adversarial Attacks with Limited Queries and Information], 7 Jun 2018
  
Ilyas, A., Jalal, A., Asteri, E., Daskalakis, C., and Dimakis, A.G. [https://arxiv.org/pdf/1712.09196.pdf The Robust Manifold Defense: Adversarial Training using Generative Models], 26 Dec 2017
  
Jia, J. and Gong, N. [https://arxiv.org/pdf/1805.04810.pdf AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning], 13 May 2018
  
Jin, J., Dundar, A., and Culurciello, E. [https://arxiv.org/pdf/1511.06306.pdf Robust Convolutional Neural Networks under Adversarial Noise], 25 Feb 2016
  
Kantarcioglu, M. and Xi, B. [https://www.utdallas.edu/~muratk/CCS-tutorial.pdf Adversarial Data Mining for Cyber Security], 28 Oct 2016
  
 
Kantchelian, A., Tygar, J.D., and Joseph, A. [https://arxiv.org/pdf/1509.07892.pdf Evasion and Hardening of Tree Ensemble Classifiers], 27 May 2016
  
Kantchelian, A. [https://pdfs.semanticscholar.org/4a8d/97172382144b9906e2cec69d3decb4188fb7.pdf Taming Evasions in Machine Learning Based Detection], 12 Aug 2016
  
Kashyap, A., Parmar, R., Agarwal, M., and Gupta, H. [https://arxiv.org/pdf/1703.09968.pdf An Evaluation of Digital Image Forgery Detection Approaches], 30 Mar 2017
  
Kolosnjaji, B., Demontis, A., Biggio, B., Maiorca, D., Giacinto, G., Eckert, C., and Roli, F. [https://arxiv.org/pdf/1803.04173.pdf Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables], 12 Mar 2018
  
Kouzemtchenko, A. [https://arxiv.org/pdf/1806.09035.pdf Defending Malware Classification Networks Against Adversarial Perturbations with Non-Negative Weight Restrictions], 23 Jun 2018
  
Kreuk, F., Barak, A., Aviv-Reuven, S., Baruch, M., Pinkas, B., and Keshet, J. [https://www.groundai.com/project/adversarial-examples-on-discrete-sequences-for-beating-whole-binary-malware-detection/ Adversarial Examples on Discrete Sequences for Beating Whole-Binary Malware Detection], 13 Feb 2018
  
Kreuk, F., Barak, A., Aviv-Reuven, S., Baruch, M., Pinkas, B., and Keshet, J. [https://arxiv.org/pdf/1802.04528.pdf Deceiving End-to-End Deep Learning Malware Detectors using Adversarial Examples], 13 May 2018
  
Kurakin, A., Goodfellow, I., and Bengio, S. [https://arxiv.org/pdf/1607.02533.pdf Adversarial examples in the physical world], 11 Feb 2017
  
Kurakin, A., Goodfellow, I., Bengio, S., Dong, Y., Liao, F., Liang, M., Pang, T., Zhu, J., Hu, X., Xie, C., Wang, J., Zhang, Z., Ren, Z., Yuille, A., Huang, S., Zhao, Y., Zhao, Y., Han, Z., Long, J., Berdibekov, Y., Akiba, T., Tokui, S., and Abe, M. [https://arxiv.org/pdf/1804.00097.pdf Adversarial Attacks and Defences Competition - Google Brain organized a NIPS 2017 competition], 31 Mar 2018
  
Laskov, P. and Lippmann, R. [https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.375.4564&rep=rep1&type=pdf Machine Learning in Adversarial Environments], 28 Jun 2010
  
Lassance, C., Gripon, V., and Ortega, A. [https://arxiv.org/pdf/1805.10133.pdf Laplacian Power Networks: Bounding Indicator Function Smoothness for Adversarial Defense], 24 May 2018
  
Lewis, L. [https://www.cna.org/cna_files/pdf/DRM-2017-U-016281-Final.pdf Insights for the Third Offset: Addressing Challenges of Autonomy and Artificial Intelligence in Military Operations], Sep 2017
  
Li, Y. [https://arxiv.org/pdf/1802.06552.pdf Are Generative Classifiers More Robust to Adversarial Attacks?], 19 Feb 2018
  
Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., and Zhu, J. [https://arxiv.org/pdf/1712.02976.pdf Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser], 8 May 2018
  
Lin, Y., Liu, M., Sun, M., and Huang, J. [https://arxiv.org/pdf/1710.00814.pdf Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight], 2 Oct 2017
  
Liu, Z., Liu, Q., Liu, T., Wang, Y., and Wen, W. [https://arxiv.org/pdf/1803.05787.pdf Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples], 14 Mar 2018
  
Liu, Y., Chen, J., and Chen, H. [https://arxiv.org/pdf/1801.02850.pdf Less is More: Culling the Training Set to Improve Robustness of Deep Neural Networks], 9 Jan 2018
  
Liu, Q., Liu, T., Liu, Z., Wang, Y., Jin, Y., and Wen, W. [https://arxiv.org/pdf/1802.05193.pdf Security Analysis and Enhancement of Model Compressed Deep Learning Systems under Adversarial Attacks], 19 Mar 2018
  
Lu, J., Sibai, H., Fabry, E., and Forsyth, D. [https://arxiv.org/pdf/1707.03501.pdf NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles], 12 Jul 2017
  
Lu, P., Chen, P., Chen, K., and Yu, C. [https://arxiv.org/pdf/1805.00310.pdf On the Limitation of MagNet Defense against L1-based Adversarial Examples], 9 May 2018
  
Luo, B., Liu, Y., Wei, L., and Xu, Q. [https://arxiv.org/pdf/1801.04693.pdf Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks], 15 Jan 2018
  
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. [https://arxiv.org/pdf/1706.06083.pdf Towards Deep Learning Models Resistant to Adversarial Attacks], 19 Jun 2017
  
Maiorca, D., Biggio, B., Chiappe, M., and Giacinto, G. [https://arxiv.org/pdf/1710.10225.pdf Adversarial Detection of Flash Malware: Limitations and Open Issues], 27 Oct 2017
  
Mayer, M. [https://brage.bibsys.no/xmlui/bitstream/handle/11250/2497514/IFS%20Insights_4_2018_Mayer.pdf IFS Insights], Norwegian Institute for Defence Studies, Oslo, Apr 2018
  
Meidan, Y., Bohadana, M., Shabtai, A., Ochoa, M., Tippenhauer, N., Guarnizo, J., and Elovici, Y. [https://arxiv.org/pdf/1709.04647.pdf Detection of Unauthorized IoT Devices Using Machine Learning Techniques], 14 Sep 2017
  
Meng, D. and Chen, H. [https://arxiv.org/pdf/1705.09064.pdf MagNet: a Two-Pronged Defense against Adversarial Examples], 11 Sep 2017
  
Miller, D., Hu, X., Qiu, Z., and Kesidis, G. [https://arxiv.org/pdf/1705.09823.pdf Adversarial Learning: A Critical Review and Active Learning Study], 27 May 2017
  
Moosavi-Dezfooli, S., Shrivastava, A., and Tuzel, O. [https://arxiv.org/pdf/1802.06806.pdf Divide, Denoise, and Defend against Adversarial Attacks], 19 Feb 2018
  
Muñoz-González, L., Biggio, B., Demontis, A., Paudice, A., Wongrassamee, V., Lupu, E., and Roli, F. [https://arxiv.org/pdf/1708.08689.pdf Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization], 29 Aug 2017
  
Naseer, M., Khan, S., and Porikli, F. [https://arxiv.org/pdf/1807.01216.pdf Local Gradients Smoothing: Defense against localized adversarial attacks], 2 Jul 2018
  
 
Nataraj, L. [https://vision.ece.ucsb.edu/sites/vision.ece.ucsb.edu/files/publications/lakshman_thesis_final_1.pdf A Signal Processing Approach To Malware Analysis], Dec 2015
  
Naveiro, R., Redondo, A., Insua, D., and Ruggeri, F. [https://arxiv.org/pdf/1802.07513.pdf Adversarial classification: An adversarial risk analysis approach], 21 Feb 2018
  
Nguyen, A., Yosinski, J., and Clune, J. [https://arxiv.org/pdf/1412.1897.pdf Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images], 2 Apr 2015 [https://techtalks.tv/talks/deep-neural-networks-are-easily-fooled-high-confidence-predictions-for-unrecognizable-images/61573/ Video]
  
Nayebi, A. and Ganguli, S. [https://arxiv.org/pdf/1703.09202.pdf Biologically inspired protection of deep networks from adversarial attacks], 27 Mar 2017
  
Nicolae, M., Sinn, M., Tran, M., Rawat, A., Wistuba, M., Zantedeschi, V., Molloy, I., and Edwards, B. [https://arxiv.org/pdf/1807.01069.pdf Adversarial Robustness Toolbox v0.2.2], 3 Jul 2018
  
North Atlantic Treaty Organization (NATO): Cooperative Cyber Defence Centre of Excellence; Minárik, T., Jakschis, R., and Lindström, L. [https://ccdcoe.org/sites/default/files/multimedia/pdf/CyCon_2018_Full_Book.pdf 10th International Conference on Cyber Conflict CyCon X: Maximising Effects], 30 May 2018
  
 
North Atlantic Treaty Organization (NATO): Joint Air Power Competence Centre [https://www.japcc.org/wp-content/uploads/JAPCC_OCO_screen.pdf NATO Joint Air Power and Offensive Cyber Operations], Nov 2017
  
North Atlantic Treaty Organization (NATO): U.S. Department of Defense: U.S. Army Research Laboratory [https://www.arl.army.mil/arlreports/2018/ARL-TR-8337.pdf Initial Reference Architecture of an Intelligent Autonomous] [[Agents|Agent]] for Cyber Defense, Mar 2018
  
North Atlantic Treaty Organization (NATO): U.S. Department of Defense: U.S. Army Research Laboratory - Research Group IST-152-RTG [https://arxiv.org/ftp/arxiv/papers/1804/1804.07646.pdf Toward Intelligent Autonomous] [[Agents]] for Cyber Defense: Report of the 2017 Workshop, Apr 2018
  
Norton, A. and Qi, Y. [https://arxiv.org/pdf/1708.00807.pdf Adversarial-Playground: A Visualization Suite Showing How Adversarial Examples Fool Deep Learning], 1 Aug 2017
  
Ororbia II, A., Giles, C., and Kifer, D. [https://arxiv.org/pdf/1601.07213.pdf Unifying Adversarial Training Algorithms with Flexible Deep Data Gradient Regularization], 29 Jul 2016
  
Papernot, N., Goodfellow, I., Erlingsson, U., and McDaniel, P. [https://www.usenix.org/sites/default/files/conference/protected-files/enigma17_slides_papernot.pdf Adversarial Examples in Machine Learning], 1 Feb 2017
  
Papernot, N., Goodfellow, I., Sheatsley, R., Feinman, R., and McDaniel, P. [https://pdfs.semanticscholar.org/308b/72045130d02b849b4a8f914eae6d0d684add.pdf Cleverhans v.1.0.0: an adversarial machine learning library], 14 Dec 2016
  
Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. [https://arxiv.org/pdf/1511.04508.pdf Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks], 14 Nov 2015
  
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. [https://arxiv.org/abs/1511.07528 The Limitations of Deep Learning in Adversarial Settings], 24 Nov 2015
  
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B. and Swami, A. [https://arxiv.org/pdf/1602.02697v2.pdf Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples], 19 Feb 2016
  
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B. and Swami, A. [https://arxiv.org/pdf/1602.02697.pdf Practical Black-Box Attacks against Machine Learning], 8 Feb 2016
  
Papernot, N., McDaniel, P., Sinha, A., and Wellman, M. [https://pdfs.semanticscholar.org/ebab/687cd1be7d25392c11f89fce6a63bef7219d.pdf Towards the Science of Security and Privacy in Machine Learning], 11 Nov 2016
  
Papernot, N. et al. [https://arxiv.org/pdf/1610.00768.pdf Technical Report on the CleverHans v2.1.0 Adversarial Examples Library], 27 Jun 2018
  
Papernot, N., McDaniel, P., and Goodfellow, I. [https://arxiv.org/pdf/1605.07277.pdf Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples], 24 May 2016
  
Paudice, A., Muñoz-González, L., Gyorgy, A., and Lupu, E. [https://arxiv.org/pdf/1802.03041.pdf Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection], 8 Feb 2018
  
Prakash, A., Moran, N., Garber, S., DiLillo, A., and Storer, J. [https://arxiv.org/pdf/1801.08926.pdf Deflecting Adversarial Attacks with Pixel Deflection], 30 Mar 2018
  
Radford, A., Metz, L. and Chintala, S. [https://arxiv.org/pdf/1511.06434.pdf Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks], 7 Jan 2016
  
Raghunathan, A., Steinhardt, J., and Liang, P. [https://arxiv.org/pdf/1801.09344.pdf Certified Defenses against Adversarial Examples], 29 Jan 2018
  
Rahman, M., Azimpourkivi, M., Topkara, U., and Carbunar, B. [https://users.cs.fiu.edu/~carbunar/vamos.pdf Video Liveness for Citizen Journalism: Attacks and Defenses], Apr 2017
  
Ranjan, R., Sankaranarayanan, S., Castillo, C.D., and Chellappa, R. [https://arxiv.org/pdf/1712.00699.pdf Improving Network Robustness against Adversarial Attacks with Compact Convolution], 22 Mar 2018
  
Ross, A. and Doshi-Velez, F. [https://arxiv.org/pdf/1711.09404.pdf Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients], 26 Nov 2017
  
Rouhani, B., Riazi, M., and Koushanfar, F. [https://arxiv.org/pdf/1709.02538.pdf CuRTAIL: ChaRacterizing and Thwarting AdversarIal deep Learning], 1 Apr 2018
  
Rouhani, B., Riazi, M., and Koushanfar, F. [https://arxiv.org/ftp/arxiv/papers/1705/1705.08963.pdf DeepSecure: Scalable Provably-Secure Deep Learning], 24 May 2017
  
Rubinstein, B., Nelson, B., Huang, L., Joseph, A., Lau, S., Rao, S., Taft, N., and Tygar, J.D. [https://www.utdallas.edu/~muratk/courses/dmsec_files/rpca_imc09.pdf ANTIDOTE: Understanding and Defending against Poisoning of Anomaly Detectors], 2009
  
Samangouei, P., Kabkab, M., and Chellappa, R. [https://arxiv.org/pdf/1805.06605.pdf Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models], 18 May 2018
  
Santhanam, G. and Grnarova, P. [https://arxiv.org/pdf/1805.10652.pdf Defending Against Adversarial Attacks by Leveraging an Entire GAN], [https://www.youtube.com/watch?v=dFZY9BSIQXU audio], 27 May 2018
  
Schneier, B. [https://www.wired.com/2014/01/theres-no-good-way-to-patch-the-internet-ofthings-and-thats-a-huge-problem/ The Internet of Things is Wildly Insecure--and Often Unpatchable], 2014
  
Schneier, B. [https://www.schneier.com/blog/archives/2017/02/security_and_th.html Security and the Internet of Things], 2017
  
Shafahi, A., Ronny H.W., Najibi, M., Suciu, O., Studer, C., Dumitras, T., and Goldstein, T. [https://arxiv.org/pdf/1804.00792.pdf Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks], 3 Apr 2018
  
Sharif, M., Bauer, L., and Reiter, M. [https://arxiv.org/pdf/1802.09653.pdf On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples], 27 Feb 2018
  
Shen, S., Tople, S., and Saxena, P. [https://www.comp.nus.edu.sg/~shruti90/papers/auror.pdf AUROR: Defending Against Poisoning Attacks in Collaborative Deep Learning Systems], 5 Dec 2016
  
Shokri, R., Stronati, M., and Shmatikov, V. [https://arxiv.org/pdf/1610.05820.pdf Membership Inference Attacks Against Machine Learning Models], 31 Mar 2017
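
The Shokri et al. attack above trains "shadow models" to learn how a target model's outputs differ between training and non-training points. As a heavily simplified illustration of the signal that attack exploits (overconfidence on memorized training data), here is a toy confidence-threshold baseline; the function name and threshold are hypothetical, and this is not the paper's shadow-model method:

<pre>
import numpy as np

def confidence_member_guess(predict, x, threshold=0.9):
    """Toy membership-inference baseline: guess 'training-set member'
    when the model's top predicted probability on x exceeds a fixed
    threshold. Shokri et al. instead train shadow models plus an attack
    classifier; this sketch only shows the overconfidence signal."""
    probs = np.asarray(predict(x))
    return bool(np.max(probs) > threshold)
</pre>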
  
Singh, T. and Kantardzic, M. [https://arxiv.org/pdf/1703.07909.pdf Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains], 23 Mar 2017
  
Sitawarin, C., Bhagoji, A., Mosenia, A., Chiang, M., and Mittal, P. [https://arxiv.org/pdf/1802.06430.pdf DARTS: Deceiving Autonomous Cars with Toxic Signs], 31 May 2018
  
Šrndić, N. and Laskov, P. [https://www.utdallas.edu/~muratk/courses/dmsec_files/srndic-laskov-sp2014.pdf Practical Evasion of a Learning-Based Classifier: A Case Study], 2014
  
Stevens, R., Suciu, O., Ruef, A., Hong, S., Hicks, M., and Dumitras, T. [https://arxiv.org/pdf/1701.04739.pdf  Summoning Demons: The Pursuit of Exploitable Bugs in Machine Learning], 17 Jan 2017
  
Stokes, J., Wang, D., Marinescu, M., Marino, M., and Bussone, B. [https://arxiv.org/pdf/1712.05919.pdf Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models], 16 Dec 2017
  
Stoica, I., Song, D., Popa, R., Patterson, D., Mahoney, M., Katz, R., Joseph, A., Jordan, M., Hellerstein, J., Gonzalez, J., Goldberg, K., Ghodsi, A., Culler, D., and Abbeel, P. [https://arxiv.org/pdf/1712.05855.pdf A Berkeley View of Systems Challenges for AI], 15 Dec 2017
  
Sun, Z., Ozay, M., and Okatani, T. [https://arxiv.org/pdf/1711.01791.pdf HyperNetworks with statistical filtering for defending adversarial examples], 6 Nov 2017
  
Svoboda, J., Masci, J., Monti, F., Bronstein, M., and Guibas, L. [https://arxiv.org/pdf/1806.00088.pdf PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks], 31 May 2018
  
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I. and Fergus, R. [https://arxiv.org/pdf/1312.6199.pdf Intriguing properties of neural networks], 19 Feb 2014
  
Tramèr, F., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. [https://arxiv.org/pdf/1704.03453.pdf The Space of Transferable Adversarial Examples], 23 May 2017
  
Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. [https://arxiv.org/pdf/1705.07204.pdf  Ensemble Adversarial Training: Attacks and Defenses], 30 Jan 2018
  
Tretschk, E., Oh, S., and Fritz, M. [https://arxiv.org/pdf/1805.12487.pdf Sequential Attacks on] [[Agents]] for Long-Term Adversarial Goals, 5 Jul 2018
  
Tsuzuku, Y., Sato, I., and Sugiyama, M. [https://arxiv.org/pdf/1802.04034.pdf Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks], 22 May 2018
  
Uesato, J., O'Donoghue, B., Oord, A., and Kohli, P. [https://arxiv.org/pdf/1802.05666.pdf Adversarial Risk and the Dangers of Evaluating Against Weak Attacks], 12 Jun 2018
  
U.S. Department of Defense [https://www.defense.gov/Portals/1/Documents/law_war_manual15.pdf Law of War Manual, Chapter XVI | Cyber Operations], 2015
  
U.S. Congressional Research Service [https://www.everycrsreport.com/files/20180426_R45178_27fad5077138df0a45f2bf5dc00f4bb61c9a4e88.pdf Artificial Intelligence and National Security], 26 Apr 2018
  
U.S. Department of Defense: U.S. Army Cyber Institute at West Point and Arizona State University [https://threatcasting.com/wp-content/uploads/2017/09/ThreatcastingWest2017.pdf  The New Dogs of War: The Future of Weaponized Artificial Intelligence], 2017
  
U.S. Department of Defense: U.S. Defense Science Board [https://www.dtic.mil/dtic/tr/fulltext/u2/1032191.pdf Report of the Defense Science Board (DSB) Task Force on Cyber Supply Chain], Apr 2017
  
U.S. Department of Defense: U.S. Defense Science Board [https://www.acq.osd.mil/dsb/TORs/2018_TOR_CounterAutonomy_18Jun2018.pdf Terms of Reference | Defense Science Board (DSB) Task Force on Counter Autonomy], 18 Jun 2018
  
U.S. Government: American Technology Council (ATC) [https://itmodernization.cio.gov/assets/report/Report%20to%20the%20President%20on%20IT%20Modernization%20-%20Final.pdf Report to the President on IT Modernization], 2017  
  
U.S. Government: White House [https://www.whitehouse.gov/wp-content/uploads/2018/05/Summary-Report-of-White-House-AI-Summit.pdf 2018 White House Summit on Artificial Intelligence for American Industry], 10 May 2018
  
U.S. [[Government Services#Department of Homeland Security (DHS)| Department of Homeland Security (DHS)]] [https://www.dhs.gov/sites/default/files/publications/Artificial%20Intelligence%20Whitepaper%202017_508%20FINAL_2.pdf Artificial Intelligence White Paper | Science and Technology Advisory Committee (HSSTAC): Quadrennial Homeland Security Review Subcommittee], 10 Mar 2017
  
U.S. [[Government Services#Department of Homeland Security (DHS)| Department of Homeland Security (DHS)]] [https://info.publicintelligence.net/OCIA-ArtificialIntelligence.pdf Narrative Analysis: Artificial Intelligence  | National Protection and Programs Directorate - Office of Cyber and Infrastructure Analysis], July 2017
  
Uther, W. and Veloso, M. [https://www.cs.cmu.edu/~mmv/papers/03TR-advRL.pdf Adversarial Reinforcement Learning], Jan 2003
  
Vijaykeerthy, D., Suri, A., Mehta, S., and Kumaraguru, P. [https://arxiv.org/pdf/1802.01448.pdf Hardening Deep Neural Networks via Adversarial Model Cascades], 12 Feb 2018
  
Waltzman, R. [https://www.rand.org/content/dam/rand/pubs/testimonies/CT400/CT473/RAND_CT473.pdf The Weaponization of Information: The Need for Cognitive Security], testimony presented before the Senate Armed Services Committee, Subcommittee on Cybersecurity, 27 Apr 2017
  
Wang, Q., Guo, W., Zhang, K., Ororbia II, A., Xing, X., Giles, C., and Liu, X. [https://arxiv.org/pdf/1610.01239.pdf Adversary Resistant Deep Neural Networks with an Application to Malware Detection], 27 Apr 2017
  
Wang, D., Li, C., Wen, S., Nepal, S., and Xiang, Y. [https://arxiv.org/pdf/1803.05123.pdf Defensive Collaborative Multi-task Training - Defending against Adversarial Attack towards Deep Neural Networks], 3 Jul 2018
  
Wang, C. [https://arxiv.org/pdf/1803.00657.pdf Evolutionary Generative Adversarial Networks], 1 Mar 2018
  
Warde-Farley, D. and Goodfellow, I. [https://pdfs.semanticscholar.org/b5ec/486044c6218dd41b17d8bba502b32a12b91a.pdf Adversarial Perturbations of Deep Neural Networks], 23 Dec 2016
  
Weng, T., Zhang, H., Chen, P., Yi, J., Su, D., Gao, Y., Hsieh, C., and Daniel, L. [https://arxiv.org/pdf/1801.10578.pdf Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach], 31 Jan 2018
  
Xiao, H. [https://pdfs.semanticscholar.org/6adb/6154e091e6448d63327eadb6159746a2710d.pdf Adversarial and Secure Machine Learning], 27 Oct 2016
  
Xie, C., Zhang, Z., Wang, J., Zhou, Y., Ren, Z., Yuille, A. [https://arxiv.org/pdf/1803.06978.pdf Improving Transferability of Adversarial Examples with Input Diversity], 11 Jun 2018
  
Xie, C., Zhang, Z., Wang, J., Zhou, Y., Ren, Z., Yuille, A. [https://arxiv.org/pdf/1711.01991.pdf Mitigating Adversarial Effects Through Randomization], 28 Feb 2018
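
The randomization defense in the Xie et al. paper above applies a random resize followed by random zero-padding to each input at inference time. A minimal NumPy sketch of that input transform follows; the output size, scale range, and nearest-neighbor resizing are illustrative assumptions rather than the authors' exact configuration:

<pre>
import numpy as np

rng = np.random.default_rng()

def random_resize_pad(x, out_size=331):
    """x: HxWxC float image. Rescale to a random side length with
    nearest-neighbor sampling, then zero-pad to out_size x out_size
    at a random offset, in the spirit of the randomization defense."""
    h, w, c = x.shape
    side = int(rng.integers(int(0.9 * out_size), out_size))
    rows = (np.arange(side) * h / side).astype(int)
    cols = (np.arange(side) * w / side).astype(int)
    resized = x[rows][:, cols]
    top = int(rng.integers(0, out_size - side + 1))
    left = int(rng.integers(0, out_size - side + 1))
    out = np.zeros((out_size, out_size, c), dtype=x.dtype)
    out[top:top + side, left:left + side] = resized
    return out
</pre>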
  
Xu, W., Qi, Y., and Evans, D. [https://evademl.org/docs/evademl.pdf Automatically Evading Classifiers: A Case Study on PDF Malware Classifiers], Feb 2016
  
Xu, W., Evans, D., and Qi, Y. [https://arxiv.org/pdf/1704.01155.pdf Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks], 5 Dec 2017
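
Feature squeezing, per the Xu, Evans, and Qi paper above, compares a model's prediction on an input against its prediction on a "squeezed" (e.g., reduced bit-depth) copy, flagging inputs where the two disagree strongly. A minimal sketch of the bit-depth squeezer and detection score follows; the 4-bit default and any detection threshold are illustrative assumptions, not the paper's tuned settings:

<pre>
import numpy as np

def squeeze_bit_depth(x, bits=4):
    """Reduce an image in [0, 1] to 2**bits levels per channel."""
    levels = 2 ** bits - 1
    return np.round(np.asarray(x) * levels) / levels

def squeezing_score(predict, x, bits=4):
    """L1 distance between predictions on the raw and squeezed input;
    a large score suggests the input may be adversarial."""
    p_raw = np.asarray(predict(x))
    p_sq = np.asarray(predict(squeeze_bit_depth(x, bits)))
    return float(np.abs(p_raw - p_sq).sum())
</pre>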
  
Yampolskiy, R., and Spellchecker, M.S. [https://arxiv.org/ftp/arxiv/papers/1610/1610.07997.pdf Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures], Oct 2016
  
Yan, Z., Guo, Y., and Zhang, C. [https://arxiv.org/pdf/1803.00404.pdf Deep Defense: Training DNNs with Improved Adversarial Robustness], 30 May 2018
  
Yan, J., Qi, Y., and Rao, Q. [https://downloads.hindawi.com/journals/scn/2018/7247095.pdf Detecting Malware with an Ensemble Method Based on Deep Neural Network], 18 Aug 2017
  
Yuan, X., He, P., Zhu, Q., Bhat, R., and Li, X. [https://arxiv.org/pdf/1712.07107.pdf Adversarial Examples: Attacks and Defenses for Deep Learning], 5 Jan 2018
  
Zane, C. and Markel, A. [https://www.dtic.mil/dtic/tr/fulltext/u2/a619747.pdf Machine Learning Malware Detection], 2015
  
Zhang, C., Bengio S., Hardt, M., Recht, B., and Vinyals, O. [https://arxiv.org/pdf/1611.03530.pdf Understanding deep learning requires rethinking generalization], 26 Feb 2017
  
Zhao, P., Fu, Z., Wu, O., Hu, Q., and Wang, J. [https://arxiv.org/pdf/1806.00580.pdf Detecting Adversarial Examples via Key-based Network], 2 Jun 2018
