A Review of Artificial Intelligence in Security and Privacy: Research Advances, Applications, Opportunities, and Challenges

Yazan Alaya Al-Khassawneh

Abstract


Artificial intelligence (AI) has the potential to address many societal, economic, and environmental challenges, but only if AI-enabled systems are kept secure. Many AI models produced in recent years can be compromised using cutting-edge attack techniques. This issue has sparked intense research into adversarial AI, aimed at developing machine learning and deep learning models that can withstand various types of attacks. In this paper, we provide a detailed overview of how adversarial attacks against AI applications can be mounted, covering adversarial knowledge and capabilities, existing methods for generating adversarial examples, and existing cyber-defense models. We also investigate numerous cyber countermeasures that can defend AI applications against these attacks and offer a systematic approach for modeling attack strategies against machine learning and AI systems. To safeguard AI applications, we emphasize the importance of understanding the intentions and methods of potential attackers. Finally, we outline the major open problems and the most promising research directions in AI privacy and security.
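To make the notion of "generating adversarial examples" concrete, the sketch below illustrates the fast gradient sign method (FGSM), one of the standard attack techniques of the kind this survey covers: the input is nudged by a small step in the direction that increases the model's loss. The tiny logistic classifier and its weights here are purely illustrative assumptions, not a model from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the sign of the loss gradient (FGSM step)."""
    p = sigmoid(w @ x + b)            # predicted probability of class 1
    grad_x = (p - y_true) * w         # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)  # small, bounded perturbation

# Illustrative linear model and a clean input with true label 1
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])

print(sigmoid(w @ x + b))             # correctly classified as class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.5)
print(sigmoid(w @ x_adv + b))         # confidence collapses after the attack
```

Even on this toy model, a perturbation bounded by eps per feature is enough to flip the prediction, which is why the defenses surveyed in the paper (adversarial training, input sanitization, robust classifier design) treat small-norm perturbations as the core threat model.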

Keywords


Applications; Artificial intelligence; Challenges; Opportunities; Privacy; Security






DOI: https://doi.org/10.17509/ijost.v8i1.52709



Copyright (c) 2022 Universitas Pendidikan Indonesia

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Indonesian Journal of Science and Technology is published by UPI.