Essay: The Future of Artificial Intelligence in Cybersecurity
Introduction
Artificial Intelligence is an emerging technology intended to benefit both businesses and individuals by automating tasks and enhancing user experiences across products and services. Though the most advanced and ambitious projects remain experimental, current AI systems are increasingly capable of decision-making, problem-solving, and creativity with little or no human interaction (Benbya et al., 2020). Examples include autonomous vehicles such as Tesla’s Model 3 and applications such as OpenAI’s ChatGPT. Large organizations intend to use these capabilities to improve processes, make decisions, and lower costs. As more organizations build AI into their day-to-day operations, they will become dependent on the technology. This will challenge cybersecurity experts, who must analyze increasingly diverse and complicated data and threats generated by AI, and it will in turn force those experts to depend on AI to counter such threats. If AI’s evolution continues on its projected course with no safeguards in place, the technology may evolve beyond the understanding and control of even the most skilled experts expected to maintain it.
Artificial Intelligence is rapidly evolving, and it will become an existential threat if safeguards are not put in place while society grows dependent on a technology that could advance beyond human understanding. Tech industry leaders and some of the smartest minds in the world are already issuing that warning. Hackers and threat actors will also exploit AI to cause harm to society, and according to Harris (2016), people do not muster the same emotional response to AI as an existential threat as they do to threats such as nuclear warfare or pandemics.
The Warnings
Tech industry leaders released a statement saying, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” (Hart, 2023a). Among the many signatories is scientist Geoffrey Hinton, widely regarded as the “godfather of AI,” who recently resigned from Google and warned about the dangers of the technology. Earlier that year, Elon Musk, Steve Wozniak, and hundreds of other high-profile technologists, entrepreneurs, and researchers signed a letter calling on AI labs to immediately halt work on powerful AI systems, urging developers to step back from the race to deploy more advanced products until experts can assess the risks AI poses to humanity (Hart, 2023b). During a BBC interview back in 2014, Stephen Hawking predicted that AI could “spell the end of the human race,” adding, “It would take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded” (Colton, 2023).
From a cybersecurity perspective, AI algorithms will become more sophisticated, and according to Balaban (2021), there is a lack of transparency in the design and practical application of these algorithms, which allows them to be repurposed for ends their creators never intended. Applications built to improve a product or service could therefore be manipulated by threat actors for nefarious purposes. Furthermore, it is not a question of if hackers will gain access to more sophisticated AI models, but when they will use AI to develop highly sophisticated attack vectors. This raises the possibility that AI-powered attacks will become so advanced that no human will be able to defend against them without the help of AI.
AI Powered Cyber Threats
One of the challenges cybersecurity experts face as AI continues to rapidly evolve is developing Explainable Artificial Intelligence (XAI). Experts can build AI models capable of defending against attacks, but there have been reported instances in which analysts could not explain or comprehend how an attack was mitigated. It is important for cybersecurity analysts to understand the tools they use to analyze threats and to understand the mitigation process. Implementing XAI models allows human users to comprehend, trust, and manage cyber-threat defenses. A further challenge is that XAI models themselves are susceptible to exploitation and adversarial attacks, which raises public security concerns (Zhang et al., 2022).
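To make explainability concrete, consider the minimal sketch below. It is not drawn from Zhang et al. (2022); the feature names and numbers are hypothetical. A shallow decision tree is trained on synthetic connection records, and its verdicts can be printed as explicit rules that an analyst can audit, which is the property XAI seeks to preserve in far more complex models.

```python
# Minimal XAI illustration: a decision tree is one of the simplest "explainable"
# models, because its decisions trace back to explicit, auditable rules.
# All feature names and distributions here are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)

# Synthetic connection records: [packets_per_sec, failed_logins, bytes_out]
normal = rng.normal(loc=[50, 0.2, 2_000], scale=[15, 0.5, 800], size=(500, 3))
attack = rng.normal(loc=[400, 6.0, 90_000], scale=[80, 2.0, 20_000], size=(500, 3))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The "explanation": human-readable rules an analyst can verify, unlike a black box.
print(export_text(model, feature_names=["packets_per_sec", "failed_logins", "bytes_out"]))
```

In practice, XAI research applies analogous techniques, such as feature attribution and rule extraction, to deep models whose raw internals are not human-readable.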
Businesses are implementing AI to improve processes and automate tasks, influencing cybersecurity in both positive and negative ways. AI has already been applied to signature-based techniques, a class of intrusion detection methods that identify malware by matching it against known patterns. The downside is that such techniques can only detect known attacks. Machine learning promises to change this, with the positive impact of lowering costs but the negative prospect that AI could replace the need for human analysts. AI also has a significant limitation: like any other software, it is ultimately code, and code can be manipulated by threat actors to weaponize an AI system against the very assets it was designed to protect (Ansari et al., 2022).
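The sketch below illustrates why signature-based detection only catches known attacks; the hashes and payloads are invented for illustration. The scanner flags only byte-for-byte matches against a database of known samples, so even a trivially modified variant evades it. This is the gap machine-learning approaches aim to close by generalizing beyond exact matches.

```python
# Hypothetical sketch of signature-based detection and its core limitation.
# The "malware" payloads here are made-up strings, not real samples.
import hashlib

KNOWN_MALWARE_HASHES = {
    # SHA-256 of a previously analyzed malicious payload
    hashlib.sha256(b"evil-payload-v1").hexdigest(),
}

def signature_scan(payload: bytes) -> bool:
    """Return True only if the payload exactly matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_MALWARE_HASHES

print(signature_scan(b"evil-payload-v1"))  # True: the known sample is caught
print(signature_scan(b"evil-payload-v2"))  # False: a one-byte variant slips through
```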
The Threats
Tech industry leaders and AI developers are warning of the potential threats the technology poses to society, even calling it a possible existential threat. There are multiple paths by which AI could evolve into such a threat. First, AI could advance beyond human understanding and control. It could also become self-improving, making its own decisions that may not benefit human existence (Hart, 2023a).
One analogy that puts this claim in perspective is the Paperclip Maximizer thought experiment, proposed by Oxford philosopher Nick Bostrom in 2003, in which someone creates a superintelligent AI machine whose sole purpose is to manufacture paperclips. The machine can replicate itself and harvest resources from its surroundings to make more paperclips. As it replicates, it does not see a home as a human would; it sees only an opportunity for more paperclips. Scanning humans, it would conclude either that they contain resources it needs to continue its objective, or that they are a threat to that objective and must be destroyed. Eventually the machine would consume entire cities and then spread outward into the universe, intent on making as many paperclips as possible with no regard for human life (Eliot, 2021).
In the cyber world, generative AI (GenAI) applications such as OpenAI’s ChatGPT and Google Bard have already been used in both positive and negative ways. ChatGPT has already been used to spread misinformation on social media that disrupts societal beliefs, world politics, economies, and job markets (Gupta et al., 2023).
Students have used ChatGPT to cheat on their coursework, which erodes the skills they will need in their professions and creates a challenge for hiring managers looking for legitimate talent. Institutions have responded with detection measures, such as requiring students to submit work through services like Turnitin (Khalil & Er, 2023). If offending students find ways around these detection tools and then lean on AI in their daily jobs after graduating, it would reinforce the claim that society is becoming dependent on AI, in some cases leaving individuals dependent on it even for basic professional tasks. Furthermore, if students can leverage AI to succeed in the workplace, large corporations can cut out the middleman and use AI to perform those tasks directly, displacing millions of jobs.
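As a rough illustration of the idea behind similarity-based plagiarism detection (Turnitin’s actual methods are proprietary; this sketch only assumes the general technique), documents can be vectorized and compared. Copied text scores high against its source, while freshly generated AI text often overlaps with nothing in the corpus, which is precisely why it is harder to detect.

```python
# Toy similarity check: the core idea behind corpus-matching plagiarism tools.
# Example sentences are invented; real systems compare against huge corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus_doc = "The mitochondria is the powerhouse of the cell and produces ATP."
copied = "The mitochondria is the powerhouse of the cell, producing ATP."
generated = "Cellular respiration converts nutrients into usable chemical energy."

vectors = TfidfVectorizer().fit_transform([corpus_doc, copied, generated])
print(cosine_similarity(vectors[0], vectors[1]))  # high score: likely copied
print(cosine_similarity(vectors[0], vectors[2]))  # low score: no textual overlap
```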
AI-assisted cheating has also emerged in gaming communities, where AI and machine learning (ML) have allowed players to cheat in online competitive matches and, in some instances, remain undetected. One example involves Rocket League, whose developer, Psyonix, and the majority of its player base believed the game was cheat-proof, with no way to cheat other than smurfing (a higher-ranked player competing against lower-ranked players). In early 2023, it was discovered that hackers had exploited an AI- and ML-based project called RLGym to field high-level bots in ranked matches; these bots can challenge higher-ranked players, allowing players to cheat for the first time in Rocket League history (Writer, 2023). The point of this example is that AI makes outcomes once thought impossible not only achievable but resilient, posing a serious mitigation challenge for experts.
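For context on how such bots are built: reinforcement-learning frameworks expose a game through an observe-act-reward loop. The sketch below uses the generic Gymnasium interface with a stand-in environment (CartPole), since RLGym’s own API is not described in the source; a trained policy would replace the random action choice.

```python
# Generic reinforcement-learning agent loop (Gymnasium interface). RLGym wraps
# Rocket League behind a similar observe/act/reward cycle; CartPole stands in
# here so the sketch stays self-contained and runnable.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)

total_reward = 0.0
for _ in range(200):
    action = env.action_space.sample()  # a trained policy would choose here
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    if terminated or truncated:
        obs, info = env.reset()

env.close()
print(f"episode reward with a random policy: {total_reward}")
```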
Another aspect of these threats is that if scientists one day develop super-intelligent AI, humans have no idea when that will happen or what to do when it does. According to Harris (2016), super-intelligent AI would operate on a different perception of time, advancing perhaps a million times faster than the human mind. To put this in perspective, such a machine running for one week would perform roughly 20,000 years of human-level intellectual work. If and when super-intelligent AI is achieved, its first deployment could cause a multinational uproar between world powers. For example, if the United States were the first country to deploy the technology, other world powers might see it as a threat, potentially creating conflicts between nations.
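Harris’s figure is simple arithmetic: a machine thinking a million times faster compresses a million human-weeks of work into one week. The quick check below, assuming the 1,000,000x speedup Harris posits, recovers the 20,000-year figure.

```python
# Back-of-the-envelope check of Harris's (2016) claim: a machine running a
# million times faster than a human mind performs a million human-weeks of
# intellectual work in one wall-clock week.
SPEEDUP = 1_000_000        # Harris's assumed electronic-vs-biological factor
WEEKS_PER_YEAR = 52.18     # average, accounting for leap years

human_years = SPEEDUP * 1 / WEEKS_PER_YEAR  # one machine-week of work
print(f"{human_years:,.0f} human-years per week")  # ~19,165, i.e. roughly 20,000
```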
This leads to a more significant threat to society: the use of AI in warfare and cyber-warfare. Imagine drones capable of deciding to destroy human life when given the directive, with no human interaction; then imagine those drones being hacked or exploited by threat actors. According to Reed (2023), there is debate over whether AI and ML should be used in military operations at all, especially with the ability to make decisions involving destruction and death. Using AI in military equipment would require a high level of trust to implement safely and effectively, and given the exploits AI is already susceptible to, it could be argued that such implementations may never be fully trusted.
Threat actors could use more sophisticated AI models to develop pathogens capable of causing pandemics, or to hack vital health, government, and military systems. AI could also be used to manipulate people by spreading misinformation that leads to wars between world powers and loss of life (Hendrycks et al., 2023). Though Hawking’s predictions (Colton, 2023) are still considered speculative, if AI were to become self-improving to the point that it supersedes human abilities and control, agendas and biases implanted in AI models, or devised by the models themselves through self-improvement, would be a detriment to society and possibly a threat to human life. Ultimately, as society grows more dependent on AI, this could lead to AI-versus-AI events that reduce humans’ ability to mitigate threats on their own.
Conclusion
Artificial Intelligence is rapidly evolving and will become an existential threat if safeguards are not put in place while society grows dependent on a technology that could advance beyond human understanding. AI can improve the quality of life for billions of people, but implemented improperly and without safeguards, it could lead society down a destructive path, perhaps even to the extinction of humankind. Though some of the threats AI poses are speculative at this time, others, such as AI-driven misinformation and the use of AI in military operations, could already cause conflicts between nations. It is important that society heed the warnings and avoid becoming dependent on a technology that could lead to the detriment of humankind. Further research is needed to determine the most likely outcomes for AI, and world leaders should be educated on its dangers so that society can come together and shape a tool that improves the future of humanity.
References
Ansari, M. F., Dash, B., Sharma, P., & Yathiraju, N. (2022). The Impact and Limitations of Artificial Intelligence in Cybersecurity: A Literature Review. International Journal of Advanced Research in Computer and Communication Engineering, 11(9). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4323317
Balaban, D. (2021). How AI is Mishandled to Become a Cybersecurity Risk. EWEEK. https://www.eweek.com/security/how-ai-is-mishandled-to-become-a-cybersecurity-risk/
Benbya, H., Davenport, T. H., & Pachidi, S. (2020). Artificial Intelligence in Organizations: Current State and Future Opportunities. MIS Quarterly Executive, 19(4). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3741983
Colton, E. (2023, May 1). Stephen Hawking warned AI could mean the “end of the human race.” New York Post. https://nypost.com/2023/05/01/stephen-hawking-warned-ai-could-mean-the-end-of-the-human-race/
Eliot, L. (2021). Boundaries Of AI And Law Per The Paperclip Maximizer Narrative. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3958561
Gupta, M., Akiri, C., Aryal, K., Parker, E., & Praharaj, L. (2023). From ChatGPT to ThreatGPT: Impact of Generative AI in Cybersecurity and Privacy. IEEE Access, 11, 80218–80245. https://doi.org/10.1109/ACCESS.2023.3300381
Harris, S. (2016). Can we build AI without losing control over it? Ted.com; TED Talks. https://www.ted.com/talks/sam_harris_can_we_build_ai_without_losing_control_over_it?language=en
Hart, R. (2023a). AI Could Cause Human “Extinction,” Tech Leaders Warn. Forbes. https://www.forbes.com/sites/roberthart/2023/05/30/ai-could-cause-human-extinction-tech-leaders-warn/?sh=26af2d6c49f9
Hart, R. (2023b). Elon Musk And Tech Leaders Call For AI “Pause” Over Risks To Humanity. Forbes. https://www.forbes.com/sites/roberthart/2023/03/29/elon-musk-and-tech-leaders-call-for-ai-pause-over-risks-to-humanity/?sh=7098522d6dfc
Hendrycks, D., Mazeika, M., & Woodside, T. (2023, June 26). An Overview of Catastrophic AI Risks. arXiv. https://doi.org/10.48550/arXiv.2306.12001
Khalil, M., & Er, E. (2023). Will ChatGPT get you caught? Rethinking of Plagiarism Detection. arXiv:2302.04335 [cs]. https://arxiv.org/abs/2302.04335
Reed, A. R. (2023). Uncertainty Quantification: Artificial Intelligence and Machine Learning in Military Systems. Air & Space Operations Review, 2(1), 3–15.
Writer, K. R. G. (2023, January 11). Rocket League cheaters are currently using AI in ranked matches. VG247. https://www.vg247.com/rocket-league-cheaters-are-currently-using-ai-in-ranked-matches
Zhang, Z., Hamadi, H. A., Damiani, E., Yeun, C. Y., & Taher, F. (2022). Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research. IEEE Access, 10, 93104–93139. https://doi.org/10.1109/access.2022.3204051