© 2023 CoolTechZone – Latest tech news, product reviews, and analyses.


Artificial Intelligence in Cyber Warfare: The End of Humanity Imminent?


“The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”

Stephen Hawking

Cyberwarfare is a new form of warfare waged through computers, software, the internet, and malware to further national agendas and gain an edge in the international game of power.

With the improvement of technology, the means of war have also changed, with better and more efficient “weapons” being used in cyberspace. The latest player in the world of cyberwarfare is artificial intelligence, which adds to the efficiency of such cyberattacks.

AI eliminates the human element and allows a high-performance system to take care of all the details of cyberspace attacks.

While AI-integrated warfare has many benefits, we also need to stay informed about the drawbacks of the technology. AI potentially has the capability to trigger a globally disastrous event that could destroy the whole world's economy and even lead to the extinction of mankind as we know it.

In the article below, we explore just this concept: we discuss cyber warfare and the latest weapon in its arsenal, artificial intelligence. I further discuss the possibilities of cyber warfare attacks using AI and finally look at a small ray of hope for preventing a global AI disaster.

 

Cyber Warfare

Stuxnet, crowned the harbinger of modern cyber warfare, was the first known instance of malware causing physical damage across international boundaries. While there have been a few similar cross-border cyberattacks in the past, none come close to Stuxnet's damage or notoriety.

Allegedly begun as a joint US–Israel operation against Iran, Stuxnet targeted flaws in Windows computers that controlled Siemens programmable logic controller (PLC) units. The objective of the operation, code-named "Olympic Games", was to cripple Iran's nuclear program. Stuxnet managed to damage thousands of centrifuges at the Natanz uranium enrichment facility.

The term cyber warfare itself refers to attacks similar to the aforementioned, where one entity attacks another similar entity to damage the latter's information, financial, and physical resources. Usually, such entities are nation-states, APT groups, or even hacktivist groups.

Process of classifying activities in Cyberspace into Cyberwarfare and a few others

Image Source – “Cyber warfare: Issues and challenges” scholarly paper [by M Robinson, K Jones, and H Janicke]

In 2001, Alford L, in his paper entitled ‘Cyber Warfare: A New Doctrine and Taxonomy’, defined cyberwarfare as:

Any act intended to compel an opponent to fulfil our national will, executed against the software controlling processes within an opponent’s system.

Jeffrey Carr, in his book entitled ‘Inside Cyber Warfare: Mapping the Cyber Underworld’ offers yet another definition for cyberwarfare:

Cyber warfare is the art and science of fighting without fighting; of defeating an opponent without spilling their blood.

These two quotes look at different aspects of cyber warfare: the former at the objective of cyberwarfare, the latter at the art of cyber warfare itself.

Artificial Intelligence: A New Weapon?

When you hear the term Artificial Intelligence or AI, the first thing that comes to mind is humanoid robots taking over the planet and destroying the human race as we know it. The basis of many science fiction movies, artificial intelligence is a rapidly growing field of computer science that attempts to recreate human-like intelligence in a computer system.

Historical Definitions of AI:

  • Systems That Think Like Humans.

The automation of activities that we associate with human thinking, activities such as decision making, problem solving, and learning.

—Bellman, 1978

  • Systems That Act Like Humans.

The art of creating machines that perform functions that require intelligence when performed by people.

—Kurzweil, 1990

  • Systems That Think Rationally.

The study of the computations that make it possible to perceive, reason, and act.

—Winston, 1992

  • Systems That Act Rationally.

The branch of computer science that is concerned with the automation of intelligent behavior.

—Luger and Stubblefield, 1993

Selected Definitions:

  • Automated systems: “A physical system that functions with no (or limited) human operator involvement, typically in structured and unchanging environments, and the system’s performance is limited to the specific set of actions that it has been designed to accomplish ... typically these are well-defined tasks that have predetermined responses according to simple scripted or rule-based prescriptions.”
  • Autonomy: “The condition or quality of being self-governing in order to achieve an assigned task based on the system’s own situational awareness (integrated sensing, perceiving, and analyzing), planning, and decision making.”

Source – “Artificial Intelligence and National Security” report by Congressional Research Service

While AI machines are not going to take over the planet anytime soon, they are already quite sophisticated and advanced. Systems already exist that can mimic human activities as simple as image identification and speech recognition, or as complex as medical diagnosis and vulnerability analysis.

This potential of AI to mimic and enhance such activities has also kept various national governments interested in the field, with a few even making giant leaps. For instance, Iran has been investing in robotics and AI for over 10 years, and the technology is now being integrated into Iranian military operations, making them more destructive and efficient.

A recent example of such developments would be the Loyal Wingman, a pilotless fighter-like jet by Boeing Co, which was tested successfully by the Royal Australian Air Force (RAAF) earlier this year. The combat drone is built to be semi-autonomous, which means that it doesn't require a pilot to manoeuvre and can run combat missions on its own.

The Combat ready drone Loyal Wingman by Boeing Co tested by Royal Australian Air Force (RAAF)

Image Source – boeing.com

The drone has piqued the interest of the US and UK, which could be potential buyers of the combat drone in the near future.

The US has also invested in military AI through the Pentagon's Defense Advanced Research Projects Agency (DARPA), which has been funnelling money into fundamental AI research.

While these applications of AI still involve some form of physical warfare, more recent research seeks to shift the battlefield towards cyberspace. As Jeffrey Carr's quote in the previous section suggests, research into cyber warfare using AI aims to defeat or attack an enemy without spilling any blood.

While AI that can hack or attack the cyberinfrastructure of an enemy nation-state is still under development, other autonomous applications can already be used for cyberwarfare.

Autonomous Vulnerability Scanning

Vulnerability scanning is at the core of all threat intelligence and helps secure a system against flaws exploitable by a hacker. If you can find a vulnerability in a system, you can likely compromise it to cause harm or extract sensitive information.

With the latest developments in AI and machine learning, vulnerability scanning has also been moving towards autonomy. Tools such as SIEM platforms and antivirus software are already semi-autonomous and can find potential vulnerabilities in a system.

Vulnerability Scanning Algorithm used to analyze SQL injection and XSS attacks

Image Source – “Testing and comparing web vulnerability scanning tools for SQL injection and XSS attacks” scholarly paper [by J Fonseca, M Vieira and H Madeira]
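The kind of rule-based check that such scanners automate can be sketched in a few lines. The error signatures and probe string below are purely illustrative assumptions, not taken from any particular tool or from the cited paper:

```python
import re

# Hypothetical error signatures a scanner might look for in HTTP
# responses after injecting SQL test payloads (illustrative, not exhaustive).
SQLI_ERRORS = [
    r"you have an error in your sql syntax",
    r"unclosed quotation mark",
    r"sqlite3\.operationalerror",
]

# A classic reflected-XSS probe: if it comes back unescaped, the page
# is likely echoing attacker-controlled input into the HTML.
XSS_PROBE = "<script>alert(1)</script>"

def classify_response(probe, body):
    """Return the vulnerability classes suggested by a response body."""
    findings = []
    lowered = body.lower()
    # SQL injection is suggested by database error text leaking out.
    if any(re.search(pattern, lowered) for pattern in SQLI_ERRORS):
        findings.append("sqli")
    # Reflected XSS is suggested when the probe is echoed back verbatim.
    if probe == XSS_PROBE and probe in body:
        findings.append("xss")
    return findings
```

A real scanner would wrap this classification step in a crawler that injects payloads into every discovered parameter; the snippet only shows the decision logic that gets automated.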

Newer research has delved deeper into machine learning algorithms, making it possible to render such systems completely autonomous.

The scholarly papers, “False Positive Analysis of software vulnerabilities using Machine learning" [by S Gowda, D Prajapathi, Singh, and SS Gadre], "Context-Aware Software Vulnerability Classification Using Machine Learning” [by G Siewruk, and W Mazurczyk] and “Application of Logistic Regression in WEB Vulnerability Scanning” [by L Tao and Z Long-Tao] explore just this concept of integrating machine learning with vulnerability analysis.
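In the same spirit, a toy logistic-regression classifier can score scanner findings as true or false positives. This is a minimal sketch with hypothetical features, not code from the cited papers:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, lr=0.5, epochs=500):
    """Fit logistic-regression weights by plain stochastic gradient descent."""
    w = [0.0] * (len(samples[0]) + 1)  # last slot is the bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1])
            err = p - y  # gradient of the log loss w.r.t. the logit
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
            w[-1] -= lr * err
    return w

def predict(w, x):
    """True if the finding is classified as a real vulnerability."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + w[-1]) >= 0.5

# Hypothetical features per finding: [payload_reflected, sql_error_in_response]
X = [[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]]
y = [1, 0, 1, 0, 1, 0]  # 1 = confirmed vulnerability, 0 = false positive
w = train(X, y)
```

On this toy data the model learns that the SQL-error feature is the signal that matters; the papers above apply the same idea to far richer feature sets extracted from real scanner output.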

Being able to find vulnerabilities in enemy cyberspace is instrumental to any form of cyberwarfare between nation-states. If I can simply run a system that constantly scans another country's government resources for vulnerabilities, I can spend the rest of my time developing exploits or malware for further use.

Deepfakes

While deepfakes aren't precisely a form of attack against anyone, they can be used to manipulate officials into taking offensive action against someone or another nation. Yes, I understand the result of such a deepfake isn't technically CYBERwarfare, but since the cause lies in cyberspace, I'm counting it.

Deepfake to make a guy look like Tom Cruise – Original vs Deepfake comparison

 Image Source – cybersafetycop.com

Most recently, the leaders of the UK, Latvia, Estonia, and Lithuania were tricked by a couple of "pranksters" who used deepfake technology to impersonate Leonid Volkov, chief of staff of the Russian opposition.

With the deepfake scam, Kuznetsov and Stolyarov, better known as Vovan and Lexus, have managed to trick their way into secret meetings, video conferences, and sometimes even direct phone calls with quite a few Western politicians and celebrities.

This real-life "prank", or whatever you'd like to call it, shows how manipulative deepfake technology can be.

Imagine this: one fine day, a video surfaces in which Vladimir Putin, Joe Biden, or Kim Jong-un threatens an imminent nuclear strike. While it might eventually be proven fake, it could cause a worldwide disaster in the meantime. World governments would panic, with some even taking matters into their own hands through pre-emptive strikes.

The people would panic, too, causing mass gatherings, riots, and a total downfall of the entire world economy.

To read more about deepfakes, read our article, Deepfake crimes: How Real and Dangerous They Are in 2021?

Sentience

The basis of most modern sci-fi movies, sentient AI is the end goal of quite a few researchers or “mad scientists” out in the world.

Alan Turing’s famous quote about sentience in machines

Image Source – becominghuman.ai

It all started when Alan Turing, in his scholarly paper entitled "Computing Machinery and Intelligence", proposed the question, “Can machines think?” While somewhat similar notions of sentient computers had appeared before, it was Alan Turing and his ‘Imitation Game’ that took the notion to the next level.

Sentience is basically the capacity to be aware of feelings and external sensations/stimuli and to respond to them accordingly. For humans, external stimuli arrive through the five senses: sight, hearing, smell, taste, and touch. A system that can do this could be classified as a sentient system.

More advanced capabilities would involve a well-defined thought process, free will, and the capacity to respond to actions through experience.

If a computer that humans created achieves sentience, the results might be very unfavourable to human survival. Such a sentient system might quickly grasp the Darwinian principle of survival of the fittest and eliminate the human component from the picture.

But I guess this is too deep into science fiction, where the line between fiction and reality gets too blurry.

A Ray of Hope

However, things aren't as bad as they seem with regard to artificial intelligence in cyberspace. In an interview, Paul Scharre, award-winning author of Army of None: Autonomous Weapons and the Future of War, stated that the US Pentagon and various stakeholders are quite cautious in their approach to AI.

They only approve certain things, such as autonomous missile defence systems, but nothing as potentially dangerous as sentient AI.

The Artificial Intelligence and National Security report by the Congressional Research Service claims that the present family of machine learning algorithms will reach their peak over the next 10 years. The advancement of AI beyond this point will require significant leaps in technology such as high-power computing and quantum computing systems.

In the past, such plateaus in the advancement of AI were known as ‘AI winters’, periods when development in the field slowed down significantly.

Furthermore, the popularity of AI also depends on how users perceive its disadvantages and how much they trust it. With AI becoming more and more common in daily life, even simple shortcomings will not be tolerated by most people, leading to a loss of trust in such systems and diminished willingness to use them in the future.

This concept is explored in the paper entitled “Artificial Intelligence and Life in 2030: One Hundred Year Study on Artificial Intelligence”.

Artificial intelligence also has a myriad of applications in cyberspace, such as threat hunting, vulnerability detection, and even malware analysis. The current advantages of AI, paired with the lack of technology needed for a truly sophisticated system, should ensure that no form of AI-driven cyberwar wipes out the world population or causes a disaster on a global scale.

Conclusion

In our article, we looked at the concept of cyber warfare, along with the latest weapon in its arsenal, Artificial Intelligence. We looked at how AI could lead to a world-impacting attack or disaster and went on to look for a ray of hope amongst all the darkness in the future.

AI is a rapidly growing field of science and has much to offer humankind. Involving AI in cyber warfare might sound like a good idea, but it should be pursued cautiously because of the disastrous risks it carries.

If you enjoyed reading this article or disagree with the ideas presented here, please let us know by leaving your opinions below in the comments.


Comments

Michael Chan
1 year ago
Great article.



Though AI can be used for offensive cyber warfare, it is still a human-triggered event. For every attacker, there will be a defender, and the defending side can also leverage advances in AI to defend.



The current world view is that AI could potentially wipe out the human race, but we have yet to see a realistic scenario of how this might happen. Humans have wiped out civilizations over resource grabs. What benefit would AI systems gain from wiping out humans, assuming they think and behave like humans? My personal view is that humans will wipe themselves out, because AI will be created and directed by us to do both good and bad things. Human greed will drive AI to achieve human goals, just as human greed has wiped out civilizations.
