
AI-Based Cyberwarfare Is A Two-Way Street

Fire – good or bad? The internet – good or bad? Artificial Intelligence – same question.

If you answered “both”, you’re correct. It all depends on what you use it for. And in the case of cyberwarfare, the prize goes to the one that uses AI the best. 

Whether that’s the White Hats or Black Hats is yet to be determined, but wars are won battle by battle, and both sides are digging in their heels for the fight.

AI as Cyber Weapon

Like any piece of high-powered technology, AI offers significant advantages to both sides. Here are some of the ways threat actors are leveraging AI for their purposes, and a few ways they soon might:

1. Mass AI-produced malware | Something in the ballpark of 10,000 new ransomware strains were discovered in the first six months of 2022 alone. Was AI a part of that prolific hike? Take your best guess. AI can exponentially multiply the sheer amount of attack damage a threat actor can do. Putting it into context, prominent cybersecurity expert Mikko Hyppönen postulates, “How do you get it on 10,000 computers? How do you find a way inside corporate networks? How do you bypass the different safeguards? ...All of that is manual." With AI, by contrast, "All of this is done in an instant by machines.”

2. AI attacks targeting autonomous vehicles | The European Union warns that AI makes self-driving cars “highly vulnerable to a wide range of attacks”. Says the report, “The attack might be used to make the AI ‘blind’ for pedestrians by manipulating for instance the image recognition component in order to misclassify pedestrians. This could lead to havoc on the streets.” In this case, AI doesn’t even need to be leveraged as an attack weapon – poorly protected AI serves as the vulnerability.

3. AI-powered ransomware | Given their machine learning capabilities, AI-generated ransomware strains can learn what it takes to be sneakier than ever. As noted in Cybertalk, “Unlike older hacks, AI-powered ransomware can mimic normal system behaviors and blend in without drawing suspicion. The embedded virus then trains itself to strike when someone accesses a device, taking control of any software the user opens.”

4. ChatGPT and insider AI threats | As generative AI models gain widespread use, users would be wise to look before they leap. AI keeps no secrets, and anything shared with it can (and might) be used against you. Major corporations have found this out the hard way and have even banned their employees from using it at work. According to cybersecurity firm Cyberhaven, “Because the underlying models driving chatbots like ChatGPT grow by incorporating more information into their dataset...[they] can expose source code, sensitive data, & confidential information you share with it.”

5. AI-driven phishing and reconnaissance | AI learns so well that it can outdo even the most experienced threat actors. As Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law, notes, "Security experts have noted that AI-generated phishing emails actually have higher rates of being opened ...than manually crafted phishing emails.” Additionally, he expounds that “AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses.” The unspoken implication is that AI-driven tools do this better, more efficiently, and far faster than humans.

AI as Cyber Defense

Fortunately, there are ways security defenders can fight back. AI is a genie in a bottle, granting wishes of equal scale and magnitude to whoever rubs the lamp, and security experts haven’t been sleeping on the job. Here are some ways AI is being used to boost cybersecurity effectiveness and fight back at scale.

1. Waking up to AI | An IBM report notes that 93% of executives already use or are planning to use AI and ML in their tech stacks. The gauntlet has been thrown and C-levels are picking up the challenge. That’s great, because security culture starts at the top. Once boardrooms and budget are behind AI-based security, a lot of necessary technologies can get green-lighted. 

2. Behavioral-driven protection | Finding known exploits is hard enough. The trouble is worse when you don’t know what you’re looking for. When it comes to ferreting out never-before-seen threats, Hari Ravichandran, CEO and Founder of cybersecurity firm Aura, notes, “AI has the ability to make inferences, recognize patterns and perform proactive actions on the user's behalf, extending our ability to shield ourselves from online threats.” By using machine learning to spot malicious patterns of behavior, AI can see what we can’t. 
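The pattern Ravichandran describes – learn a baseline of normal user behavior, then flag sharp deviations – can be sketched in miniature. This toy example uses a simple statistical z-score as a stand-in for a real machine learning model; the login counts and threshold are invented for illustration.

```python
from statistics import mean, stdev

def is_anomalous(baseline, todays_count, threshold=3.0):
    """Flag a value that deviates sharply (z-score above threshold)
    from a user's learned behavioral baseline."""
    mu = mean(baseline)
    sigma = stdev(baseline) or 1.0  # avoid divide-by-zero on flat baselines
    return abs(todays_count - mu) / sigma > threshold

# A user's typical daily login counts over the past week...
normal_week = [4, 5, 3, 6, 4, 5, 4]

print(is_anomalous(normal_week, 5))    # ordinary day: False
print(is_anomalous(normal_week, 120))  # sudden credential-stuffing spike: True
```

Real behavioral-protection tools track far richer signals (keystrokes, login locations, process trees), but the principle is the same: the model knows what “normal” looks like, so it can see what we can’t.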

3. Batting down bots | It takes one to know one, and the same may be said of machines. AI can learn the difference between good bots and bad bots, reducing false positives and upping security. Dovetailing into the anomaly-spotting abilities mentioned above, Mark Greenwood, head of data science at Netacea, explains, “To truly differentiate between [bots], businesses must use AI and machine learning to build a comprehensive understanding...By looking at behavioral patterns...we can unpick the intent of their website traffic, getting and staying ahead of the bad bots.”
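Greenwood’s point about behavioral patterns can be illustrated with a toy rule-based scorer – a hand-written stand-in for the trained models he describes. The features and cutoffs here are entirely hypothetical.

```python
# Hypothetical bot-scoring sketch: judge clients by how they behave,
# not by who they claim to be.
def bot_risk(session):
    """Count risky behavioral signals in a client session."""
    signals = 0
    if session["requests_per_min"] > 60:    # inhuman request rate
        signals += 1
    if not session["honors_robots_txt"]:    # ignores crawl rules
        signals += 1
    if session["pages_per_session"] > 200:  # scraping-scale traversal
        signals += 1
    return signals

good_bot = {"requests_per_min": 30, "honors_robots_txt": True, "pages_per_session": 50}
bad_bot = {"requests_per_min": 400, "honors_robots_txt": False, "pages_per_session": 5000}

print(bot_risk(good_bot))  # 0 signals: likely a well-behaved crawler
print(bot_risk(bad_bot))   # 3 signals: likely malicious
```

In production, machine learning replaces the hand-picked thresholds, learning these boundaries from millions of sessions – which is exactly what keeps false positives down.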

4. Responding at scale | Today’s devices crank out data by the petabyte, an amount humans can’t possibly sift through. AI, with its machine learning capabilities, enables small teams to respond at scale by automating remediation and performing autonomous threat detection, investigation, and response measures. Even if we weren’t in the middle of a never-ending cyber talent crisis, the malicious output made possible by AI far exceeds the capacity of even the largest SOCs.
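Automated remediation at its simplest means mapping each detection to a pre-approved playbook so no human has to triage every alert. Here is a minimal sketch; the alert format and action names are invented for illustration, not drawn from any particular product.

```python
# Hypothetical automated-response sketch: route each alert type to an
# ordered remediation playbook, escalating anything unrecognized.
PLAYBOOKS = {
    "ransomware": ["isolate_host", "snapshot_disk", "notify_soc"],
    "phishing": ["quarantine_email", "reset_credentials"],
    "bot_traffic": ["rate_limit_ip", "challenge_captcha"],
}

def respond(alert):
    """Return the remediation steps for an alert, falling back
    to human triage for anything automation doesn't recognize."""
    return PLAYBOOKS.get(alert["type"], ["escalate_to_analyst"])

print(respond({"type": "ransomware", "host": "srv-042"}))
print(respond({"type": "zero_day", "host": "srv-099"}))
```

The design choice worth noting is the fallback: automation handles the known patterns at machine speed, while anything novel still lands in front of an analyst.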

5. AI-driven security platforms | Solutions like Network Detection and Response (NDR), Endpoint Detection and Response (EDR), and Extended Detection and Response (XDR) are some of the most prominent ways in which the above AI security functions (and more) are being leveraged against AI-based threats. These platforms offer central management and user-friendly GUIs so that even small, still-maturing teams can make a difference. Much like traditional attack methods, AI-driven exploits aren’t picky. They’ll hit whoever is most vulnerable, and that means no organization is safe.

Who Wore It Better?

While there are many hot takes, the ultimate contest between good artificial intelligence and bad is one that will be decided battle by battle. AI cyberwarfare is a two-way street, and the technology lends itself equally to each side. And while the issue remains complicated, the cat is well and truly out of the bag.

As David van Weel, NATO’s Assistant Secretary-General for Emerging Security Challenges, stated, “Artificial intelligence allows defenders to scan networks more automatically, and fend off attacks rather than doing it manually. But the other way around, of course, it's the same game.” 

About Author:

An ardent believer in personal data privacy and the technology behind it, Katrina Thompson is a freelance writer leaning into encryption, data privacy legislation and the intersection of information technology and human rights. She has written for Bora, Venafi, Tripwire and many other sites. 
