Enhanced cyber threats and methods of attack
There are several attack methods and vectors through which threat actors can leverage gen AI to cause material disruption to their targets.
Phishing
Ever get an email from a purported Nigerian prince promising you a long-lost inheritance? This is one of the earliest iterations of phishing-based attacks. Since those early versions, phishing has become increasingly sophisticated (and lucrative for hackers). Bad actors can create a lot of havoc through phishing by prompting people to give away their information, whether it’s their identity or bank account. There are also sophisticated ways to weaponize email in order to harvest credentials and infiltrate software a victim has access to, whether as an employee of a company or as an individual.
One reason traditional phishing attacks are so popular is that phishing is a numbers game. Attackers can send email blasts to hundreds of thousands of people or more, trying multiple sites with tweaks and variations. While most people won’t fall for the scam, some will click through. And that’s the yield attackers are after, because it generates economic value. Moreover, more sophisticated attacks use social engineering and harvested contextual details to better obscure their intent (for example, attackers can make emails appear to come from a connection the victim trusts, with context related to their line of work or role).
Add gen AI to the equation and there are a few ways to make this scam more advanced. First, gen AI allows attackers to automate email blasts, so that instead of sending thousands of emails it becomes possible to send billions. Second, through learning, it becomes possible to make the emails far more believable: they can be fine-tuned to sound more relatable and human. In addition, gen AI is quickly advancing in the more sophisticated realms of voice, image, and video simulation (deepfake technology).
In this way it will become more difficult for recipients to distinguish whether the voicemail or message they have received is truly from a friend, family member, or acquaintance. In a universe where it’s nearly impossible to discern what’s real and what’s not, how do you protect yourself? The market is ripe for a solution that goes beyond current anti-phishing software.
Phishing is all about credibility. It's rare for someone to follow a link from an email claiming they've been chosen to inherit a vast fortune from a mysterious Nigerian prince. However, many people would engage with a reputable businessperson who appears to have a legitimate LinkedIn profile and a professional company website, especially if this person is willing to have live Zoom or phone conversations. The risk increases significantly if someone believes they know the person on the other side of the line, such as their boss.
The use of deepfake technology, combined with the ability to quickly generate convincing websites through no-code generative AI tools, poses a significant threat to organizations. For instance, just last week, a scammer used deepfake technology to impersonate the CFO of a leading company and deceive a finance employee into transferring $25 million during a fake Zoom meeting. Traditional email-protection solutions are obsolete against this level of threat.
This situation presents two key opportunities for addressing the problem. In the short term, deepfake-detection technologies can provide a temporary fix, so long as the underlying deepfake models have flaws. For a longer-term solution, developing watermarking technologies that can verify media was produced by a certified device is essential.
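To make the watermarking idea concrete, here is a minimal sketch, assuming a capture device signs media bytes with a private key held in secure hardware and a verifier checks that signature against the maker’s published public key. It uses Ed25519 from the Python `cryptography` package; the workflow and key handling here are illustrative assumptions, not an existing standard.

```python
# Minimal sketch of signature-based media provenance (illustrative only).
# Assumes a capture device signs each file it produces with a private key,
# and a verifier checks the signature against the maker's published public key.
# Requires the 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Device side: sign the raw media bytes at capture time.
device_key = Ed25519PrivateKey.generate()   # would live in secure hardware
media_bytes = b"...raw image or video bytes..."
signature = device_key.sign(media_bytes)

# Verifier side: check the signature with the device's public key.
public_key = device_key.public_key()        # would be distributed via a registry
try:
    public_key.verify(signature, media_bytes)
    print("Media verified: produced by a certified device.")
except InvalidSignature:
    print("Verification failed: media may be synthetic or altered.")
```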
Viruses and malware
Another common form of attack targets the endpoint itself, whether that’s a smartphone, a desktop, a laptop, or a server. An endpoint is contaminated when malware prompts the device to install malicious code. In order to remain undetected by protective systems on the endpoint, hackers evolve known malicious code through various renditions, to the point that it is no longer recognized as malicious by defensive cyber software. Traditionally, this process takes time, and many attackers choose to simply go to the dark web, where they can purchase malware developed by others, to contaminate a site, a company, or an individual.
Endpoint protection software works by detecting anomalies in code and its mutations. If it determines that the code is harmless, it creates a static signature confirming that the code has been checked and is safe.
However, with gen AI, malware can be trained to morph so quickly that it remains undetected by the underlying endpoint protection platforms. While this “polymorphic malware” essentially contains the same DNA as traditional malware, it can generate mutations at much greater velocity and magnitude. As a result, a static signature that is valid at one moment may no longer be valid ten minutes later. And that means that with the development of polymorphic malware, the static signature detection used in traditional antivirus software will become obsolete.
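To see why static signatures break down, here is a minimal sketch using only Python’s standard library: a static signature is just a fingerprint of exact bytes, so even a trivial, behavior-preserving mutation produces a new fingerprint. The “payloads” below are harmless placeholder strings, not real malware.

```python
# Minimal sketch: why static (hash-based) signatures break under mutation.
import hashlib

def static_signature(code: bytes) -> str:
    """A static signature is just a fingerprint of the exact bytes."""
    return hashlib.sha256(code).hexdigest()

original = b"do_evil(); exfiltrate(data);"
# A behavior-preserving mutation: same logic, different bytes
# (an inserted no-op changes the fingerprint entirely).
mutated = b"do_evil(); x = 0; exfiltrate(data);"

known_bad = {static_signature(original)}  # the AV vendor's blocklist

print(static_signature(original) in known_bad)  # True  -> detected
print(static_signature(mutated) in known_bad)   # False -> slips through
```

A polymorphic strain automates that mutation step at machine speed, so each copy arrives with a fingerprint the blocklist has never seen.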
Even as traditional antivirus software gave way to “next-generation antivirus” and subsequently to extended detection and response (“XDR”) software, those platforms will need to leapfrog further to handle the velocity of polymorphic malware.
In addition to amplifying traditional threats, gen AI poses new challenges to other existing security systems. Here we’ll look at the supply chain, as well as attack surface management and pen testing.
Supply Chain
Imagine that you are a subscription-based customer of a software company that gets attacked. How can you make sure that, as a customer, you are protected? From the company’s perspective, how can you make sure that every component inside your organization is secure and isn’t vulnerable?
The same goes for your code base, as evidenced by the SolarWinds and Log4j attacks. How do you make sure the code base you’re using is protected? This could be a SaaS product you have bought, or open source components that others are using as part of their code base. Indeed, the majority of written code is built on open source material. So how do you make sure the open source code you’ve used as part of your software is protected? How do you make sure you don’t have vulnerabilities?
The supply chain attack vector is top of mind right now when it comes to gen AI. For example, GitHub reports that 46%(!) of the code written by developers using Copilot is generated by the tool. This creates somewhat of a black box problem, because developers don’t entirely understand how their code was produced. The code base could harbor vulnerabilities that developers would be unable to solve; in fact, developers may not even be aware of the vulnerability in the first place.
The inherent skill gap also creates a problem. There is now a layer of abstraction in a process that is orchestrated by machines. If 50% of your code base is auto-generated, you may have no idea how to solve emerging problems. In other words, if there’s a vulnerability within your code base, how will you know where the root of the problem is? Without being able to pinpoint the specific vulnerability, there is no way to solve the issue short of replacing significant portions, or the entirety, of your code. And of course, as mentioned, you may not be able to detect the vulnerability in the first place.
On a positive note, there is an opportunity to improve supply chain security with gen AI. Today, security patching and remediation are a nagging problem for security organizations, causing endless security tickets and a never-ending marathon of updates, patches, and remediation. Using gen AI, one can further automate this process and bring it up to speed with the velocity of code fixes.
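As a sketch of what gen-AI-assisted remediation could look like, the snippet below feeds a vulnerability advisory and the offending code to an LLM and asks for a patch. It uses the OpenAI Python client as one possible backend; the model name, prompt, and inputs are illustrative assumptions, and any suggested patch would still need human review and testing.

```python
# Sketch: gen-AI-assisted remediation. Given an advisory and the offending
# snippet, ask an LLM to propose a patch. Requires the 'openai' package and
# an OPENAI_API_KEY in the environment; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

advisory = "CVE-XXXX-YYYY: SQL injection via unsanitized user input."  # example
snippet = "cursor.execute(\"SELECT * FROM users WHERE name = '\" + name + \"'\")"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a security engineer. Propose a minimal patch."},
        {"role": "user",
         "content": f"Advisory: {advisory}\nVulnerable code:\n{snippet}"},
    ],
)

# The suggested patch still needs human review, tests, and a staged rollout.
print(response.choices[0].message.content)
```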
Possible solutions: The industry will have to standardize around code attribution and software bill of materials (SBOM) solutions to help assure the integrity of code. Moreover, vulnerability scanners will need to add explainability and remediation capabilities to address the remediation skill gap.
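As one concrete illustration of the SBOM direction, here is a minimal sketch that checks a dependency list against the public OSV vulnerability database (osv.dev). The dependency list is a made-up example, and error handling is omitted for brevity; a real scanner would parse a CycloneDX or SPDX SBOM and, per the skill gap above, explain and remediate findings rather than just list them.

```python
# Minimal sketch: check an SBOM-style dependency list against the public
# OSV vulnerability database (https://osv.dev). Requires 'requests'.
import requests

# Made-up example dependencies; in practice these come from an SBOM or lockfile.
dependencies = [
    {"ecosystem": "PyPI", "name": "jinja2", "version": "2.4.1"},
    {"ecosystem": "PyPI", "name": "requests", "version": "2.31.0"},
]

for dep in dependencies:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={
            "version": dep["version"],
            "package": {"name": dep["name"], "ecosystem": dep["ecosystem"]},
        },
        timeout=10,
    )
    vulns = resp.json().get("vulns", [])
    status = f"{len(vulns)} known vulnerabilities" if vulns else "clean"
    print(f'{dep["name"]}=={dep["version"]}: {status}')
    for v in vulns[:3]:  # show the first few advisory IDs
        print(f'  - {v["id"]}: {v.get("summary", "")[:80]}')
```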
Attack Surface and Pen Testing
We've talked about gen AI’s impact on endpoint protection, email phishing attacks, and contamination. Attack surface management and pen testing are one way to make sure that there are no open attack vectors or attack surfaces like those above within a given organization.
By testing the perimeter of an organization’s digital footprint to see whether its virtual walls can be infiltrated, organizations can make sure the digital assets within the company are secure. This simulation of how an attacker might attempt to access an organization’s digital assets is called a pen test, or penetration testing; in a broader sense, an organization orchestrating attacks on its own footprint is sometimes labeled external attack surface management, or red-teaming (analogous to red teams in a capture-the-flag simulation).
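To ground the idea, here is a minimal sketch of the most basic building block of external attack surface mapping: checking which common ports on a host accept TCP connections. It uses only Python’s standard library and should only be run against hosts you own or are authorized to test; real tooling layers service fingerprinting and exploit checks on top of this primitive.

```python
# Minimal sketch: the most basic building block of attack surface mapping,
# a TCP connect scan of a few common ports. Standard library only.
# Only run against hosts you own or are authorized to test.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3306: "mysql", 3389: "rdp"}

def scan(host: str, timeout: float = 1.0) -> list[tuple[int, str]]:
    """Return (port, service) pairs on `host` that accept a TCP connection."""
    open_ports = []
    for port, service in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append((port, service))
        except OSError:  # refused, filtered, or timed out
            pass
    return open_ports

if __name__ == "__main__":
    for port, service in scan("127.0.0.1"):
        print(f"open: {port}/{service}")
```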
Gen AI increases the threat because agents can be programmed to penetrate an organization more quickly and easily. Consider this: typically, an external attacker will attempt to find vulnerabilities by way of the easiest target. Then, they’ll use trial and error to slowly make their way further into the organization until they reach something valuable. It’s not a “one-and-done” attack, but rather a multi-step process.
With gen AI, the whole process can be automated and expanded. Instead of one method at a time, gen AI can attempt multiple methods in parallel to find a vulnerability. And once it infiltrates the main system, it can continue to dig into the organization at an accelerated pace, automating each step as it heads toward critical assets, doing far more damage far faster than today.
In conclusion, we believe that gen AI technology has the potential to disrupt most parts of the existing cybersecurity market. The scale and number of threats, such as phishing and malware, are going to rise dramatically, raising the need for more sophisticated defense technologies. Conversely, remediation solutions are also likely to leverage gen AI to improve dramatically, resulting in higher walls of defense and fewer mistakes.
So far, we’ve examined how gen AI impacts traditional cybersecurity segments. In the second part of this blog, we’ll discuss new attack surfaces and the broader impact LLM usage introduces to the security, trust, and privacy community.
Thank you to Dan Pilewski whose research project as an intern at Cervin provided the foundation for this blog.