How generative AI can help and hurt cybersecurity
From countless news articles and posts on your social media feed to all-new tools built into your favourite software, artificial intelligence is everywhere. Although the technology is not new, generative AI has created a recent buzz since the November 2022 release of ChatGPT, a chatbot built on a large language model (LLM) that generates responses to users' prompts.
Almost immediately after ChatGPT's release, similar generative AI tools launched, such as Google's Bard and Microsoft's Copilot, and the use of AI to generate text, images, video, code, and more has spread like wildfire.
As AI has grown in popularity, concerns are being raised about the risks of using the technology. Cybercriminals have already found ways to exploit it, from exfiltrating data through AI tools to turning to platforms like WormGPT, an AI model trained on malware-creation data and used to generate malicious code and other ill-intended output.
Why generative AI security matters now
Artificial intelligence dates back to the 1960s and the creation of the first AI chatbot, ELIZA, developed by Joseph Weizenbaum. So why is generative AI so popular now, more than 50 years later? The introduction of ChatGPT in late 2022 accelerated the development of generative AI and put a powerful tool in the public's hands. With many software corporations developing their own AI programs, security teams may be caught off guard when these tools are released and may not know how to combat the risks they present.
Microsoft Copilot, currently in an early-access phase, has the benefit of learning from your organisation's data in addition to its underlying LLM. Use cases include joining your Teams meetings to take notes in real time, triaging emails in Outlook and drafting replies, and even analysing raw data in Excel for you.
Copilot is being called the most powerful productivity tool on the planet, and if you've ever used generative AI tools, you can probably see why. Imagine a ChatGPT-like assistant built into all of your Office apps, including Word, PowerPoint, Excel, and Microsoft Teams.
Beyond Copilot's abilities, generative AI offers security teams several benefits, including enhanced cybersecurity operations, threat detection, and defence mechanisms.
Other beneficial uses of generative AI
Blue-team defence: Just as threat actors may use AI tools for harm, businesses can use them for good.
Malware analysis: Generative AI can assist in generating variants of known malware samples, aiding cybersecurity professionals in creating more comprehensive malware detection and analysis systems.
Deception and honeypots: Generative AI can help create realistic decoy systems, or honeypots, that appear enticing to attackers. This allows security teams to monitor and analyse attack techniques, gather threat intelligence, and divert attackers away from real assets (see the sketch after this list).
Automated response generation: When an attack is detected, generative AI can assist in generating automated responses to mitigate the threat, including drafting firewall rules, deploying countermeasures, and isolating compromised systems. It also saves time for analysts responding to threats.
Adaptive security measures: Generative AI can aid in developing security mechanisms that adapt to evolving threats. By continuously learning from new attack techniques, these systems can evolve and improve their defence strategies over time.
Visualising attacks: Generative AI can assist in visualising complex attack patterns and behaviours, making it easier for security analysts to understand how attacks are executed and identify patterns that might not be immediately apparent.
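To make the honeypot idea from the list above concrete, here is a minimal sketch in Python of a decoy service that logs connection attempts and captured payloads. It is an illustrative assumption rather than a production honeypot: the port, fake banner, log path, and the log_event and run_honeypot helpers are all invented for this example, and generative AI's role in practice would be producing the convincing decoy content (banners, fake files, plausible replies) rather than the listener itself.

```python
import datetime
import socket

# Illustrative values only -- a real deployment would be hardened and isolated.
LISTEN_PORT = 2222                     # decoy port posing as a forgotten SSH service
BANNER = b"SSH-2.0-OpenSSH_8.2p1\r\n"  # fake banner; gen AI could craft richer lures
LOG_FILE = "honeypot.log"

def log_event(message: str) -> None:
    """Append a timestamped entry to the honeypot log for later analysis."""
    with open(LOG_FILE, "a") as log:
        log.write(f"{datetime.datetime.utcnow().isoformat()} {message}\n")

def run_honeypot() -> None:
    """Accept connections, record what the attacker sends, then disconnect."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", LISTEN_PORT))
        server.listen()
        log_event(f"decoy listening on port {LISTEN_PORT}")
        while True:
            conn, addr = server.accept()
            with conn:
                log_event(f"connection from {addr[0]}:{addr[1]}")
                conn.sendall(BANNER)
                conn.settimeout(10)
                try:
                    data = conn.recv(1024)  # first bytes the attacker sends
                    log_event(f"payload from {addr[0]}: {data!r}")
                except socket.timeout:
                    log_event(f"no payload from {addr[0]}")

if __name__ == "__main__":
    run_honeypot()
```

Every entry in the log is threat intelligence: who is probing you, from where, and with what, gathered while the attacker wastes time on a system that holds nothing real.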
There are two sides to every story. While generative AI offers many benefits beyond those listed above, there are also challenges and risks associated with the technology.
Combating the malicious side of AI in cybersecurity
Generative AI introduces several security risks that need to be carefully considered when implementing and using the technology. According to research conducted by Forrester, security is a top hurdle for companies adopting AI, with 64 per cent of respondents reporting they don’t know how to evaluate the security of generative AI tools.
One of the top concerns about Microsoft Copilot is its security model: Copilot can access all the files and information a given user can, and most users in an organisation already have access to more information than they should. As adoption of AI tools grows, people may become complacent and over-trust AI to do security checks they should be doing themselves. For example, an employee could ask Microsoft Copilot to generate a proposal from existing documents and meeting notes, saving hours of work. They might skim the result and think it is fine, but sensitive information from the original documentation could sneak its way in if the output isn't thoroughly reviewed.
Other risks associated with generative AI
Cyberattack campaigns on demand: Attackers can harness generative AI to automate the creation of malware, phishing campaigns, or other cyber threats, making it easier to scale and launch attacks.
Bypassed guardrails: AI tools run the risk of being manipulated to produce incorrect or malicious outputs. Some tools have ethical safeguards in place to help combat improper use, but threat actors have found ways around them.
Leaking sensitive information: Generative AI models learn from large datasets, which might contain sensitive data, depending on what information is shared with them. If not properly handled, there's a risk of inadvertently revealing confidential information through generated outputs. Many AI services also retain what users enter, making your sensitive data accessible to anyone who gains access to your account (a simple redaction sketch follows this list).
Intellectual property theft: Generative models are often trained on massive amounts of publicly available information, which can include exposed proprietary data. There's a real risk that generative AI could infringe on others' intellectual property rights and expose its users to lawsuits.
Identity risk and deepfakes: Generative AI can be used to create convincing fake images, videos, or audio clips, enabling identity theft, impersonation, and deepfake content that spreads misinformation. The tools can also make phishing campaigns read as more human and better tailored to their targets. An AI-generated image of the Pope wearing a Balenciaga jacket went viral before its origin was revealed, showing just how believable AI imagery and deepfake videos have become.
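To ground the data-leakage risk above, here is a minimal sketch in Python of the kind of pre-prompt redaction a security team could place between employees and an external generative AI service. The patterns and the scrub_prompt helper are assumptions invented for this example; a real control would rely on vetted data-loss-prevention tooling rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only -- real DLP tooling covers far more cases.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely sensitive values with placeholders before the prompt
    leaves your environment for an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Draft a renewal email for jane.doe@example.com, card 4111 1111 1111 1111."
    print(scrub_prompt(raw))
    # Draft a renewal email for [REDACTED EMAIL], card [REDACTED CARD].
```

The control point is what matters here: whatever the filter looks like, sensitive values should be stripped or blocked before a prompt ever reaches a tool your organisation does not control.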
Confidently navigate the AI playing field
If you wait until a data breach occurs to start implementing security measures around AI, you'll already be behind. One of the first steps leaders can take to address concerns about employees using generative AI is to train them properly on what is acceptable to share and what isn't. Some people may think it harmless to include customer data in ChatGPT prompts, for example, but this is exactly the kind of action threat actors are hoping your employees take. All it takes is one employee entering sensitive information into a fake ChatGPT site to put your company at risk.
As new generative AI tools are released, companies must educate their teams on how to properly use them and stay aware of the security concerns as they are discovered. Having a Data Security Platform (DSP) in place can also prevent employees from having access to sensitive data they shouldn't have in the first place.
In closing, there is no denying that AI has taken the world by storm, and the technology will continue to evolve in the years to come. Understanding the benefits and risks of AI, training staff on how to use AI tools safely and effectively, and setting clear guidelines for what data can and cannot be shared are essential first steps for organisations that want to use AI securely.
(The author is Country Manager of Varonis)