In July, a cybercrime tool called WormGPT was discovered being used in business email compromise (BEC) attacks. The tool uses generative artificial intelligence (AI) to help attackers refine the emails used in phishing and BEC campaigns. Using generative AI to refine social engineering campaigns is becoming increasingly common, but WormGPT takes it to the next level. The tool markets itself as a blackhat alternative to GPT models, designed specifically for malicious activities. In short, it is like ChatGPT without the pesky ethical barriers that might stop an attacker from creating a social engineering campaign with the tool.

Emails from WormGPT are persuasive and strategically cunning, which shows its potential for developing sophisticated phishing and BEC attacks.

It has been a little under a year since ChatGPT made its public debut and carried generative AI from a topic of discussion inside technology startups to dinner tables everywhere. At the time of its debut, it was the fastest-growing application in history. While its impact is still being measured, and will continue to be, there is little doubt about its rapid ascent. We are still in the early stages of the transformation generative AI tools will bring to industry and society, but it is hard to overestimate their potential for both merit and malice.

When it comes to malice, the world is already seeing generative AI’s impact on social engineering and fraud. Many are reporting a dramatic increase in online fraud and social engineering attempts that can be directly attributed to the ease and availability of generative AI tools. Generative AI makes social engineering campaigns almost effortless to create and tougher to spot, because these tools clean up the telltale mistakes, like typos and bad grammar, that once gave attackers away. This is a trend that is likely to continue.

When we discuss generative AI, it is important to note both the promises and the perils of these tools. The promises are vast. They can improve organizational efficiency and the quality of team members’ work. Organizations that effectively harness the power of generative AI will gain an increasingly broad advantage over their competitors. That is why building an organizational AI strategy is so important. However, organizations must weigh the promises alongside the perils generative AI could bring. It is as important to understand the risks of these tools as it is to understand their capabilities.

There are five major risks I want to highlight around generative AI tools:

Cybersecurity Risk: As mentioned above, generative AI will make social engineering campaigns easier to create and tougher to spot. The number of cyberattacks every organization faces is going to increase, and the sophistication of those attacks is going to grow as well. Organizational networks will need to get stronger, with an eye toward building zero trust, incorporating robust employee awareness training, and adopting cybersecurity-focused AI tools to help improve defenses.

Bias & Disinformation Risk: The bias inherent in AI systems has been widely documented. Their potential to harm individuals and marginalized groups has raised enough concern that governments and regulatory bodies are starting to weigh in. Any AI system used to rank a person, judge a person or their output, or offer an opportunity to one person over another requires human oversight, because a margin for error exists and the harm that error could cause is very real.

Data Leakage Risk: In May, electronics giant Samsung banned internal employees from using ChatGPT after sensitive internal data was accidentally leaked through the tool. If it has happened to Samsung, it has probably happened to others. Everyone using generative AI tools must consider that when data is entered into a prompt, there is very little visibility into what happens to that data, especially in publicly available tools such as ChatGPT. Will that data be used to train the model? Who can see the data? Where does it go? That is not always entirely clear.

Transparency Risk: AI systems, especially deep learning systems like generative AI, are black boxes. It is difficult to understand how they derive their decisions and what causes the system to generate a specific piece of content based on a prompt or output a decision based on a set of criteria. This can be a bit disquieting. It also highlights why it is important to be transparent when an AI tool is in use.

Infringement Risk: There are mounting legal cases around copyright and generative AI, and growing alarm about infringement and ownership. Because the companies behind these AI tools scraped the internet to train their models, content creators, such as writers, artists, comedians, musicians, poets, bloggers, journalists, actors, and many more, are concerned that their intellectual property has been taken from their control and used to train a tool that could one day replace them. While you are unlikely to receive a copyright infringement notice for creating an image in MidJourney and posting it on the internet, the murkiness of intellectual property rights and AI certainly needs to be a consideration for any organization building an AI strategy.

When examining the risks around AI tools, you must consider how to mitigate them as you build an organizational AI strategy. Below are five ways to do so:

Build an AI-Focused RIVERS OF INFORMATION®: Ensure you keep up to date on the latest in AI by consuming high-quality content about AI trends. This requires building a strong RIVERS OF INFORMATION® filled with knowledgeable and informative sources reporting on AI. FPOV can help guide you in the process of building such a River, either by offering you sources around AI to add to your own River or by offering you the tools and processes to build an AI-centered River for both you and your team members. Also, make sure you are trying out the tools yourself. Many generative AI tools are low cost or free. Roll up your sleeves and personally try them out. It is the best way for you to understand a rapidly evolving field that will become integral to your organization, no matter what industry you are in.

Incorporate AI into Your Cybersecurity: While the bad guys are using AI tools to develop next-level social engineering campaigns, the good guys are incorporating AI into cybersecurity tools to bolster network defenses and assist overworked security professionals. One example is Microsoft Security Copilot, which the company announced earlier this year. Security Copilot is designed to help network defenders understand the threats outside and inside their networks with greater clarity, while offering tools to help them explain defense efforts at the C-suite and board level. This is just one example. Countless other vendors are incorporating AI into new or existing tools to help mitigate increasing digital threats. Understand how these tools can improve your cybersecurity and utilize them when you can. Also, as I mentioned above, it is critical to educate your team members about the novel threats from sophisticated social engineering attacks generated using artificial intelligence. Finally, ensure you have a robust incident response plan in place. You do not want to get caught off guard if the worst happens, which, at some point, it likely will.

Build Human Oversight into AI Processes: Ensure that any AI tool in use, especially one involved in processes like hiring and job performance, has proper human oversight. Make sure that behind any AI-driven decision-making, a human is checking the decisions for error, potential bias, or unintended discrimination. Also, when using AI tools, it is important to be transparent. If you use ChatGPT to help you write a blog, do you need to disclose this fact? That is up to you. (In the spirit of disclosure, this one did not.) However, if you are using an AI tool in the hiring process, that is probably an area where you should be forthcoming with job candidates, unless disclosure could put you in jeopardy. It is a good idea to err on the side of being more transparent when using these tools.

Develop an AI Acceptable Use Policy: In an organizational AI strategy, it is critical to guide your team members on how they can and cannot use AI tools. This is where an AI Acceptable Use Policy comes in. Make sure your team members understand the risks posed by AI tools, such as data leakage and copyright concerns. That way they know how to protect the organization while still utilizing AI tools to improve their output and efficiency in day-to-day tasks. We have developed an AI Acceptable Use Policy guide, and we can share it with you if you reach out to us at info@fpov.com. It will help you get started building a policy within your own organization.

Develop an AI Ethics Policy: An AI Ethics Policy is the next level beyond an AI Acceptable Use Policy, but it can provide immense value if you take the time to craft one. It can help you understand your motivations for using AI while also guiding you in areas like transparency and combating potential AI bias. This could even be a document you share with the world: you can tell your constituents how you plan to use AI and how you plan to combat the risks its use could create. Much like a data privacy policy today, an AI ethics policy may become standard in the near future.

AI tools will be transformative within your organization. That is almost without doubt, because eventually these tools will transform nearly everything. It is important to learn to utilize their power. However, it is also important to understand the risks your organization faces from using AI. Let’s be clear: you need an organizational AI strategy. But that strategy must counter the risks of AI even as it reaps the rewards.

About the Author

Hart Brown is the CEO of Future Point of View and the Security and Risk Practice Lead. He is a widely known expert and trusted advisor in the governance of risk and resilience with over 20 years of experience across a broad spectrum of organizations in both the public and private sectors. He is a Certified Ethical Hacker and a Qualified Risk Director. Learn more about Hart Brown.