Share on FacebookShare on TwitterShare on LinkedinShare via emailShare via Facebook Messenger

The Four Horsemen of Generative AI (and How to Avoid a Nightmare)

Being Chief Information Security Officer comes with a lot of responsibility. On a daily basis, I am responsible for protecting our users, the product we make, the company, and the data that lives at the center of our work, all while building a world-class system that operates around the clock to seek out threats and annihilate them before they can cause any harm. I love that I get to do this work. There are a core group of inherent threats and risks that we assume in this role, but that’s what keeps things exciting: outsmarting bad actors and finding better ways to protect our users. Is it scary? It can be—but it shouldn’t ever be a nightmare.

Someone recently asked me what keeps me awake at night (aka what are my nightmares made of), and it got me thinking about what the real and perceived threats are in our current digital age. As a result of a lot of careful planning and hard work from my team at Grammarly, the list of things that keep me awake is very, very short. But the game changer, and we all know it, is generative AI. 

Join Grammarly
Shape the way millions of people communicate.

Generative AI and the fear of the unknown

Generative AI feels like it is everywhere because it actually is everywhere. Less than one year ago, GPT reached one million users in a fraction (1/15th to be exact) of the time as its closest comparison, Instagram. 

Great, so it’s everywhere. Now what? IT leaders around the world are now faced with an entirely new set of really scary possibilities that we have to prepare to defend against. This threat vector is different. 

You may not know the nitty-gritty details of how we become SOC2 compliant, but I am willing to bet that you are aware of the dangers of generative AI and the threats posed by training on your data. Right? Right. Generative AI is not only in the products we use but it is also in our news cycle, and top of mind for anyone who is remotely interested in a world that uses technology. But it shouldn’t be scary—at least not in the way you’re being told to be scared. 

Not so scary: Generative AI’s overhyped threats

We’re seeing growing worries around the threats of generative AI, some of which are credible, but many I believe are overhyped today. If I am going to call the real threats the Four Horsemen, let’s call these three the Three Stooges of generative AI: data leakage, IP exposure, and unauthorized training. Before you disagree, allow me to explain why I think these three points are distracting you from the real challenges we are facing today: 

  • Data leakage: Without allowing the third party to train on your confidential data, data leakage attacks remain theoretical against the well-known large language models (LLM) out there, with no large-scale demonstration of practical attacks
  • IP exposure: Barring any training, IP exposure risk remains similar to non-generative-AI-powered SaaS applications, such as online spreadsheets
  • Unauthorized training: Allowing users to opt out of their data being used to train generative AI is becoming an industry standard—mitigating sensitive data training concerns that were prevalent mere months ago 

The Four Horsemen of generative AI

What should you really be focusing on to make sure your organization is prepared to handle the new reality we are living in? I’ll warn you—this is where it actually gets scary. 

Grammarly has been the leading AI writing assistance company for over 14 years, and my work these past few years has been to help our company get ghost-proof around credible threats. I call these nightmare-level threats the Four Horsemen of Generative AI: Security Vulnerabilities, Third Party Risk, Privacy and Copyright, and Output Quality. 

Security vulnerabilities 

With so many people jumping on the generative AI bandwagon and coming up with different models, we find ourselves facing new security vulnerabilities—from the predictable to the frighteningly easy to miss. 

LLMs are susceptible to an emerging array of security vulnerabilities (check out OWASP LLM Top 10 for a comprehensive list), and we need to ensure that every perimeter remains fortified. A reliable LLM provider must explain what first- and third-party assurance efforts, such as AI red-teaming and third-party audits, have gone into their offerings to mitigate LLM security vulnerabilities. Do your due diligence. A secure perimeter means nothing if you leave the locks open. 

Privacy and copyright

With the legal and privacy environment around generative AI evolving, how safe are you from regulatory action against your provider or yourself? We have seen some pretty strong reactions in the EU, based only on the provenance of training data sets. Don’t find yourself waking up to a nightmare for you and your legal team. 

Generative AI tools are based on patterns in data, but what happens when that pattern is lifted and shifted to you from someone else’s work? Are you protected, as the user, if someone accuses you of plagiarism? Without the right guardrails in place, this could go from inconvenient headache to terrifying reality. Protect yourself and your customers from the start by looking into provider copyright commitments. 

LLM third-party provider risks

Many of the third-party LLMs are new and will have some access to confidential data. And, while everyone is integrating with these technologies, not all of them are mature. Large players such as cloud providers that are already part of our risk profile are able to help us mitigate risk in a way that smaller SaaS companies are not able (or willing) to. If I could give you one piece of advice, I would say to be careful about who you invite to the party. Your responsibility to your customer begins long before they use your product. 

Output quality 

Generative AI tools are highly responsive, confident, and fluent, but they can also be wrong and mislead their users (e.g. a hallucination). Make sure you understand how your provider ensures the accuracy of generated content. 

What’s worse? Generative AI tools can create content that may not be appropriate for your audiences, such as words or expressions harmful to certain groups. At Grammarly, that is the worst outcome our product can have, and we work very hard to look out for it and protect against it. 

Make sure you know what guardrails and capabilities your provider has in place to flag sensitive content. Ask your provider for a content moderation API that enables you to filter content that is not appropriate for your audience. Your audience’s trust depends on it. 

Don’t run from it: Move with confidence toward generative AI 

Build the best in-house security squad possible

Invest in a great in-house AI security team, similar to the cloud security teams we have all built in the past 10 years as we embraced cloud computing. Internal expertise will help you outline how each of these real threats might relate to your business/product/consumer, and which tools you will need to properly protect yourself. 

Train your team of experts. And then train them to red-team. Have them run through AI model–based attacks (e.g., jailbreak, cross-tenant breakout, and sensitive data disclosures, etc.) and tabletop exercises of high-impact scenarios, so you know how to handle the threats that are lurking. 

Empower (and arm) your employees 

As you bring generative AI into the enterprise, consider your employees as your second line of defense against the security threats outlined. Enable them to help protect your company by providing generative AI safety training and clear guidance on an acceptable use policy. 

Research shows that 68 percent of employees admit to hiding generative AI use from their employers. Pretending otherwise will not make the Four Horsemen of generative AI disappear. Instead, I recommend that you build a paved road that allows the Horsemen to bypass your company. 

To learn how we built that road at Grammarly, please check out my session at this year’s Gartner IT Symposium/XPo. In my talk, I will cover a detailed framework for safe generative AI adoption. Our white paper on this topic will be released on October 19.

To learn more about Grammarly’s commitment to responsible AI, visit our Trust Center.

Your writing, at its best.
Works on all your favorite websites
iPhone and iPad KeyboardAndroid KeyboardChrome BrowserSafari BrowserFirefox BrowserEdge BrowserWindows OSMicrosoft Office
Related Articles
Writing, grammar, and communication tips for your inbox.