In today’s fast-paced digital world, the ability to communicate quickly and effectively is paramount. As a result, more organizations are turning to AI-driven platforms to enhance their messaging and content. But although AI can generate polished text, it occasionally struggles to grasp context, raising questions about its reliability. Striking a balance between harnessing AI’s potential and avoiding these contextual pitfalls is a significant challenge.

Grammarly’s research and engineering teams are committed to continuously improving AI solutions by refining models to better understand the nuances of language and context. At the same time, they are keenly aware of AI’s limitations. This awareness allows them to help organizations using Grammarly’s AI tools avoid the ethical dilemmas that arise from applying AI in situations where it can cause harm.

Reducing harmful content isn’t black-and-white

When it comes to creating socially responsible AI solutions, one of the first steps is to eliminate the potential for AI to generate overtly toxic language. Developers typically do this with a combination of machine learning and keyword matching, for example, compiling comprehensive lists of offensive terms and feeding them into AI models with instructions to avoid them. This process helps ensure that the resulting AI-generated content and writing suggestions are free from harmful language.
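To make the keyword-matching side of this concrete, here is a minimal sketch in Python. It is an illustration of the general technique only, not Grammarly’s actual pipeline: the blocklist entries and function names are placeholders, and production systems pair curated, reviewed lists with ML classifiers rather than relying on word lists alone.

```python
import re

# Hypothetical blocklist; real systems curate far larger, human-reviewed lists.
OFFENSIVE_TERMS = {"offensiveterm1", "offensiveterm2"}  # placeholders only

def contains_offensive_term(text: str) -> bool:
    """Return True if any blocklisted term appears as a whole word."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return any(token in OFFENSIVE_TERMS for token in tokens)

def filter_suggestions(suggestions: list[str]) -> list[str]:
    """Drop AI-generated writing suggestions that contain blocklisted terms."""
    return [s for s in suggestions if not contains_offensive_term(s)]
```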

But what about writing suggestions that are appropriate in one situation but problematic in another? Imagine you’re writing a heartfelt condolence note to a colleague, and you want to improve it with AI before clicking send. A language model designed to help you write with positivity might suggest using more optimistic language even though, in this context, a positive tone would not be appropriate and could even be considered offensive to the recipient. 

Grammarly refers to communication that is sensitive in one context but not in another as “delicate text.” This might be content where people share mental health challenges or discuss the experience of losing a loved one. Although these texts may not contain offensive language, they deal with topics that are emotionally and personally charged.

Delicate text is nuanced, which means that applying AI-powered writing suggestions to this type of text could be problematic and, in the worst case, dangerous. 

Grammarly’s proprietary technology detects the nuances of harmful content 

Over the past few years, considerable research has been conducted on identifying and preventing overtly sensitive text from making its way into AI-generated output; however, few studies have addressed a broader range of sensitive content, including delicate text. Grammarly’s research team recently addressed this gap for the first time.

The Grammarly team created a taxonomy of delicate text and used expert annotators to label data accordingly. The annotators not only identified delicate text based on the meanings of individual keywords but also rated each text’s riskiness on a scale from 1 to 5. Texts that were more emotional, personal, or charged, or that referenced a greater number of delicate topics, were considered higher risk.
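As a rough picture of what such an annotation record might look like, here is a hypothetical schema in Python. Grammarly’s actual taxonomy is not public, so the topic labels below are invented for illustration; only the 1-to-5 risk scale comes from the description above.

```python
from dataclasses import dataclass

# Illustrative taxonomy labels; the real taxonomy is proprietary.
DELICATE_TOPICS = {"mental_health", "grief", "illness", "financial_hardship"}

@dataclass
class DelicateAnnotation:
    text: str
    topics: set[str]  # taxonomy categories the annotator assigned
    risk: int         # 1 (mildly delicate) to 5 (highest risk)

    def __post_init__(self) -> None:
        if not 1 <= self.risk <= 5:
            raise ValueError("risk must be on the 1-5 scale")
        if not self.topics <= DELICATE_TOPICS:
            raise ValueError("unknown taxonomy label")

example = DelicateAnnotation(
    text="I've been struggling since my father passed away last month.",
    topics={"grief"},
    risk=4,  # emotionally charged and personal, so rated near the top of the scale
)
```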

This annotated data was then used to train a model to recognize instances of delicate text. This proprietary technology, called Seismograph, is used by Grammarly’s engineering and product teams to limit instances of delicate text in contexts where it could potentially cause harm. Seismograph, as its name suggests, helps detect subtle tremors in language and minimize the potential damage delicate text might otherwise cause.
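Seismograph itself is proprietary, and its architecture has not been published. Purely to illustrate the idea of training a detector on annotated examples, here is a generic baseline using scikit-learn; the toy corpus and labels below are invented stand-ins for the expert-annotated data described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy annotated corpus: 1 = delicate, 0 = not delicate. Real training data
# would be far larger and labeled by expert annotators.
texts = [
    "I've been struggling with anxiety lately.",
    "We lost my grandmother over the weekend.",
    "The quarterly report is attached for review.",
    "Let's schedule the team offsite for June.",
]
labels = [1, 1, 0, 0]

# A simple TF-IDF + logistic regression baseline, standing in for a far more
# capable proprietary model.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Probability that a new text is delicate.
print(model.predict_proba(["My condolences on your loss."])[:, 1])
```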

How Grammarly uses its proprietary technology to reduce delicate content 

Grammarly uses Seismograph in a variety of ways to improve product offerings and ensure that AI-powered suggestions are serving up the best results. 

Grammarly uses Seismograph to test new offerings.

Grammarly uses Seismograph to test new product offerings before they are launched. Engineers and product managers use Seismograph to gain a better understanding of how various parts of the product interact with delicate text and identify and mitigate any potential risks prior to release.
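One plausible shape for such a pre-release check, sketched here as an assumption rather than a description of Grammarly’s internal tooling, is an audit that runs a suite of known delicate texts through a candidate feature and flags any high-risk cases where the feature still activates. The test cases, threshold, and function names below are all hypothetical.

```python
from typing import Callable

# Hypothetical test suite of (text, risk score on the 1-5 scale) pairs.
DELICATE_TEST_SUITE = [
    ("I'm so sorry to hear about your diagnosis.", 4),
    ("Our sympathies are with you and your family.", 5),
    ("Can you review the attached slide deck?", 1),
]

def audit_feature(feature_fires: Callable[[str], bool],
                  risk_threshold: int = 3) -> list[str]:
    """Return the high-risk delicate test cases where the feature activated anyway."""
    return [
        text
        for text, risk in DELICATE_TEST_SUITE
        if risk >= risk_threshold and feature_fires(text)
    ]

# A naive feature that fires on every text would fail this audit on both
# high-risk examples, signaling a risk to mitigate before release.
failures = audit_feature(lambda text: True)
assert len(failures) == 2
```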

Grammarly uses Seismograph to reduce harm from products in market.

Grammarly also employs Seismograph directly in the user interface to limit certain features from activating in instances with higher-risk delicate text. For example, if a manager writes a condolence note to an employee who recently lost a loved one, Seismograph might detect delicate text and limit Grammarly’s tone suggestions.
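The gating pattern this describes can be sketched as follows. Seismograph’s actual scoring interface and thresholds are not public, so the scorer, threshold, and suggestion function below are toy placeholders that only illustrate the idea of suppressing a feature when a text scores as high-risk.

```python
RISK_THRESHOLD = 3  # hypothetical cutoff; the real threshold is not public

def delicate_risk(text: str) -> int:
    """Toy stand-in for a Seismograph-style scorer returning risk on a 1-5 scale."""
    markers = ("condolence", "passed away", "so sorry for your loss")
    return 4 if any(m in text.lower() for m in markers) else 1

def tone_suggestions(text: str) -> list[str]:
    """Suppress tone suggestions when the text scores as high-risk delicate."""
    if delicate_risk(text) >= RISK_THRESHOLD:
        return []  # stay quiet rather than risk an inappropriate suggestion
    return ["Consider a more upbeat opening."]  # placeholder suggestion

print(tone_suggestions("I was so sorry to hear your father passed away."))  # []
```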

Ultimately, Seismograph provides businesses with peace of mind that Grammarly’s AI suggestions won’t cross the line and put the company at risk of offending or causing harm to a customer, a partner, an employee, or another important stakeholder.

It’s time to confidently move forward with generative AI 

The integration of technology like Seismograph into AI-driven tools like Grammarly represents a significant step forward in responsible AI practices. With Grammarly, organizations can confidently harness the power of AI for content creation and communication while minimizing the possibility of unintended harm. 

To learn more about Grammarly’s commitment to responsible AI, visit our Trust Center.