
A Framework for Industry Responsibility and Accountability in the Age of Generative AI

Last month, on the heels of announcing our new generative AI product features, I attended the SXSW conference in Austin, Texas. The energy around AI at SXSW was palpable, and the appetite for conversations about responsible AI was inspiring. I channeled this spirit during a fireside chat on responsibility at the Grammarly AI Hub, and the topic grounded my SXSW talk, “The Future of AI: From Artificial to Augmented.”

As we roll out Grammarly’s generative AI assistance to our users and customers, the responsible deployment of AI remains top of mind. I’ve previously written about Grammarly’s approach to augmented intelligence—the idea that AI is successful only when applied in a way that augments and empowers people to reach their potential. Augmented intelligence underpins our product development philosophy and informs the TRUE framework, which I shared at SXSW to demonstrate how Grammarly’s Product team approaches all the technologies we leverage—including generative AI.

Responsible AI frameworks like TRUE can guide the development of products that augment people across industries, technologies, and use cases. When these frameworks are deployed at scale, we can create a future where individuals and businesses realize their full potential, build deeper connections, and drive results. I believe the TRUE framework, which anchors on four core principles—Trust, Responsibility, User Control, and Empathy—can move us in the direction of this future.

Here’s a look at how we’re applying the TRUE framework to ensure that our deployment of generative AI fulfills our promise of augmenting our customers while prioritizing their autonomy, privacy, and security.

Trust: Putting security and privacy standards and practices first

Grammarly’s abiding commitment to privacy and security leads our TRUE framework and is the basis of our business model. Aligning our incentives with those of our customers is core to building trust. We make money by selling subscriptions—we do not sell customer data to third parties for advertising or training. In addition, we de-identify and anonymize user data, keeping it only for as long as it is needed to provide and enhance our service.

If privacy is all about how we protect users’ right to control and access their data, security is all about how we safeguard that data. We’re deeply committed to security: with fourteen years of investment under our belt, we consider it our most important product feature and the heart of our product ecosystem.

We operate with a security-obsessed culture, upholding best-in-class policies and practices such as single sign-on for all Grammarly Business accounts, third-party penetration testing, and embedding our in-house security experts with our Product and Engineering teams. These safeguards enable us to earn third-party attestations and certifications. We’re maintaining these high standards as we deploy new generative AI features in our products.

Responsibility: Building and improving AI systems that reduce bias and promote fairness

The results of a forecasting competition led by Jacob Steinhardt, a professor at the University of California, Berkeley, suggest that machine learning systems are gaining capabilities more quickly than they are gaining the ability to perform reliably on new and unexpected datasets. This means that technologists have a responsibility to implement guardrails that make AI safer.

Our Responsible AI team does this daily by building with quality datasets and employing tactics to ensure the algorithms in our products do not perpetuate bias or stereotypes. We use internal technologies to prevent Grammarly’s generative AI from interacting with sensitive topics, mitigating the risk of safety concerns.

Part of responsible AI development is recognizing that no system is perfect. That’s why we provide multiple mechanisms for users to report issues in our products and have a rigorous operational process in place to quickly and consistently address problems and improve our models.

User Control: Helping people and businesses reach their full potential while respecting their autonomy

As AI-powered communication assistance becomes more advanced, there’s no replacing the human voice. Communication is incredibly personal, and it’s vital to us that our customers always remain in control and that the technology we provide helps people and businesses reach their highest potential.

We carefully designed Grammarly’s generative AI assistance to provide options and suggestions for consideration—always ultimately deferring to the customer to decide what is best for them. This was not accidental—respecting our users’ autonomy helps ensure that our technology serves its purpose of augmenting them. With Grammarly’s generative AI, we’re helping people save time on their everyday writing, empowering them to spend more of their valuable energy devising big ideas, developing creative strategies, and collaborating on the most impactful work across their organizations.

Empathy: Walking in our customers’ shoes to understand and respond to their real needs

We believe that a crucial way to minimize unintended outcomes from new technology is to focus on addressing the specific challenges people face. We constantly seek new technologies to deepen the value we deliver to our customers, and this was one motivation for integrating generative AI into our products.

For example, Grammarly’s generative AI assistance can help people overcome the “blank page” problem—something 84% of Grammarly users told us they wanted support with. Additionally, we heard from our users that they were struggling with the volume of emails they received and were overwhelmed by the task of responding quickly and efficiently. In response to that pain point, we created generative AI features that enable people to reply to emails quickly in a contextually relevant, personalized way.

As we introduce new use cases into Grammarly’s product offerings, this intention will remain at the forefront.

The future of AI is augmented

While I’ve shared just one example of how Grammarly recently used the TRUE framework, I hope that anyone developing products with new technologies will replicate or find inspiration in this approach. I’m confident that the future of AI is in augmented intelligence—and I’m encouraged by the conversations at SXSW and across the industry about keeping people at the center of product decisions.

I want to live in a world where we use AI to make us better, empowering individuals and teams to reach their potential and drive big impacts for businesses, organizations, and industries. As we adapt and learn as a society, frameworks like TRUE can help us make that future a reality.

In my new role as Grammarly’s incoming CEO, our promise to augment our customers is as important to me as ever—and I can’t wait to continue building on this legacy as Grammarly enters a new era with generative AI.
