Our Commitment to the Responsible Innovation and Development of AI

Technological advances like generative AI enable us to help people communicate more effectively in more ways. We do so with an ongoing commitment to privacy, security, and ethics.

Commitment to Responsible AI

At Grammarly, we're guided by the belief that AI innovations should enhance people's skills while respecting personal autonomy and amplifying the intelligence, strengths, and impact of every user.

Innovating to serve the needs of people

We take a values-driven approach to building AI-enabled communication assistance technology. We leverage AI and other technologies to address actual challenges people face in communicating their ideas and being understood as intended.

Developing a product with intention

We build products and models with checks and balances to prioritize privacy, safety, and fairness. We rigorously evaluate our work to anticipate its impact on our users and communities.

Safeguarding user data and trust

People and organizations trust us with their words, and we earn that trust by putting data security and privacy at the core of our business and our product. With more than a decade of experience developing best-in-class AI communication assistance, we will always go to great lengths to ensure user data is encrypted, private, and secure.

Ensuring user autonomy

We put users in control of their experience. AI is a tool that can augment communication, but it can’t do everything. People are the ultimate decision-makers and experts in their own relationships and areas of expertise. Our commitment is to help every user express themselves in the most effective way possible.

Our Principles in Practice

Promising we never sell user data

When you use Grammarly’s products, you’re trusting us to handle your personal information with care. This means that we do not and will not sell your data. We make money by selling subscriptions to our services.

Filtering content to reduce harm

Using a combination of technologies, we filter generative AI and natural language suggestions to help prevent harmful content, such as hate speech, from reaching users. Our integrations and models help generate more effective text and reduce risks in both input and output.

Mitigating bias and fostering inclusion

We are committed to building models using quality data sets that undergo bias and fairness evaluations. We design and develop products with our team of analytical linguists, who apply research and expertise to minimize bias and apply user feedback.

Practicing deliberate software development

Every new feature goes through a rigorous risk-assessment process, including a hands-on review by our linguists to identify potential risks. Following the assessment, feature teams are required to make updates based on the findings.

Partnering With Our Community

When you report generated content or suggestions that you believe to be offensive, you help make our product better for all users. Together, we'll make generative AI technology safer and more inclusive.

Encountering harmful or inaccurate content

If you encounter content or suggestions that you believe to be incorrect or harmful, please report them by clicking the flag in the lower-right corner of the Grammarly window and choosing your preferred option. Your input enables us to continually monitor and make improvements over time, ensuring our products promote inclusive, accurate communication.

Practical applications for the classroom

Students can improve their communication skills and career outcomes using AI-powered tools to help with brainstorming and ideation. Each institution or educator can help clarify the role of AI-enabled technology in their classrooms, and students should maintain their commitment to academic integrity.