Our Commitment to the Responsible Innovation and Development of AI
Leveraging technological advances like generative AI enables us to help people communicate more effectively in more ways. We do so with an ongoing commitment to privacy, security, and ethics.
Jump to section:
Commitment to Responsible AI
Our Principles in Practice
Partnering With Our Community
Additional Trust and Privacy Resources
Commitment to Responsible AI
At Grammarly, we’re guided by the belief that AI innovations should enhance people’s skills while respecting personal autonomy and amplifying the intelligence, strengths, and impact of every user.
Innovating to serve the needs of people
We take a value-driven approach to building AI-enabled communication technology. We leverage AI and other technologies only to address actual challenges people face in communicating their ideas and being understood as intended.
Developing a product with intention
We build products and models with checks and balances to prioritize privacy, safety, and fairness. We rigorously evaluate our work to anticipate its impact on our users and communities.
Safeguarding user data and trust
People trust us with their words, and we earn their trust by putting data security and privacy at the core of our business and our product. With 15 years of experience developing best-in-class AI communication assistance, we will always go to great lengths to ensure user data is encrypted, private, and secure. Users can control whether their content is used to improve our models, and we do not allow any partners or third parties to use our customers’ content to train their models or improve their products.
Ensuring user autonomy
We put users in control of their experience. AI is a tool that helps augment communication, but it can’t do everything. People are the ultimate decision-makers and experts in their own relationships and areas of expertise. Our commitment is to help every user communicate effectively and reach their full potential. We give users the context to choose whether to accept or reject a suggestion, and users can always choose to turn off various AI features.
Our Principles in Practice
Promising to never sell user data
When you use Grammarly’s products, you’re trusting us to handle your personal information with care. This means that we do not and will not sell your content. We make money by selling subscriptions to our services.
Filtering content to reduce harm
Using a combination of technologies, we filter generative AI and natural language suggestions to address issues such as hate speech if they arise. Our integrations and models help generate more effective text and reduce risk in both input and output.
Mitigating bias and fostering inclusion
We are committed to building models using quality datasets that undergo bias and fairness evaluations. We design and develop products with our team of analytical linguists, who apply research, expertise, and user feedback to minimize bias.
Practicing deliberate software development
Human expertise is woven through each part of our product development process to ensure we can continually evaluate and improve each system’s performance, accuracy, and reliability. Every new feature goes through a rigorous risk-assessment process, which includes hands-on human review.
Partnering With Our Community
When you report generated content or suggestions that you believe to be offensive, you help us get better. Together, we’ll make generative AI technology safer and more inclusive.
Encountering harmful or inaccurate content
If you encounter content or suggestions that you believe to be incorrect or harmful, please report them by clicking the flag in the lower-right corner of the Grammarly window and choosing your preferred option. Your input enables us to continually monitor and make improvements over time, ensuring our products promote inclusive, accurate communication.
Practical applications for the classroom
Students can improve their communication skills and career outcomes by using AI-powered tools to help with brainstorming and ideation. They can also use an AI detector and create an Authorship report to transparently show whether their work was AI-generated or human-written. Each institution or educator can help clarify the role of AI-enabled technology in their classrooms, and students should maintain their commitment to academic integrity.
The Enterprise Guide for Responsible AI
Learn how to deploy AI in a way that maximizes its potential for your business without compromising your organization’s ethics or safety.
Additional Trust and Privacy Resources
Trust Center
Visit our Trust Center to explore the measures we take to protect your information, including globally recognized safety and privacy compliance standards.
Technical Specifications
Visit our Technical Specifications page to learn more about how Grammarly processes user content and the efforts we make to protect your data.
Frequently Asked Questions
What is Grammarly’s AI?
Grammarly’s AI transforms how people communicate, making it easier to be clear, confident, and productive, no matter where you work or write. Grammarly’s AI assistance is available wherever Grammarly works, across more than 500,000 applications and websites.
You can use our AI features to compose, generate ideas, rewrite, and reply in an instant. Our AI is contextually aware and accounts for personal voice, offering relevant and personalized suggestions that respect user agency and authenticity. To start using Grammarly’s AI features, click the lightbulb icon, then type a prompt. You can also choose from suggested prompts Grammarly offers based on your unique context.
Organizations and individuals should evaluate whether AI or a specific output is appropriate for their use cases. These evaluations can vary widely between companies, departments, and users. That’s why Grammarly’s AI solutions foster user agency, helping to ensure that any AI output aligns with each user’s guidelines and policies.
What key principles guide Grammarly’s responsible AI development, and how were they formed?
At Grammarly, our goal of improving communication to help people reach their full potential has always been the heart of our product. When defining our responsible AI guiding principles, we began with our commitment to safeguarding users’ thoughts and words. By considering industry guidelines, user feedback, and expert consultations, we established our guiding pillars: transparency, fairness, user agency, accountability, and privacy and security. These themes serve as our North Star, guiding everything we build.
For more information, download Grammarly’s Responsible AI white paper.
What process does Grammarly use to evaluate the risks associated with developing AI features?
Grammarly has a comprehensive AI risk assessment process for launching new AI products and features. This process evaluates potential issues related to privacy, security, fairness, and safety, as well as the overall implications of these features, including their potential for abuse.
The Grammarly Responsible AI (RAI) team works hands-on with product development, going beyond the research or policy focus of many responsible AI teams. We have developed a sophisticated machine learning solution, combined with human review, to assess new features. The RAI team is involved in every feature that ships. Each feature receives a risk categorization according to criteria developed by our team, and launches are not approved unless the recommended mitigations are implemented.
For more information, download Grammarly’s Responsible AI white paper. You can visit Grammarly’s Trust Center to learn about our user-first approach to privacy and security.
What are ways in which Grammarly fosters user agency?
At Grammarly, we believe AI should enhance team skills while respecting personal autonomy. Users should always be in control of their interactions with AI.
To promote transparent and responsible AI use, Grammarly empowers users to take control of the AI suggestions it offers. Users have the option to accept or dismiss these suggestions, and Grammarly provides explanations to help users make informed decisions about whether to incorporate a suggestion into their writing. Additionally, users can customize the types of suggestions they receive through their settings. Our commitment is to empower every user to use AI to communicate as effectively as possible.
For more information, download Grammarly’s Responsible AI white paper.
What is Grammarly’s approach to embracing accountability in responsible AI development?
Grammarly is committed to responsible AI accountability in multiple ways. One example is our work to acknowledge and address gender bias in our autocorrect features, the learnings of which we shared openly on our company blog. We continuously monitor model performance to ensure our products meet our responsible AI standards. This commitment to ongoing improvement reflects our accountability in designing AI that supports fair and unbiased communication for all users.
In addition to our rigorous risk assessment process, we have a robust support team that collects instances of adverse model outcomes. This allows us to review and iterate on incorrect or harmful information our product might generate. We understand that our users rely on us for helpful suggestions, and our team works hard to take ownership and accountability when we get it wrong.
For more information, download Grammarly’s Responsible AI white paper.
What are ways in which Grammarly preserves privacy and security?
One of the most important pillars of responsible AI is upholding privacy and security to protect all users, customers, and their companies’ reputations. At Grammarly, all users can control whether their content is used to improve our products, and we do not allow any third-party AI processors we work with to store our users’ data or use it to train their models. We never sell user content, either.
In addition, we maintain a thorough vendor review process, repeated regularly, to conduct due diligence before engaging with any processors and subprocessors. At Grammarly, much of our security work is focused on preventing adversarial attacks on our users. We rely on a red team of experts with experience on both sides of the security fence to identify adversarial security approaches and vulnerabilities and help us formulate mitigation strategies that help keep our product safe.
For more information, please visit Grammarly’s Trust Center to learn about our user-first approach to privacy and security.
What actions is Grammarly taking to assess and mitigate bias in its generative AI models to avoid potential harm, ensure fairness and equity, and promote responsible and ethical use?
We have implemented automated measures to evaluate quality, fairness, and safety in our AI products. We aim to resolve any high-severity issues we discover so that they never reach our users. Furthermore, we monitor user feedback and regularly conduct human evaluations so we can continually improve our approach.
At Grammarly, we build our models with safety and fairness as a priority, from how we sample and label data to how we train models, design prompts for large language models, and post-process AI output. We use a combination of technologies to filter AI suggestions to minimize potentially harmful content and ensure our products engage with user texts appropriately. Our proprietary machine learning content-filtering solution is specifically designed to prevent our models from interacting with user texts in contexts that could potentially cause harm.
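As a rough illustration of what input-and-output filtering of this kind can look like (a minimal sketch only, not Grammarly’s actual implementation; the harm_score classifier and the 0.5 threshold below are hypothetical placeholders for a trained safety model):

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    text: str


def harm_score(text: str) -> float:
    """Hypothetical safety classifier returning a risk score in [0, 1].

    A real system would call a trained model; a keyword check stands in here
    purely for illustration.
    """
    blocked_terms = {"example_slur", "example_threat"}  # placeholder list
    return 1.0 if any(term in text.lower() for term in blocked_terms) else 0.0


def filter_suggestions(user_text: str, suggestions: list[Suggestion],
                       threshold: float = 0.5) -> list[Suggestion]:
    """Drop AI suggestions when the input or a candidate output looks risky."""
    if harm_score(user_text) >= threshold:
        # Decline to engage with sensitive user text at all.
        return []
    return [s for s in suggestions if harm_score(s.text) < threshold]
```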
What are the potential limitations of generative AI systems?
Generative AI systems have several inherent limitations and risks, including AI hallucinations, misinformation, bias, and potential security and privacy risks.
These limitations highlight the importance of developing responsible AI systems whose goal is to address and mitigate these risks. Grammarly’s Research and Engineering teams are dedicated to continuously enhancing AI solutions by refining models to better understand the nuances of language and context while remaining keenly aware of the limitations of AI. Our approach to responsible AI development enables us to help organizations that are using Grammarly’s AI tools avoid ethical dilemmas that may arise from using AI in scenarios where it could cause harm.
These limitations highlight the critical role of human judgment, especially when dealing with complex or sensitive material. If you encounter any issues or have suggestions, please report them through our Help Center or by clicking Provide feedback from the Grammarly button.
How can I customize my user experience and AI recommendations, report issues, or turn off specific features?
Customize your experience
Grammarly empowers users to control the AI suggestions they receive. Users can choose to accept or dismiss the suggestions provided, and we offer explanations to help users make informed decisions about whether to incorporate a suggestion into their writing. Additionally, users can customize the types of suggestions they receive in their settings. Our commitment is to empower every user to use AI to express themselves as effectively as possible.
Report issues and feedback
Grammarly actively reviews user feedback and has a support team collecting reports of adverse model outcomes. This process enables us to review and improve any incorrect or harmful information our product might generate. We understand the importance of providing helpful suggestions, and our team is committed to taking ownership and accountability when we fall short.
If you encounter any issues or have suggestions, please report them through our Help Center or by clicking Provide feedback from the Grammarly button.
Turn off specific features
You can set additional restrictions for your team regarding the domains and apps Grammarly can access, along with other admin settings. If you no longer wish to use Grammarly’s generative AI features, go to your account settings, navigate to the Feature Customization page, and turn off the generative AI settings.