In just under six months, generative AI has gone from a novelty to a workplace staple. In fact, a recent survey of US workers showed that 70% of people are using generative AI tools like ChatGPT at work. Whether or not organizations are ready for generative AI, it’s already here.

With so much momentum behind this emerging technology, how can organizations get a handle on it to not only mitigate risks but also to drive strategic value across the organization? During a recent Grammarly Business webinar, Amit Sivan, Grammarly Business head of product, and Timo Mertens, Grammarly head of ML and NLP products, delved into this topic. They provided a practical approach for organizations to harness generative AI in a way that moves their business into a higher state of operations. 

When mismanaged, generative AI exacerbates the issues it’s supposed to solve

Generative AI helps individuals go from zero to one by doing work alongside them—catalyzing new ideas, creating content from scratch, and refining messaging to make it more effective. It offers exciting potential for businesses to improve individual productivity, strengthen decision-making, increase innovation, and enhance customer experiences. 

But AI and generative AI don’t inherently deliver the above-mentioned benefits. In fact, when mismanaged, generative AI can create the opposite effects: slowed productivity, stunted creativity, and stalled progress. Mertens and Sivan addressed four pitfalls that can cause organizations to fall behind and shared solutions to avoid them. 

1 When AI doesn’t apply organizational and situational context, it becomes an unreliable crutch for employees 

Large language models have become extremely powerful. On the surface, it may appear that a model always completes a task correctly, but it's common for models to generate false responses, or "hallucinations."

One of the primary reasons that hallucinations happen in business applications is that the model doesn’t understand organizational knowledge and context. Large language models are trained on texts found on the internet but not on texts in your company’s internal systems. “It knows what China’s GDP is, for example, but won’t be able to give you an answer as to what your Q3 revenue projections are,” Mertens said.

When employees use generative AI solutions in an uncontrolled way, they will either waste a lot of time correcting hallucinations or, worse, blindly use the model's output. In this scenario, AI becomes an unreliable crutch for employees. Rather than augmenting them with the information they need and helping them craft a message, the tool replaces them, often with poor results.

Organizations should focus on solutions that integrate with the business's knowledge management systems and learn how employees behave based on situational and personal context, all while maintaining privacy and security standards.

2 A proliferation of inconsistent and disjointed generative AI tools leads to generic content rather than unique outputs

Many generative AI solutions only work in one application. For example, an intelligent document editor helps with writing documents, but it doesn't then help you craft an impactful email. Or a smart meeting assistant can summarize meeting notes, but it can't update your team in a Slack channel.

When organizations deploy a variety of generative AI solutions that each only work within one system, the business ends up with a proliferation of inconsistent solutions and generic, boilerplate content. In the long term, as more organizations adopt generative AI, this could result in what Mertens referred to as a “sea of sameness,” where content is undifferentiated and void of the brand’s unique personality and point of view. 

Businesses should focus on AI solutions that span the most important applications where employees do their work. The solutions should also adapt to incorporate the communication style of the organization and each employee, ensuring consistency while preserving uniqueness.

3 AI that doesn’t get better the more it’s used will plateau in its ability to improve workflows and employee outputs 

Mertens noted that many generative AI solutions are not reaching their full potential because the models do not retrain with new data to get better over time. This is because it has become much harder to improve the underlying model. 

To enhance an underlying large language model, there are two options: Improve the way you prompt the model or use fine-tuning. Prompting can be difficult because it’s more like an “art form than a science,” Mertens explained. Meanwhile, fine-tuning is especially challenging because there are often multiple models at play. “Figuring out which model to improve and fine-tune is not an easy challenge, but more importantly, defining what good looks like is really difficult,” Mertens said. For example, what does “better” mean when a model is generating a blog post? Does it mean it’s more factual and complete or more conversational and natural? “It’s unrealistic [to assume] that individual employees or decision makers can reason about this…it’s pretty difficult to define,” he said. 

Businesses should focus on solutions that embrace feedback loops between the model and employees. “At Grammarly, we have world-class linguists who obsess over how to even define the quality of communication and writing… and we have entire teams that think about how to improve these models based on what users experience across their workflows,” Mertens said. 

4 If mismanaged, AI opens up the business to security threats and harmful content  

The uncontrolled use of generative AI opens up businesses to serious security and privacy threats. Sivan likened the generative AI rush to the time when IT teams were navigating the challenge of individuals using their personal devices at work. Organizations that ignored personal device usage, or that tried to simply ban personal devices, struggled to enforce those policies because the pull was so strong for individuals who wanted to untether from their desktops.

Similarly, individuals are seeing the immediate and captivating benefits of generative AI and going to different websites to capture the advantage. This exposes the organization to serious security and privacy threats. 

Even with the controlled use of generative AI (where the business provides sanctioned tools to employees), organizations need to be careful and scrutinize providers exhaustively. Businesses should work with longstanding AI leaders that have dedicated teams focused on privacy and security and a reputation for keeping user and company data private and secure.

AI providers should also be dedicated to responsible development, meaning they are focused on eliminating harmful content that perpetuates biases, spreads misinformation, and erases originality, autonomy, and creativity instead of strengthening them.

Bring generative AI safely into your organization 

Generative AI opens up a new future for organizations to move into a higher state of operations where the bounds of productivity are expanded, and individuals are able to focus on higher-value work. Grammarly Business is shaping the AI-connected enterprise through industry-leading security, privacy, and responsible AI, helping individuals to better access and communicate information across their organization. 

To learn more about Grammarly Business, visit 

Ready to see Grammarly Business in action?