Recent advancements in generative AI have opened up boundless possibilities for organizations to revolutionize how work gets done. Although the technology holds great promise to accelerate employee productivity, unlock creativity, save time, and reduce costs, applying it to workflows also carries significant responsibility.
During a Grammarly-hosted panel at SXSW, AI experts discussed the implications of generative AI from a security and user trust perspective, as well as the importance of responsible AI development. Hilke Schellmann, ethical AI reporter and New York University journalism professor, led the discussion with Rahul Roy-Chowdhury, Grammarly’s CEO; Ellie Kemery, SAP Design’s principal design research expert and research ethics leader; and Suha Can, Grammarly’s chief information security officer.
Amid the generative AI buzz, organizations cannot lose sight of security
Over the past few months, burgeoning generative AI startups have entered the enterprise arena. “When you look at the providers of generative AI and the companies that are out there, a lot of them are super new. They have evolved from being a researcher to being a mass-market tech provider. The speed at which this took place means that they may not have enterprise-grade security practices in place,” Can said. Organizations that integrate these new technologies into employee workflows might face compliance consequences, as well as data leaks and user privacy risks.
Longer-standing enterprise AI providers like Grammarly, which has also recently entered the generative AI space, are paving the way for responsible standards. Grammarly has applied its fourteen years of investment in security to GrammarlyGO, its generative AI product. “I view [new standards for generative AI] as an iteration of the current policies and practices that we already have in place,” Can said. These standards for AI providers include:
- Enterprise-grade security controls validated by third-party certifications and compliance frameworks (ISO, SOC 2, GDPR, and HIPAA)
- Data-minimization policies that limit the length of time user data can be retained
- De-identification algorithms to maintain user privacy
This security foundation also enables Grammarly to invest in understanding, detecting, and defending against new kinds of attacks introduced by generative AI, such as model inversion, model poisoning, and prompt injection. These attacks manipulate a model to generate misinformation or leak the data the model was trained on. Grammarly’s investment in understanding these new threats will raise the bar for generative AI security and safety standards moving forward.
Organizations have a responsibility to deploy generative AI safely
Generative AI has massive potential to transform businesses, but AI leaders warned that those who embrace the technology without a strategy will not be successful. Organizations must look for responsible AI providers that will enable them to confidently deploy the technology across their organizations. “The conversation we need to have is: What is the intentional way we can build and deploy AI systems, and what are the principles we use to do that?” Roy-Chowdhury said.
Grammarly’s approach to building AI centers on “augmented intelligence,” which hinges on the idea that AI is successful when it augments humans. “We are squarely in the camp that we want to deploy AI systems that help users, that improve their capabilities, and reflect their unique talents and skills to give them an opportunity to reach their full potential,” Roy-Chowdhury said. This framework carries four tenets:
- Trust: Commit to best-in-class security and privacy practices to ensure user data is encrypted, private, and secure.
- Responsibility: Build models with quality datasets, run bias and fairness evaluations, and constantly improve the technology with user feedback.
- User control: Build AI that is explicitly focused on helping humans. For Grammarly, this means that its AI-powered suggestions reflect the user’s authentic voice and that the user is in control of their experience.
- Empathy: Deeply understand user problems and harness technology to solve real problems, rather than creating technology for the sake of novelty. For Grammarly, this means understanding the real challenges associated with the entire life cycle of communication (concept, composition, revision, and comprehension) and addressing them with technology.
Generative AI readiness for the enterprise
How do companies move fast with generative AI, capture its benefits, and ensure the right level of security and responsibility in their investments? As new generative AI tools proliferate in the workplace, organizations have a responsibility to scrutinize the technology from a security, compliance, and user trust perspective. Leaders must also consider how the technology will be safely deployed across the organization to empower its users and drive favorable results. “AI is the most transformative technology of our lifetime,” Roy-Chowdhury said. “We want these tools to help us. Let’s not have AI be done to us . . . we are in charge here. Let’s not forget that.”
To learn more about Grammarly’s generative AI solution, click below to join the GrammarlyGO waitlist.