
Whether your users are writing short paragraphs or pages of text, you can integrate the Grammarly Text Editor SDK to suggest improvements for grammar, clarity, and more. To offer the same high-quality experience no matter the word count, the Grammarly for Developers engineering team is always seeking out ways to optimize the SDK’s performance.
In this article, we’ll cover three techniques the team uses to speed up the SDK and Text Editor Plugin so suggestions render quickly.
Background: How the integration works
To understand how we optimize the SDK, let’s first review how the integration works in your app. You can use the SDK to add the Grammarly Text Editor Plugin to one or more text editors or text fields in your application. Then, as your users type in those editors, the plugin sends their text to Grammarly’s back-end and retrieves suggestions.
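For context, wiring up the plugin typically takes only a few lines. The sketch below is a minimal illustration, not copied from the SDK docs: the package name, init() call, and addPlugin() method are assumptions here, and the client ID is a placeholder, so check the SDK documentation for the exact entry points.

```typescript
// Minimal sketch of attaching the Text Editor Plugin to a text field.
// The package name, init() signature, and addPlugin() method are assumptions;
// consult the SDK documentation for the current API.
import { init } from "@grammarly/editor-sdk";

async function attachGrammarly(): Promise<void> {
  const grammarly = await init("example-client-id"); // placeholder client ID

  const editor = document.querySelector("textarea");
  if (editor) {
    grammarly.addPlugin(editor); // suggestions now appear as the user types
  }
}

attachGrammarly();
```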
To show users where Grammarly has a suggestion, the plugin underlines the relevant text in the editor. The plugin doesn’t edit your DOM in any way—instead, it renders underlines in an invisible layer on top of the viewport using shadow DOM. This means we need to continually update the positions for underlines so that they stay synced with the text.
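As an illustration of this overlay approach (a general sketch, not the plugin's actual code), the snippet below attaches a shadow root to a fixed, click-through host element and draws absolutely positioned underline elements inside it, leaving the host page's own DOM untouched.

```typescript
// Illustrative overlay: underline elements live in a shadow root on a
// fixed-position host, so the page's own DOM is never modified.
interface UnderlineRect {
  top: number;
  left: number;
  width: number;
}

const host = document.createElement("div");
host.style.position = "fixed";
host.style.inset = "0";
host.style.pointerEvents = "none"; // let clicks fall through to the editor
document.body.appendChild(host);

const overlay = host.attachShadow({ mode: "open" });

function renderUnderlines(rects: UnderlineRect[]): void {
  overlay.innerHTML = ""; // clear the previous underlines
  for (const rect of rects) {
    const underline = document.createElement("div");
    underline.style.position = "absolute";
    underline.style.top = `${rect.top}px`;
    underline.style.left = `${rect.left}px`;
    underline.style.width = `${rect.width}px`;
    underline.style.height = "2px";
    underline.style.background = "crimson";
    overlay.appendChild(underline);
  }
}
```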
Problem: Long documents can present performance challenges
Rendering underlines in real time as a user is actively editing a document is a complex engineering problem. Suppose a user inserts a word—all subsequent underlines need to shift accordingly. Updating the underline positions efficiently is especially challenging when working with long texts, where a large number of calculations need to be made.
Solution: Performance optimizations
We’ve made three performance optimizations that help us show suggestions quickly for long texts.
1. Prioritize underlines in the viewport
The naive approach to rendering underlines would be to rerun the entire document through the Grammarly back-end each time there’s an edit and update all the underlines at once.
This approach clearly doesn’t scale to long texts. To get around this problem, we prioritize rendering the underlines that are visible to the user: that is, the underlines in the editor’s viewport. The viewport will always be a fixed size, regardless of the document’s length. Of course, the user may scroll, so subsequent underlines get prioritized according to their distance from the viewport.
This approach translates to other optimizations in our SDK. For example, we only worry about mouse events related to underlines within the viewport.
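To make the idea concrete, here is a small sketch of viewport-first prioritization. The data shapes are assumptions for illustration, not the SDK's internals: underlines inside the viewport sort first, followed by those closest to it.

```typescript
// Sort underlines so on-screen ones come first, then the nearest off-screen
// ones; distances are measured in document-space pixels.
interface Underline {
  top: number;    // vertical position of the underline in the document
  height: number;
}

function distanceFromViewport(u: Underline, scrollTop: number, viewportHeight: number): number {
  const viewportBottom = scrollTop + viewportHeight;
  if (u.top + u.height < scrollTop) return scrollTop - (u.top + u.height); // above the viewport
  if (u.top > viewportBottom) return u.top - viewportBottom;               // below the viewport
  return 0; // visible right now
}

function prioritizeUnderlines(
  underlines: Underline[],
  scrollTop: number,
  viewportHeight: number
): Underline[] {
  return [...underlines].sort(
    (a, b) =>
      distanceFromViewport(a, scrollTop, viewportHeight) -
      distanceFromViewport(b, scrollTop, viewportHeight)
  );
}
```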
2. Scan for edits centered at the cursor
To update underlines, we need to know when the text in the editor has changed. One strategy for detecting changes would be to continually compute the diffs between the current document and a previous version from seconds ago. However, for very long documents, diffs would be expensive to calculate.
Instead of diffing edited versions of the document, we recognized that most of the time, the user is either inserting or deleting characters at the cursor position. This observation became the basis for a heuristic that substantially improved performance: rather than diffing the entire document, we search for changes outward from the cursor position and update underlines accordingly.
For instance, if a user deletes one word, we can detect that change at the current cursor position and adjust underlines accordingly.
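A simplified version of this heuristic might look like the following. The span shape and helper are hypothetical, but they show why the update is cheap: only the edit's length delta matters, not a full document diff.

```typescript
// Shift underline offsets after an edit at the cursor instead of diffing
// the whole document. lengthDelta is +n for an insertion, -n for a deletion.
interface UnderlineSpan {
  start: number; // character offset where the underline begins
  end: number;   // character offset where it ends
}

function shiftUnderlinesAfterEdit(
  underlines: UnderlineSpan[],
  cursor: number,
  lengthDelta: number
): UnderlineSpan[] {
  return underlines
    // Drop underlines the edit touched; that text gets re-checked anyway.
    .filter((u) => cursor <= u.start || cursor >= u.end)
    // Underlines after the cursor simply slide by the edit's length delta.
    .map((u) =>
      u.start >= cursor
        ? { start: u.start + lengthDelta, end: u.end + lengthDelta }
        : u
    );
}
```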
3. Batch operations to reduce rendering work done by the browser
To avoid introducing any lag into your app, we want to avoid holding the main UI thread for more than 5 milliseconds at a time. If a task takes longer, we have timekeeping logic that yields control back to the browser so it can process the next animation frame.
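In spirit, the timekeeping works like the sketch below: do work until the budget is spent, then hand control back and continue on the next frame. The 5 ms budget comes from the article; the scheduler itself is a general sketch, not the SDK's actual code.

```typescript
// Time-sliced work loop: run queued tasks for at most ~5 ms per animation
// frame, then yield so the browser can render the next frame.
const FRAME_BUDGET_MS = 5;

function processWithBudget(tasks: Array<() => void>): void {
  const start = performance.now();
  while (tasks.length > 0 && performance.now() - start < FRAME_BUDGET_MS) {
    const task = tasks.shift();
    if (task) task(); // run one unit of work
  }
  if (tasks.length > 0) {
    // Budget exhausted: pick up the remaining work on the next frame.
    requestAnimationFrame(() => processWithBudget(tasks));
  }
}
```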
We need to use these 5 ms very efficiently to render updates quickly for long texts. Initially, we discovered a problem: When we made any style changes, such as drawing underlines in a new location, we would trigger a “layout” phase in the browser’s pixel pipeline, causing it to do significantly more work. We needed to keep these layout cycles to an absolute minimum.
We realized that we could batch up operations as follows:
- First, we do a series of measurement operations, such as getBoundingClientRect(). This is necessary to pinpoint the right underline position and size, since we don't have direct access to information about the style or size of the text in the editor.
- Then, we do one big "mutate" operation where we redraw the underlines. This ensures that we have at most one layout phase each time we do 5 ms of work (see the sketch below).
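Here's a sketch of that read-then-write separation, using hypothetical range and element arrays rather than the SDK's internal structures. Measuring everything up front means the subsequent style writes trigger at most one layout pass instead of one per underline.

```typescript
// Batch all measurements before any mutations to avoid layout thrashing.
function repositionUnderlines(ranges: Range[], underlineEls: HTMLElement[]): void {
  // Measure phase: read every bounding rect first.
  const rects = ranges.map((range) => range.getBoundingClientRect());

  // Mutate phase: apply all style writes only after the reads are done.
  rects.forEach((rect, i) => {
    const el = underlineEls[i];
    el.style.left = `${rect.left}px`;
    el.style.top = `${rect.bottom}px`; // draw just under the text
    el.style.width = `${rect.width}px`;
  });
}
```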
Results
We monitor performance in simulated text-editing sessions on a large text (8,000+ words), and at the 99th percentile, the CPU time used by the SDK is less than 5 ms per animation frame. Because we prioritize underlines in the viewport, we expect visible underlines to typically render within one frame of scrolling in practice.
Want to let us know how you’re using the SDK and what the performance has been like? You can start a discussion in our GitHub repository—we’re there to answer any questions you might have.