Robots and English
There’s a harsh reality we need to face: a robotic, AI-driven Shakespeare is nowhere in sight. No robot will write verse that shapes English the way the Bard’s did anytime soon, and you won’t find an AI spitting rhymes like Rakim or Nas, either.
But if your standards aren’t too high, there is some AI-constructed poetry you can read today. Take an AI built on a recurrent neural network language model, feed it thousands of romance novels to learn language from, give it a starting sentence and an ending sentence, instruct it to fill the gap between them, and you’ll get something like this:
This AI, designed by Google, Stanford University, and the University of Massachusetts, wasn’t built to be the world’s first artificial poet; the poetry is just a side effect. And the AI’s output isn’t even this good a lot of the time. But keep in mind that the AI generated every sentence except the first and last on its own, and it’s impressive that they all make sense and share a common theme. Apps, AI, and robots are very far from understanding language the way we do, but the things they can do are amazing.
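A full recurrent neural network is far too large to show here, but the core idea of learning which words tend to follow which can be sketched with a much cruder stand-in: a bigram model that continues a starting phrase word by word. The tiny training text and the function names below are invented for illustration, not taken from the Google system.

```python
import random

def train_bigrams(corpus):
    """Record which words follow which in the training text."""
    words = corpus.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def continue_phrase(model, start, n_words, seed=0):
    """Extend `start` by up to n_words, sampling each next word
    from the words seen after the current one in training."""
    rng = random.Random(seed)
    out = start.split()
    for _ in range(n_words):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: the word never had a successor
        out.append(rng.choice(choices))
    return " ".join(out)

# A toy "romance novel" corpus, invented for this sketch.
corpus = ("i wanted to see you . i wanted to be with you . "
          "he wanted to talk to her . she wanted to see him smile .")
model = train_bigrams(corpus)
print(continue_phrase(model, "i wanted", 5))
```

The real system replaces the lookup table with a neural network that generalizes to word sequences it has never seen, and it is additionally steered toward a given ending sentence rather than wandering freely.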
Parsey McParseface is an English language parser Google built and released earlier this year, along with the code for SyntaxNet, a framework for building syntactic parsers. Feed a sentence to Parsey McParseface and it will analyze it, identify the parts of speech, and determine each word’s function in the sentence. It isn’t the first parsing algorithm, but it may be the most accurate: according to Google, Parsey parses sentences with about 94 percent accuracy.
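To give a feel for the kind of output a parser produces, here is a drastically simplified toy: a lexicon-based part-of-speech tagger. The tiny word list and the NOUN fallback are made up for this sketch; Parsey McParseface itself uses a neural network and also links the words into a full dependency tree rather than just labeling them.

```python
# Toy lexicon mapping a handful of words to part-of-speech tags.
# Invented for illustration; real parsers learn this from data.
LEXICON = {
    "the": "DET", "a": "DET",
    "dog": "NOUN", "ball": "NOUN",
    "chased": "VERB", "saw": "VERB",
    "red": "ADJ", "quickly": "ADV",
}

def tag(sentence):
    """Pair each word with a part-of-speech tag from the lexicon,
    guessing NOUN for anything unknown."""
    return [(word, LEXICON.get(word.lower(), "NOUN"))
            for word in sentence.split()]

print(tag("The dog chased a red ball"))
```

Where this toy falls over on any word outside its eight-entry lexicon, a trained parser generalizes from context, which is what makes the 94 percent figure impressive.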
We know that virtual assistants can recognize what we’re saying, most of the time. Good proofreading software can catch more than just spelling mistakes and can have a noticeable impact on a person’s writing. But could machines ever read your lips? If you’ve ever tried it yourself, you’ll know how hard it is; even trained lip readers get it right only about half of the time. LipNet, a neural network architecture under development at Oxford University, can achieve up to 93.4 percent accuracy. That could be very helpful to people with hearing impairments, but it could also help all of us communicate with machines better. If nothing else, it would make sure they understand when we tell them to stop trying to write poetry.