Man v Machine: on everyday language tools, ‘Deep Learning’ and the urge to replace humans

By John Bond · 2 min read

Will technology ever become ‘learned’ enough to replace human intelligence in the publishing industry?

The English language is incredibly complex. Throughout the ages it has been enriched and challenged by thousands of writers, thinkers and speakers, and as a result it boasts one of the largest vocabularies in the world. But its advanced morphology and syntax create significant obstacles when it comes to artificially replicating the linguistic abilities of the human mind.

We are all familiar with the problems online translation tools run into when tasked with anything more complicated than a basic sentence. According to an article by David Bellos, this is because Google Translate, like most translating tools, is merely a search engine, scanning already-translated material for corresponding passages. While this ability to find tenuous counterparts to words in a different language may help us gauge the general meaning of a sentence, hardly anyone would rely on it entirely – least of all those of us who work with books.

As if to prove why not, translator Esther Allen performed a tongue-in-cheek experiment using Google Translate while translating an Argentine classic from Spanish to English. Over the six years she worked on Antonio Di Benedetto’s Zama, Allen periodically typed the first line of the book into Google Translate and recorded how the translation changed with the growing availability of online reference points. She noted that the translations felt ‘less and less lucky’ with each attempt, without ever coming close to getting it right, and certainly without capturing the book’s singular tone.

In 2013, scientists at Stanford made forays into advanced forms of artificial intelligence and produced an algorithm that interacts with language in ways that had previously seemed impossible. Machine learning thus entered the stage of ‘Recursive Deep Learning’, which aims to progressively give computers the ability to understand things in a human-like way, without the need for numerous rules constructed by human experts.

The scientists developed a program called NaSent, short for Neural Analysis of Sentiment, which analyses public expressions of opinions in order to create references that help computers identify sentiments. Using this algorithm, a computer was able to analyse film reviews and identify how much the critic enjoyed or disliked a film. According to Deep Learning scientist Yoshua Bengio, NaSent was essentially ‘trained to move beyond words to phrases and sentences, and to capture the sentiments of these word combinations’.
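To make that idea of moving 'beyond words to phrases' a little more concrete, here is a deliberately toy sketch in Python – emphatically not NaSent itself, and far simpler than a neural network – of what it means to score sentiment by combining phrases rather than isolated words. The lexicon, the negation rule and the example sentences are all invented for illustration.

```python
# Toy sketch: score the sentiment of a review fragment by recursing over a
# small parse tree, so that word order and combination matter
# (e.g. "not dull" ends up positive even though "dull" is negative).
# The lexicon and the composition rules are made up for demonstration only.

LEXICON = {"good": 1.0, "great": 2.0, "dull": -1.5, "boring": -2.0,
           "film": 0.0, "plot": 0.0, "acting": 0.0}
NEGATORS = {"not", "hardly", "never"}

def score(node):
    """Return a sentiment score for a word (str) or a phrase (tuple of two nodes)."""
    if isinstance(node, str):
        return LEXICON.get(node, 0.0)
    left, right = node
    # If the left child is a negator, flip the sentiment of the right-hand phrase.
    if isinstance(left, str) and left in NEGATORS:
        return -score(right)
    # Otherwise combine the two children's scores.
    return score(left) + score(right)

print(score(("dull", "film")))            # -1.5 : negative
print(score(("not", ("dull", "film"))))   #  1.5 : the phrase, not the word, decides
```

The point of the toy is simply that the score of "not dull film" cannot be read off the individual words; it only emerges from how the phrase is put together, which is the problem the Stanford work set out to learn rather than hand-code.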

On an everyday level, grammar checkers like Grammarly are also proliferating. These often rely on patterns, performing better or worse depending on whether a particular solution resides in the finite possibilities the program has on file. While some developers have recently begun focusing their research on devising algorithms for context, linguist Geoffrey Pullum echoes the stance of many when he asserts that ‘accepting the advice of a computer grammar checker on your prose will make it much worse, sometimes hilariously incoherent’.
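A crude sketch of what 'relying on patterns' means in practice may help explain Pullum's scepticism. The Python below is not how Grammarly or any real checker works; it is an invented, minimal rule list, included only to show that such a checker can flag nothing beyond what its finite rules anticipate, and knows nothing of context.

```python
import re

# Toy pattern-based "grammar checker": each rule pairs a regular expression
# with a canned suggestion. The rules below are invented for illustration.
RULES = [
    (re.compile(r"\btheir is\b", re.IGNORECASE), "Did you mean 'there is'?"),
    (re.compile(r"\bcould of\b", re.IGNORECASE), "Did you mean 'could have'?"),
    (re.compile(r"\bvery unique\b", re.IGNORECASE), "'unique' needs no intensifier"),
]

def check(text):
    """Return every (matched text, suggestion) pair found by the rule list."""
    return [(match.group(0), advice)
            for pattern, advice in RULES
            for match in pattern.finditer(text)]

print(check("Their is a sense that the ending could of been stronger."))
# Both slips above are caught, but a sentence whose problem isn't on the
# list ("The book was wrote quickly") sails through unflagged.
```

Anything outside the rule list passes silently, and a rule applied in the wrong context produces exactly the confident bad advice Pullum describes.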

It makes you wonder whether initiatives like Stanford's NaSent will pave the way for more ambitious projects that enable computers to understand the meaning of sentences and words in all their different contexts. But even if this technology were developed and combined with translation and grammar-checking tools, would it render them more useful in a professional context? Or is our insistence on teaching machines to become human ultimately a doomed attempt at making ourselves obsolete?

John Bond
John has been involved in publishing for more than thirty years. He held senior positions at Penguin and at HarperCollins, where he was on the main board for nine years, running sales, marketing and publishing divisions including the 4th Estate imprint and its stable of award-winning authors such as Hilary Mantel, Jonathan Franzen and Nigel Slater. He co-founded Whitefox in 2012 on the principle that the future of successful publishing would be based upon external managed services and agile, creative collaboration with the highest-quality specialists. Nothing that has happened since has dissuaded him from this view.