A Montreal-based company called Lyrebird has developed speech synthesis technology which can almost perfectly copy someone’s voice. And it can do so with very little data, needing only one minute of audio to extract the DNA of a human voice. The name is well chosen, since the Australian lyrebird is famous for its ability to almost perfectly mimic sounds in its environment.
“We are able to learn a new voice with [so] little data because our model is able to capture similarities between the new voice and all the voices it already knows,” explains Alexandre de Brebisson, one of the Ph.D. students involved in the system’s development. “Our models understand the underlying variables that make [one] voice different from another.”
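The intuition behind that quote can be illustrated with a toy sketch. This is not Lyrebird’s actual model, which has not been published in detail; it simply assumes, for illustration, that each known voice is a point in a shared latent space, so a brand-new voice can be approximated as a blend of voices the model already knows rather than learned from scratch:

```python
import numpy as np

# Hypothetical setup: 50 known speakers, each described by 8 latent
# "voice variables" (pitch, timbre, etc.) in a shared embedding space.
rng = np.random.default_rng(0)
known_voices = rng.normal(size=(50, 8))

# Toy stand-in for features extracted from one minute of a new speaker's audio.
new_voice = rng.normal(size=8)

# Because the latent space is shared, the new voice can be expressed as a
# least-squares blend of known voices: only mixing weights must be found,
# not a full voice model, which is why so little data suffices.
weights, *_ = np.linalg.lstsq(known_voices.T, new_voice, rcond=None)
approximation = known_voices.T @ weights

error = np.linalg.norm(new_voice - approximation)
```

With many more known speakers (50) than latent variables (8), the blend reconstructs the new voice almost exactly, which is the point of the illustration: the hard work of modeling voices in general is done once, up front.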
Legitimate uses for this new technology might be hard to imagine at first, but after it was revealed to the public, suggestions came flooding in about how it could be applied.
One suggestion was that audiobook companies use the technology to have books read by famous people or family members. This idea is likely to appeal to parents who travel a lot and would like to read to their children while away from home, as well as to people who live far from elderly parents and cannot be physically present to read to them.
Another pretty cool suggestion was that this tech could be used to have avatars speak with their players’ voices in online and video games.
Yet another potential use would be for medical companies to create devices that could be used by patients with speech difficulties to speak more clearly.
And what if someone else could deliver a speech for you using the technology to replicate your voice?
Though some ideas are certainly better than others, it is easy to see how many different ways this system could be put to use.
While some of these ideas really do sound interesting, and the technology would seem to offer endless possibilities, it may have a darker side, too. For instance, the potential exists for all kinds of evidence to be fabricated to sway the results of criminal investigations. How often have we heard about cases where the verdict in a murder trial was heavily influenced by a 911 recording? And there are many other ways in which someone could “steal” another person’s voice and cause real damage to that person’s reputation.
The truth is, this type of technology dehumanizes us, since our voices are an integral part of what differentiates us from others. With the development of this and other types of artificial intelligence, we move further and further away from recognizing the individuality and uniqueness of each human being. While the development of these technologies may be unavoidable at this stage, that doesn’t make it any less sad for the human race.