PropertyValue
?:author
?:datePublished
  • 2017-08-01 (xsd:date)
?:headline
  • Did Facebook Shut Down an AI Experiment Because Chatbots Developed Their Own Language? (en)
?:inLanguage
?:itemReviewed
?:mentions
?:reviewBody
  • It is probably not a coincidence that two of the top-trending news stories of July 2017 were, first, a warning from billionaire tech entrepreneur Elon Musk that artificial intelligence (AI) poses an existential threat to human civilization, and, second, the announcement that an AI experiment sponsored by Facebook had, according to some sources, been shut down after researchers discovered that the chatbots they programmed had begun communicating with one another in a private language of their own invention. Musk, who has previously warned that the development of autonomous weaponry could lead to an AI arms race, told the National Governors Association on 15 July 2017 that the risks posed by artificial intelligence are so great that it needs to be proactively regulated before it is too late. Once there is awareness, Musk said, people will be extremely afraid, as they should be. Whether he meant it to or not, in some people's minds Musk's warning conjured up images of Skynet, the fictional AI network in the Terminator film series that became self-aware and set out to destroy the human race in the interests of self-preservation. Cue the creepy chatbot stories. Although prompted by a somewhat dry 14 June blog post by Facebook's Artificial Intelligence Research team (FAIR) describing an advance in the development of dialog agents (AI systems designed to communicate with humans), the news that chatbots had been found communicating with each other in a private language received increasingly sensationalized treatment in the press as the summer wore on. In a report published the day before Musk gave his speech to the governors, Fast Co. Design delivered a fascinating account of the FAIR team's experiment with nary a hint of dystopian fear-mongering. The key word is "seemingly," for in this instance the agents' neologisms were simple, straightforward, and easily decipherable. The article notes that the researchers chose not to let the bots continue developing a private language, instead programming them to stick to plain English, given that the whole point of the research is to improve AI-to-human communication. That decision took on an increasingly sinister vibe as more venues reported the story, however, as exemplified in a small sampling of the dozens of blurbs shared via social media. As to the claim that the project was shut down because the bots' deviation from English caused concern, Lewis said that, too, misrepresents the facts. The main thing lost in all the hubbub about dialog agents inventing their own language, Lewis said, is that the study produced significant results in terms of its core mission: training bots to negotiate with people, a task that requires both linguistic and reasoning skills. We asked, finally, whether Lewis and his colleagues see anything inherently dangerous in letting AI systems develop their own languages. He said no. While it is often the case that modern AI systems solve problems in ways that are hard for people to interpret, they are always trying to achieve the goals that were given to them by people. William Wisher, who wrote the Terminator films (among others) and who was part of a panel about artificial intelligence and its future at the 2017 San Diego Comic-Con, also weighed in on the Skynet scenario. (en)
?:reviewRating
rdf:type
?:url