Speaking in a sinister tone, Facebook CEO Mark Zuckerberg boasts in an online video that whoever controls the data controls the future. In another video, former US president Barack Obama calls his successor, Donald Trump, a "total and complete dipshit". Both videos are fake. They were made to highlight the dangers posed by fabricated videos featuring well-known people saying and doing outrageous things.

Welcome to the world of deepfakes – a merging of "deep learning" and "fake" – which use machine learning and artificial intelligence to create fake videos.

From the world of porn to the Pentagon

Deepfakes first came to wide public attention in 2018. But they had their origin – as did much now commonly used technology, like e-commerce, live streaming and video cams – in the murky world of pornography. A basic internet search for "deepfakes" plus the names of celebrities like Daisy Ridley, Emma Watson, Taylor Swift or Katy Perry returns multiple not-safe-for-work links to a wide variety of pornography websites purporting to show famous women involved in sex acts.

Deepfake porn first surfaced on the internet in 2017. Since then, the release of free software has made it relatively easy for anyone to fake video. But it is not only video that can be altered: a tool developed by a group of scientists can change the dialogue in a video simply by editing a script.

So concerned is the US government about the implications for national security that the House Intelligence Committee recently held hearings into deepfakes, while the US Department of Defense has stepped up efforts to combat them. The emergence of deepfakes has set off an arms race among researchers and technicians to build tools that detect faked videos.

AI researchers outnumbered

But many top artificial intelligence researchers say they are outgunned. University of California, Berkeley computer science professor Hany Farid told the Washington Post that researchers are lagging behind, largely because "there are so few of us". He said the good guys are outnumbered "probably to the tune of 100 to 1".

Farid is leading research to develop a biometric tool that maps facial data, including mannerisms that are distinct to an individual, such as how they move their heads, bodies and hands while speaking. But it is time-consuming work.

While deepfakes are not yet a major problem, Farid said it is only a matter of time before they are widely deployed in politics. "If you look at ... how sophisticated and convincing and compelling these fake videos are, it's just a matter of time. Whether it's [the] 2020 [US election], then the next election."

But he said a bigger problem is the issue of trust. "What happens when we enter a future where we simply don't believe anything we read, hear or see online? How do we have a democracy? How do we agree on basic facts of what's happening in the world?"
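Farid has not released the tool, but the general idea of a behavioural "mannerism" fingerprint can be sketched. The snippet below is a minimal illustration, assuming per-frame head-pose angles (yaw, pitch, roll) have already been extracted from the footage with some pose-estimation library; the features, distance measure and threshold are simplifying assumptions, not Farid's method.

```python
# Illustrative sketch only: Farid's biometric tool is not public, and this
# is not his method. Assumes a (n_frames, 3) array of head-pose angles
# (yaw, pitch, roll) has already been extracted from the footage.
import numpy as np

def mannerism_profile(pose_angles: np.ndarray) -> np.ndarray:
    """Summarise a head-pose time series as a fixed-length feature vector:
    per-axis means, standard deviations and mean frame-to-frame speeds."""
    velocity = np.diff(pose_angles, axis=0)
    return np.concatenate([
        pose_angles.mean(axis=0),
        pose_angles.std(axis=0),
        np.abs(velocity).mean(axis=0),
    ])

def profile_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two profiles; a large value means the
    motion habits in the suspect clip do not match the reference person."""
    return float(np.linalg.norm(a - b))

# Usage: build a reference profile from verified footage of the person,
# then score a suspect clip against it (synthetic data stands in here).
rng = np.random.default_rng(0)
reference = mannerism_profile(rng.normal(0.0, 2.0, size=(300, 3)))
suspect = mannerism_profile(rng.normal(0.0, 5.0, size=(300, 3)))
if profile_distance(reference, suspect) > 5.0:  # threshold is an assumption
    print("motion habits differ from the reference person")
```

A real profile would draw on far more cues than head pose, which is part of why Farid describes the work as time-consuming.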
Other experts urge caution

Farid's team is just one of several around the world building tools to fight deepfakes, even as the technology used to make them continually improves. Claire Wardle, head of research at First Draft News, an organisation that aims to address challenges relating to trust and truth in the digital age, has said she is not yet overly concerned about deepfakes.

"Maybe I'm being naive, but this isn't what I'm worried about at all," she wrote in a blog for Nieman Lab earlier this year. "Academics and technologists agree that we're roughly four years away from the level of sophistication that could do real harm, and there is currently an arms race afoot to produce tools to effectively detect this type of content."

What she said she is very worried about is the "drip, drip, drip" of divisive, hyper-partisan memes on society. "I'm particularly worried because most of this content is being shared in closed or ephemeral spaces, like Facebook or WhatsApp groups, Snapchat, or Instagram Stories. As we spend more time in these types of spaces online, inhabited by our closest friends and family, I believe we're even more susceptible to these emotive, disproportionately visual messages."

An escalation in information warfare

Her sentiments were echoed by Ben Nimmo, a Senior Fellow for Information Defense at the Atlantic Council's Digital Forensic Research Lab, who was at the forefront of unmasking Russian bots that interfered in the US elections.

"At the moment, we haven't seen deepfakes used," he said in a recent email interview. "The Russian government has run plenty of 'shallow fakes', like manipulated images, which have been caught out. Deepfakes would be yet another escalation in the information warfare. It's probably only a matter of time."

But deepfakes are nevertheless a risk because they could lead to journalists making mistakes, he warned. Journalists must be aware of the problem and always look for corroborating sources, he said. "Ultimately, though, they'll need to develop a stronger relationship with the tech platforms, who have the best technical expertise and who have a big stake in not letting their platforms be taken over by fakes."

Journalists go back to basics

Kyle Findlay, who played a key role in identifying Twitter bots that helped sow racial tension in South Africa, told Africa Check: "For now, deepfakes have statistical patterns present in them that make them identifiable by machines. Over time, these might be smoothed over by the makers."

He said the war against deepfakes will turn into an evolutionary arms race: "Tools for detection will arise and be circumvented. We might need to supply journalists with automated 'image provenance' tools, like the plug-ins that you use for reverse image search, to automatically trace back the 'share' trail of all media to their sources."

But his advice is ultimately non-technical, just good old-fashioned journalism: "Treat everything with suspicion. Focus on names you trust and insist on visible trails linking the media that you are viewing back to those trusted sources."
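One example of the machine-detectable statistical patterns Findlay mentions, reported in academic work on generated imagery, is an unusual distribution of high-frequency energy in a frame's spectrum. The sketch below is a toy illustration, with the frequency cut-off and decision threshold chosen arbitrarily rather than taken from any tested detector.

```python
# Toy illustration of a spectral "fingerprint" check. Real photos tend to
# concentrate energy at low spatial frequencies; some generated images
# deviate from this. The cut-off used here is an assumption for the demo.
import numpy as np

def high_freq_ratio(gray_image: np.ndarray) -> float:
    """Fraction of spectral energy outside the low-frequency core of the
    2-D FFT (the DC component is removed by subtracting the mean)."""
    centred = gray_image - gray_image.mean()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(centred))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    ry, rx = h // 8, w // 8                 # "low frequency" core size
    low = spectrum[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return float(1.0 - low / spectrum.sum())

rng = np.random.default_rng(1)
frame = rng.random((256, 256))              # random stand-in, not a photo
print(round(high_freq_ratio(frame), 3))     # noise scores high; natural
                                            # photos usually score low
```

As Findlay notes, any such fingerprint can eventually be smoothed over by the makers, which is why a fixed check like this is a stopgap rather than a solution.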
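The "image provenance" plug-ins Findlay describes generally depend on some form of perceptual hashing, which gives near-identical fingerprints for an image and its lightly edited or recompressed copies. The version below is a bare-bones average hash written for illustration; a real newsroom tool would use a more robust hash and a database of images from trusted sources.

```python
# Bare-bones perceptual (average) hash for illustration: downsample the
# image to an 8x8 grid and record which cells are brighter than average.
import numpy as np

def average_hash(gray_image: np.ndarray, size: int = 8) -> np.ndarray:
    """Return a size*size boolean fingerprint of the image."""
    h, w = gray_image.shape
    cropped = gray_image[:h - h % size, :w - w % size]
    blocks = cropped.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Bits that differ; a small distance suggests the same source image."""
    return int(np.count_nonzero(a != b))

# A lightly edited copy should hash almost identically to the original,
# so its distance to the trusted source's hash stays near zero.
rng = np.random.default_rng(2)
original = rng.random((256, 256))
edited = original + rng.normal(0, 0.01, original.shape)
print(hamming(average_hash(original), average_hash(edited)))  # typically 0
```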