Fake News’ Dystopian Future Is Born – In Seattle

Last week, Radiolab—the ever-enlightening podcast and public radio show from WNYC—rebroadcast an episode from August of last year called "Breaking News." The title is a double entendre: "breaking news," in this case, refers as much to destroying the news as it does to breaking news stories, and the technology enabling this destruction is being developed right here in Seattle.
 
In 2016, Adobe (creator of blockbuster photo editing software Photoshop and the audio editing software Audition) developed a software program that enables users to edit audio using text. At the time, Ars Technica's Sebastian Anthony broke down how it works: "The tech, dubbed VoCo (voice conversion), presents the user with a text box. Initially the text box shows the spoken content of the audio clip. You can then move the words around, delete fragments, or type in entirely new words. When you type in a new word, there's a small pause while the word is constructed—then you can press play and listen to the new clip."
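The move-and-delete half of that workflow is simple enough to sketch in code. The snippet below is not Adobe's implementation; it's a toy illustration assuming you already have word-level timestamps for a clip (the `splice` function and its data layout are invented for this example). The hard part VoCo actually solves, synthesizing brand-new words in the speaker's voice, is omitted entirely.

```python
def splice(samples, alignment, new_word_order):
    """Rebuild an audio clip from word-level alignments.

    samples        -- raw audio samples (any sequence)
    alignment      -- dict mapping each word to its (start, end) sample range
    new_word_order -- the edited transcript, as a list of words
    """
    out = []
    for word in new_word_order:
        start, end = alignment[word]
        # Cut out this word's samples and paste them into the new clip.
        out.extend(samples[start:end])
    return out

# A fake ten-sample "recording" of the phrase "hello world".
samples = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
alignment = {"hello": (0, 5), "world": (5, 10)}

# Edit the transcript to read "world hello" and re-splice the audio.
edited = splice(samples, alignment, ["world", "hello"])
# edited == [5, 6, 7, 8, 9, 0, 1, 2, 3, 4]
```

Rearranging or deleting existing words this way has been possible for decades with a razor blade and tape; what made VoCo newsworthy was generating words the speaker never said.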
 
Adobe demonstrated the effects at a conference in 2016, hosted by Jordan Peele.

As Radiolab notes in the story, the people developing this tech are frequently myopic about its potential repercussions. Radiolab producer Simon Adler interviewed Ira Kemelmacher-Shlizerman, a Facebook researcher and professor at UW's School of Computer Science who works on this sort of tech, about the potential downsides. She said, essentially, that it's her job to build the tech and other people's job to consider the implications.
 
“I think that if people know this technology exists, they will become more skeptical,” she said. “I don’t know. But if people know fake news exists, if they know fake text exists, fake videos exist, fake photos exist, then everyone is more skeptical in what they read and see.”
 
But we already know that people aren't skeptical of what they read: Just this week, fake news stories spreading online claimed that a principal at a West Virginia school had implemented a halal menu in the cafeteria, that a CDC doctor had died mysteriously after warning that flu shots were causing a pandemic, and that Buzz Aldrin had revealed the existence of aliens. None of it was true, but that hardly matters at this point. People believe what they read; they will certainly believe audio they hear and video they see. So what's going to happen when someone pieces together a video of Donald Trump declaring war on North Korea, and it actually looks and sounds just like him?
 
Ira Kemelmacher-Shlizerman did not immediately respond to a request for comment (I’ll update this post if she does), but her attitude reminds me of nearly everyone I’ve interviewed in tech. When, for example, I talked to virtual reality developers about the potential downsides of VR, more than one reminded me that people were wary of television when it first came out, too.
 
And then, when I pointed out that television has, indeed, led to great harm (do you think Donald Trump would be in the White House if not for reality TV?), they all shrugged it off. When Adler asked Kemelmacher-Shlizerman if she is afraid of the technology she herself is developing, she answered, after a long pause: "I'm a technologist. I'm a computer scientist, so, not really… I'm not worried too much."
 
Perhaps the rest of us should be. For his part, Adler is wary, but optimistic. He told me that he thinks this technology could easily be weaponized. But, he added, he’s also hopeful we’ll adapt. “I think we’re going to figure this stuff out,” he said. “I think there will be a painful period that we as media consumers are going to go through, but I think we will make it out the other end okay. But it could be rough for a while.”

 

Source: The Stranger
