Clara Labs’ Maran Nelson on why movies like Her and Ex Machina miss the point.
CEOs of artificial intelligence companies usually seek to minimize the threats posed by AI, rather than play them up. But on this week’s episode of Converge, Clara Labs co-founder and CEO Maran Nelson tells us there is real reason to be worried about AI — and not for the reasons that science fiction has trained us to expect.
Movies like Her and Ex Machina depict a near future in which anthropomorphic artificial intelligences manipulate our emotions and even commit violence against us. But threats like Ex Machina’s Ava will require several technological breakthroughs before they’re even remotely plausible, Nelson says. And in the meantime, actual state-of-the-art AI — which uses machine learning to make algorithmic predictions — is already causing harm.
“Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information,” Nelson says.
Algorithmic predictions, such as those about which articles you might want to read, have already caused harm, Nelson says: they contributed to the spread of misinformation on Facebook and to the 2008 financial crisis. And because algorithms operate invisibly — unlike Ava and other AI characters in fiction — they’re more pernicious. “It’s important always to give the user greater control and greater visibility than they had had before you implemented systems like this,” Nelson says. And yet, increasingly, AI is designed to make decisions for users without asking them first.
Clara’s approach to AI is innocuous to the point of being dull: it makes a virtual assistant that schedules meetings for people. (This week, it added a bunch of integrations designed to position it as a tool to aid in hiring.) But even seemingly simple tasks still routinely trip up AI. “The more difficult situations that we often interact with are, ‘Next Wednesday would be great — unless you can do in-person, in which case we’ll have to bump it a couple of weeks based on your preference. Happy to come to your offices.’”
Even a state-of-the-art AI can’t process this message with a high degree of confidence — so Clara hires people to check the AI’s work. It’s a system known as “human in the loop” — and Nelson says it’s essential to building AI that is both powerful and responsible.
Nelson sketches out her vision for a better kind of AI on Converge, an interview game show where tech’s biggest personalities tell us about their wildest dreams. It’s a show that’s easy to win, but not impossible to lose — because, in the final round, I finally get a chance to play and score a few points of my own.
Source: The Verge