AI was once the stuff of science fiction and theoretical research before it began quietly working behind the scenes of online services. Now, we’re starting to see the early stages of AI become widespread across the consumer market. As we hand over more and more management of our daily lives to algorithms, will our old ideas of personal responsibility continue to make sense?
Technology’s effect on culture and society can be subtle. When presented with a new toy or service that makes our lives easier, we’re quick to embrace and normalize it without thinking through potential consequences that only emerge years down the line.
Privacy was the first casualty
Take personal privacy, for example. When Facebook was rocked by the Cambridge Analytica scandal, it may have dominated the headlines and set the chattering classes ablaze, but amid all the outrage, what struck me as the most common reaction outside of media pundits and tech enthusiasts was indifference.
But why did people shrug and say “so what?” to such a massive leak of our personal information? To its use in sleazy advertising and to manipulate the results of important elections?
Multiple privacy scandals aren’t enough to bring down Facebook. / © Leah Millis/Reuters
Perhaps because the technical processes behind it all are too complex for most individuals to have a clear idea of exactly how it happened. The user license agreements for all the different services a person can use are themselves dense and opaque, and we don’t have the time to read, let alone understand all of them. In fact, a study has shown that in order to read all of the privacy policies you encounter, you’d need to take a month off from work each year.
Yet many of us agreed to this Faustian bargain anyway, and gave up our privacy because, say, Facebook or Google’s services (among others) were too good not to use. Plus, all our friends (or our competitors, in a business context) were using them, and who wants to fall behind?
The question of how we got here is still being explored, but the fact remains: personal privacy in 2018 isn’t what it used to be. Expectations are different, with many of us perfectly happy to give up information to corporations at a level of intimacy that would have shocked previous generations. It’s the price we pay for entry into the world of technology, and for the most part, we’re happy to do it.
You can urge people to use VPNs and chat on Signal all you want, but for the most part, the cultural shift has already happened: protecting privacy isn’t a concern for most people, or at least not enough of one to prompt any active steps, however much one might complain.
Personal responsibility will be next, thanks to AI
AI horror stories usually invoke fears of it becoming conscious and somehow turning against humanity. But the more realistic anxiety is that machine ‘intelligence’ doesn’t really regard us at all. Like any tool, it serves to make a task easier, faster and more efficient. But the further that tool gets from a guiding human hand, the fuzzier the issue of personal responsibility becomes.
Privacy is one thing, but responsibility is more serious still: it can be, quite literally, a matter of life and death. When something AI-powered goes wrong and causes harm, who bears responsibility? The software engineers, even if the machine ‘learned’ its methods independently of them? The person who pushed the ‘on’ button? The user who signed a now-ubiquitous stream of dense legalese without reading it to get quick access to a service?
Self-driving cars are at the forefront of this ethical dilemma. For example, an autonomous vehicle developed by Nvidia is taught how to drive via a deep learning system using training data collected by a human driver. And to its credit, the technology is amazing. It can stay in its lane, make turns, recognize signs and so on.
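To make the idea concrete, this kind of learning-from-a-human-driver setup can be sketched as simple supervised imitation: record what the human did in each situation, then fit a model that reproduces those choices. This is only an illustrative sketch, not Nvidia’s actual pipeline; the synthetic data and the linear “model” standing in for a deep network are assumptions made for the example.

```python
# Illustrative sketch of behavioral cloning: predict a human driver's
# steering angle from sensor features. NOT Nvidia's actual system; the
# data is synthetic and a linear model stands in for a deep network.
import numpy as np

rng = np.random.default_rng(0)

# Fake training data "collected by a human driver": each row is a
# simplified sensor reading, paired with the steering angle the human
# chose at that moment.
sensor_readings = rng.normal(size=(500, 4))
true_weights = np.array([0.8, -0.5, 0.3, 0.1])  # hidden human "policy"
steering_angles = sensor_readings @ true_weights

# "Training": fit a model that imitates the human's recorded choices
# (least-squares regression standing in for deep learning).
learned_weights, *_ = np.linalg.lstsq(
    sensor_readings, steering_angles, rcond=None
)

# The learned policy now steers with no human in the loop -- and no
# single line of code that anyone "wrote" to make a given decision.
def steer(reading):
    return reading @ learned_weights
```

The point of the sketch is the responsibility gap the article describes: the behavior comes from the data, not from an explicit rule an engineer typed in.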
All good, so long as it’s doing what it’s supposed to. But what if an autonomous car decides to suddenly turn into a wall or drive into a lake? What if it swerves to avoid crashing into a pedestrian, but ends up killing its passenger in the process? Will the car have its day in court?
Feeling relaxed in a self-driving car. / © AndroidPIT
As things stand now, it can be impossible to find out why or how such accidents happen, since the AI can’t explain its choices to us, and even the engineers who set it up can’t follow the process behind every specific decision. Yet accountability will be demanded at some point. It could be that this issue keeps autonomous vehicles off the market until it’s properly resolved. Or it could be that the technology becomes so exciting, so convenient and so profitable that we release it first and ask the difficult questions later.
Imagining AI involved in a car accident is a dramatic example, but there are going to be more areas of our lives in which we will be tempted to give over responsibility to the machine. AI will diagnose our diseases, and ‘decide’ who lives or dies, make multi-million dollar trading calls, and make tactical choices in war zones. We’ve already had problems with this, such as people with asthma being wrongly graded as low risk by an AI designed to predict pneumonia.
It’s important to get the right answers. / © Screenshot: AndroidPIT
As AI becomes more advanced, it’ll probably make the best decisions…99.9% of the time. The other 0.1% of the time, perhaps we’ll just shrug like we did with the Facebook privacy scandal.
Smart assistants and apps will take on more responsibility
Let’s zoom in a little closer, onto the individual. At Google I/O, the Mountain View colossus showcased a couple of ways for AI to make our lives a little easier. Virtual assistants have entered the mainstream in the last year or so, becoming a key part of many Americans’ homes. Google’s Duplex demo showed how you can delegate booking appointments to Assistant, having the robot make a phone call for you and book a haircut or a restaurant reservation. Google also wants to use Duplex for automated call centers, conjuring up the amusing scenario of two robots having a conversation in human language.
AI took center stage at Google’s I/O event. / © Screenshot: AndroidPIT
Sounds cool, right? Except, well, there’s a certain level of trust you give your virtual assistant when you let it act as your proxy like this. Communication over these tasks may sound simple, but it’s actually fraught with potential problems.
For example, when we speak to each other, we pick up on subtle cues in our voices and attitudes to get an impression, human to human, of who we’re talking to, and act appropriately. Even with that, you know how easy it is to mortally offend someone by accident and cause an argument or spark outrage.
Where does the responsibility lie, however, when a virtual assistant says something perceived as offensive or embarrassing? If virtual assistants are somehow prevented from saying potentially offensive things, even ironically or as a joke or criticism, is that ‘your’ voice being censored? It’s going to take a lot more than ‘ums’ and ‘ahs’ for AI to really be able to talk for us.
Watch Google Duplex in action at the Google I/O 2018 demo:
Source: Android Pit