Artificial intelligence holds a unique place in our collective psychology. Endless imaginings of superintelligent machines and their impact on the human race have been created in films, books and even songs.
Some of these versions are incredibly dark – the short story I Have No Mouth, and I Must Scream tells of a frustrated hyper-intelligent machine that endlessly and brutally tortures the few remaining humans for consigning it to perpetual boredom.
Marvin the Paranoid Android, of The Hitchhiker’s Guide to the Galaxy, finds himself in a similar predicament, though he expresses it in a somewhat more passive way. Films like I, Robot present a more practical story about the troubled relationship that people might have with intelligent machines.
These are just stories, but they tell us a lot about the fears that surface when it comes to computers that can think. As it moves closer to becoming a reality, the race to achieve ethical artificial intelligence becomes less of a thought exercise and more of an urgent task.
We are many years away from artificial general intelligence – that is, a machine with all the cognitive capacities of humans but vastly more computing power. What we are seeing is rapid development in fields such as machine learning, deep learning, neural networks and natural language processing.
These have a plethora of real-world applications, and in China, these applications are being used and developed at a far more rapid rate than anywhere else. That’s according to Ajit Jaokar, Principal Data Scientist and course director for the University of Oxford’s Data Science for the Internet of Things course.
The reason for this, Jaokar claims, is the access to huge data sets that Chinese government bodies and private companies enjoy, thanks to China’s comparatively relaxed data privacy laws. Given the huge sums that the Chinese government invests in AI, and the depth of talent that exists in the country, this places China in a unique position.
As recent events have shown, unfettered access to personal data is a political, social and moral tinderbox. With China making much faster progress than the West, where does the situation stand? Is the West able to keep up while maintaining privacy rights? What ethical questions come into play as AI gathers pace? Ultimately – is it all worth it?
The dark side of smart cities
One of the consequences of artificial intelligence (or vice versa, depending on which way you look at it) is the evolution of smart cities. In China, these aren’t far-fetched visions of the future, but a modern-day reality.
From a western point of view, clearly, it’s not something which people are comfortable with
Speaking at Smart IoT London in March, Jaokar introduced the audience to some of the schemes that already exist in China. An initiative that recently gained some media traction was the punishment of jaywalkers in Shenzhen by automatically putting their faces on billboards, as well as texting them a fine – both enabled by hugely advanced facial recognition and the vast numbers of closed-circuit television cameras in the city.
A second scheme, familiar to any viewers of Black Mirror, is the introduction of a social credit system, which will be mandatory for Chinese citizens by 2020. The system will assess a multitude of behaviours and publicly rank each citizen; that ranking will then affect their ability to get a job or a mortgage, or to use public transport.
As Jaokar notes, this principle of ‘once untrusted – always restricted’ is somewhat dystopian. “From a western point of view, clearly, it’s not something which people are comfortable with,” he says.
He is not one to pass judgement, however. Instead, what Jaokar believes is that the different set of circumstances has produced a different set of results, arguing that “some of the big thinkers in China are doing some amazing work from a technological point of view, but one thing they do have an advantage with is the availability of data.”
A combination of factors means that this speed of development won’t be possible in the West. Though many civil society groups would argue otherwise, our data is well protected in comparison to countries like China.
Not only that, but because organisations like Cambridge Analytica have found less-than-ethical uses for personal data, there is a roadblock in terms of public perception. So, says Jaokar, people who want to achieve great things with data are unable to, and that perhaps is “a bit sad, because one or two people have misused it in a big way.”
An East-West AI arms race?
That is not to say that artificial intelligence research and development in the West is stagnant. Canada, one of the West’s most liberal democracies, has witnessed world-leading work in the field, thanks to researchers in deep learning such as Yoshua Bengio, Yann LeCun and Geoff Hinton, who performed ‘foundational and groundbreaking work’ in the country, according to Peter van Beek, a professor at the Cheriton School of Computer Science at the University of Waterloo.
According to van Beek, the Canadian government is putting ‘significant resources’ into deep learning research and reinforcement learning research by funding three major research centres in Canada – the Alberta Machine Intelligence Institute, the Vector Institute and the Montreal Institute for Learning Algorithms.
He also notes, however, that there ‘absolutely’ would be a backlash if this type of technology were misused. What this demonstrates is that it is possible for great breakthroughs to be made in artificial intelligence in a country renowned for its civil rights record and where the culture is thoroughly anti-authoritarian – a stark contrast to China.
Canada is led by a young, charismatic and technology-friendly Prime Minister in the shape of Justin Trudeau. His European counterpart is arguably Emmanuel Macron, the French president. Macron has previously spoken of the importance of a Europe-wide big data strategy, and in a recent announcement and interview with Wired, he spoke about his passion for pursuing AI in an ethical way, and for the benefit of his country.
In the interview, Macron noted the technological turning point at which he finds himself: “I think artificial intelligence will disrupt all the different business models and it’s the next disruption to come. So I want to be part of it. Otherwise I will just be subjected to this disruption without creating jobs in this country.”
The input-output problem of AI
Job loss is one of the major questions that surrounds the ethical implementation of artificial intelligence, but it is arguably symptomatic of a wider problem about who it benefits, and how.
A key part of this is the argument about the inherent bias in data. As we enter a revolution in social and civil rights, in which ‘traditional’ systems are increasingly disrupted, issues have been raised around the bias that may accidentally be built into the data that ultimately becomes a machine’s ‘brain.’ If data is flawed, and becomes the foundation of an AI system that could have a huge impact on our lives, we may find ourselves in a deeply problematic world, run by machines that favour one set of characteristics over another.
It’s no longer useful just to have ethical approval of a system once it’s done and deployed – it has to be considered from the beginning
According to Bertie Muller, chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour, we need to change how we deal with flawed and potentially biased data. “There are pressing problems now that concern bias in data and diversity in the people dealing with AI.”
“Generally, the idea that needs to be adopted by the industry is an ethical design right from the very start. So, it’s no longer useful just to have ethical approval of a system once it’s done and deployed – it has to be considered from the beginning and it has to be continuously considered.”
It’s clear that the problem with intelligent machines is people. Without careful checks and balances, we could find ourselves using data that is inherently biased to feed machines which would themselves become biased. And without serious consideration and action, we might also find ourselves at the whim of corporations and governments.
Francois Chollet, an artificial intelligence researcher at Google (though writing in a personal capacity), wrote in a recent blog post that AI poses a threat given the possibility of ‘highly effective, highly scalable manipulation of human behaviour.’
He also stated that continued digitisation gives social media companies an ever-increasing insight into our minds, and ‘casts human behaviour as an optimization problem, as an AI problem: it becomes possible for [them] to iteratively tune their control vectors in order to achieve specific behaviours.’
Collaboration and transparency
Interdisciplinary work is of huge importance in this area, because AI just affects everything
We’ve seen what is starting to happen in China, where lax privacy rules have led to some distinctly Orwellian schemes. And we’ve seen that, though it is behind, AI in the West is still developing quickly, and is being encouraged by world leaders. It’s also clear that there are major concerns in the West about what might become of this.
What if we allowed those concerns to win out, and the figurative handbrake were applied? That might not be such a bad thing, argues Jaokar. “I think slower is better. Prior to being in the AI and ML field, I was mostly in telecoms. If you look at 2001/2002 when the Japanese had better technology, everybody was talking about how far ahead the Japanese systems were.
“But the Japanese system didn’t migrate to the West. What actually happened was the iPhone. It took another five or six years for Steve Jobs to create something which was more suitable to the western ecosystem, and that worked. So, if we have an algorithm developed in a country that has very good access to data, it still won’t translate here.”
Perhaps, then, Europeans might find themselves lagging behind, but with an ultimately preferable end result. One way that might be achieved is through a collaborative approach. Muller is a strong proponent of this, arguing that interdisciplinary work is of “huge importance in this area, because AI just affects everything.”
The French president makes a similar point, arguing that ethical artificial intelligence is everyone’s responsibility: ‘If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution.’
A Brave New World?
Leaders like Macron who have grand plans for a collaborative and transparent artificially intelligent world might be the guiding light that helps steer us away from an Orwellian society.
There may be another consequence, though. Jaokar thinks it’s far more likely that in the West we will actually find ourselves in a society initially imagined by Aldous Huxley in Brave New World. Rather than finding ourselves overpowered by a manipulative state that knows everything about us, as predicted by Orwell, we will simply surrender ourselves to it.
Technology makes our lives infinitely easier than we could ever have imagined, and as more processes become automated, it’s not unthinkable that we could sink into a state of comfortable obedience.
Perhaps Macron’s call to action is a rebuke of Huxley’s vision of the future, in which the ‘all-powerful executive of political bosses and their army of managers control a population of slaves who do not have to be coerced, because they love their servitude.’ Man-made collaboration will create the best of man-made machines.