Jarno Duursma – Author on digital technology – TEDx speaker – Futurist.

12 risks of artificial intelligence

Artificial intelligence will bear many sweet fruits for us as a society, but there are considerable concerns as well. This blog aims to offer a comprehensive overview of the risks of artificial intelligence.
Artificial intelligence is a subject that captures the imagination of many people, largely thanks to the many Hollywood films about it. These films often depict science-fiction-like doom scenarios, and they are almost always exaggerated. Yet a growing number of alarming reports on artificial intelligence have appeared, fuelled by the qualitative growth spurt of this new technology, particularly the improvements in machine learning and deep learning.
Science fiction is becoming reality. Smart computer systems are becoming increasingly adept at replicating what we as humans are capable of, including skills such as seeing, listening and speaking. And they learn to discover patterns and rules in huge amounts of data. In some areas these systems easily have the upper hand, and this has quite a few consequences.
Disruption
In my view, Artificial Intelligence (AI) will be the most disruptive technology of the next decade. The quality of this technology has improved considerably in a number of areas in recent years, with all the consequences this entails. Smart software systems gain an ever better understanding of who we are, what we do, what we want and why we want it. A world full of opportunities opens up. Chatbots, smart virtual assistants and autonomous software agents will increasingly come to our aid. AI systems will bring us prosperity, time savings, convenience, insights and comfort. We will become used to a personal assistant that is available 24 hours a day and knows what we need before we know it ourselves. Just as it is difficult to imagine life without the Internet today, a decade from now the same will be true of your personal assistant.
Smart AI systems will also provide us with insights we believed would never be possible, and answers to questions we did not even know we had. AI systems are faster, never tire, learn from examples and from each other, and are considerably smarter than humans in specific domains. This is not a futuristic idea but present-day reality.
Specific examples include the following: smart computer systems are better at recognising art forgery than human experts. Another system can recognise dementia before a medical specialist even considers that option. One artificial intelligence system recognises skin cancer sooner than a medical professional, while another does something similar with nail fungus. Researchers from Stanford can predict voting behaviour in elections from Google Street View images, while an algorithm fed data from the Apple Watch can predict diabetes. Facebook knows you are dating someone before you have manually indicated this on the platform. Amazon holds a patent on 'predictive shipping', which would let it send you a package before you know you want it. The predictive power of AI will be far-reaching.
However, we cannot close our eyes to the potentially negative scenarios. President Putin of Russia recently said that whoever leads in the field of artificial intelligence is likely to become the leader of the world. And what should we make of the AI system that claims to be able to infer someone's sexual orientation using facial recognition technology? How should we deal with this kind of new technology?
It is therefore sensible to take a close look at a potentially powerful technology such as artificial intelligence, examining both its positive and its less positive sides. Here we go.
1. A lack of transparency
Many AI systems are built around so-called neural networks: complex systems of interconnected nodes. Such systems are barely capable of explaining the 'motivation' behind their decisions: you only see the input and the output, and what happens in between is far too complex to interpret. Nevertheless, where military or medical decisions are involved, it is important to be able to trace which specific data resulted in which specific decisions. What underlying reasoning produced the output? What data was used to train the model? How does the model 'think'? We are currently largely in the dark about this.
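To make this concrete, here is a minimal sketch, assuming Python with scikit-learn installed; the dataset, network size and all numbers are invented for illustration. You can query a trained network for its outputs, but its internals are just matrices of learned weights:

```python
# Minimal sketch of the "black box" problem (illustrative data and model).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A synthetic dataset stands in for real (e.g. medical) decision data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X, y)

# We can observe the output for any input...
print(model.predict(X[:1]))        # a class label
print(model.predict_proba(X[:1]))  # class probabilities

# ...but the "reasoning" consists of thousands of learned weights,
# with no human-readable motivation attached to the decision.
n_weights = sum(w.size for w in model.coefs_)
print(f"{n_weights} weights, none of which explains the decision")
```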
2. Biased algorithms
When we feed our algorithms data sets that contain biased data, the system will logically confirm our biases. There are already many examples of systems that disadvantage ethnic minorities more than the white population. After all, when a system is fed discriminatory data, it will produce discriminatory output. Garbage in, garbage out. And because the output comes from a computer, the answer tends to be assumed to be true. (This is the so-called automation bias: the human tendency to take suggestions from automated decision-making systems more seriously and to ignore contradictory information from people, even when that information is correct.) And when discriminatory systems are fed new discriminatory data (because that is what the computer says), it turns into a self-fulfilling prophecy. And remember, biases are often a blind spot.
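A toy sketch of 'garbage in, garbage out' (the groups, numbers and effect sizes below are entirely invented for illustration): a model trained on historically biased decisions faithfully reproduces that bias for new, equally qualified cases.

```python
# Toy illustration of "garbage in, garbage out": all data below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)   # 0 = majority, 1 = minority (synthetic)
skill = rng.normal(0, 1, n)     # the attribute that *should* drive decisions

# Historical decisions were biased: at equal skill, the minority
# group was approved less often.
approved = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)

# The trained model reproduces the historical bias for fresh,
# equally skilled applicants:
fresh_skill = rng.normal(0, 1, 500)
majority = np.column_stack([np.zeros(500), fresh_skill])
minority = np.column_stack([np.ones(500), fresh_skill])
print("approval rate, majority:", model.predict(majority).mean())
print("approval rate, minority:", model.predict(minority).mean())
```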
Companies still have too little expertise at their disposal to properly assess these data sets and to filter out assumptions and biased data. The most vulnerable groups are disadvantaged by these systems even more than usual, and inequality will increase. In the worst-case scenario, algorithms will choose the winners and the losers. It is like the talking Sorting Hat from the Harry Potter films: nobody knows exactly what happens inside, but you simply have to accept the outcome as the truth.

Nonsense? Defendants have already been sentenced with the help of a non-transparent and technically flawed system, while predictive policing disadvantages the most vulnerable in society.
And how can we ascertain that our data sets (on which we rely to an ever greater extent) are not contaminated on purpose by hostile governments or other parties with malicious intent?
In short, we must avoid drifting ever further into a 'computer says no' society, in which people rely too heavily on the output of smart systems without knowing how the algorithms and data arrived at their results.
3. Liability for actions
A great deal is still unclear about the legal aspects of systems that become increasingly smart. What happens to liability when an AI system makes an error? Do we judge it as we would judge a human? Who is responsible when systems become increasingly self-learning and autonomous? Can a company still be held accountable for an algorithm that has learned by itself, determines its own course and, based on massive amounts of data, draws its own conclusions in order to reach decisions? Do we accept an error margin for AI machines, even if it sometimes has fatal consequences?
4. Too big a mandate
The more smart systems we use, the more we run into the issue of scope. How broad is the mandate we give our smart virtual assistants? What are and aren't they allowed to decide for us? Do we stretch the autonomy of smart systems ever further, or should we stay in control at all costs, as the European Union prefers? What do and don't we allow smart systems to determine and execute without human intervention? And should a preview function perhaps be built into smart AI systems as standard? The risk exists that we transfer too much autonomy before the technology and its preconditions are fully developed, and that over time we lose sight of which tasks we have outsourced and why. Indeed, there is a risk that we increasingly end up in a world we no longer understand. We must not lose sight of our interpersonal empathy and solidarity either: there is a real risk that we leave difficult decisions (e.g. dismissing an employee) to 'smart' machines too easily, simply because we find them too difficult ourselves.
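What such a 'preview function' could look like in practice is sketched below. This is a hypothetical example; the Action class, risk scores and threshold are invented. Low-impact actions run autonomously, while high-impact ones require explicit human approval:

```python
# Hypothetical sketch of a "preview function": the Action class,
# risk scores and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk: float  # 0.0 (harmless) .. 1.0 (high impact)

RISK_THRESHOLD = 0.3

def execute(action: Action) -> None:
    print(f"Executing: {action.description}")

def run_with_preview(action: Action) -> None:
    if action.risk < RISK_THRESHOLD:
        execute(action)  # low-impact actions run autonomously
        return
    # High-impact actions are only previewed; a human stays in control.
    answer = input(f"Assistant proposes: {action.description!r}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("Action cancelled by human reviewer.")

run_with_preview(Action("reorder printer paper", risk=0.1))
run_with_preview(Action("dismiss an employee", risk=0.9))
```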
5. Too little privacy
We create 2.5 quintillion bytes of data each day (that is 2.5 million terabytes, where 1 terabyte is 1,000 gigabytes). Of all digital data in the world, 90 per cent was created in the last two years. A company needs substantial amounts of clean data for its smart systems to function properly: apart from high-quality algorithms, the strength of an AI system lies in having high-quality data sets at its disposal. Companies involved in artificial intelligence are becoming increasingly greedy when it comes to our data: it is never enough, and anything seems justified in pursuit of even better results. The risk, for example, is that companies build an ever more precise profile of us, and that these resources are also used for political purposes.
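A quick sanity check of those figures, assuming decimal units (1 TB = 10^12 bytes):

```python
# Sanity check of the figures above, assuming decimal units (1 TB = 10**12 bytes).
bytes_per_day = 2.5e18                      # 2.5 quintillion bytes
terabytes_per_day = bytes_per_day / 10**12
print(f"{terabytes_per_day:,.0f} TB")       # 2,500,000 TB = 2.5 million terabytes
```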
The result is that our privacy is being eroded. And even when we do protect our personal data, these companies simply target 'lookalike audiences': people whose profiles closely resemble ours. Meanwhile our data is resold en masse, with ever less awareness of who receives it or for what purposes it is used. Data is the lubricating oil of AI systems, and our privacy is at stake in any event.
And not unimportantly: technology is getting eyes to see. Cameras can easily be fitted with facial recognition software, and our gender, age, ethnicity and state of mind can be measured with smart software. This is not the future; this type of software already exists, and much of it is readily available as open source. A dynamic advertising billboard in the Dutch city of Utrecht was switched off after the spy software installed on it caused public outrage. Face, voice, behaviour and gesture analysis produce ever sharper profiles, and smart cameras allow for real-time profiling. Smart systems can determine our state of mind better than our partner or family members can. The government is happy, businesses are happy. Bye-bye privacy.
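How low the barrier has become is easy to demonstrate. The sketch below uses the open-source OpenCV library; the image filename is illustrative, and note that this shows only face detection, one building block of the kind of analysis described above:

```python
# Face detection with the open-source OpenCV library
# (pip install opencv-python). The image filename is illustrative.
# Note: this only *detects* faces; estimating age, emotion, etc.
# requires additional models on top.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("street_scene.jpg")              # any local photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

print(f"Detected {len(faces)} face(s)")
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("street_scene_annotated.jpg", img)
```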
A number of these options have already been introduced in China. Some police officers wear glasses with facial recognition technology connected to a database containing facial images of thousands of 'suspects'. Bear in mind that in China you are easily labelled a suspect for making certain political statements in public. A so-called social credit system also already exists there: a rating system in which you are judged on the basis of your behaviour, and people with higher scores receive privileges. In addition, the country has a very extensive network of surveillance cameras with image and facial recognition software.