AI involves huge amounts of (personal) data
Businesses increasingly use chatbots and Artificial Intelligence (AI) to perform automated tasks that used to be carried out by humans. Typical AI applications include image, voice and facial recognition, customer communication (chatbots) and automated decision-making. AI applications need to be “fed” and “trained” with huge amounts of (personal) data, which raises privacy concerns. This blog addresses the main question: How does AI relate to Privacy Law?
Firstly: how ‘intelligent’ is AI really?
Despite the general perception that AI algorithms are somehow “neutral” and “objective”, they can reproduce and even amplify existing, hidden bias or gaps in data.
The main reason is that AI applications depend on the input of very large amounts of good-quality data in order to ‘recognize’ a specific image or pattern; if the training data are skewed or incomplete, the outcomes will be skewed or incomplete as well.
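To make this concrete, the sketch below is a purely hypothetical, simplified illustration in Python (not taken from any real system): a model that merely learns frequencies from skewed historical data will reproduce that skew in its scores.

```python
# Hypothetical illustration: a naive model that simply learns frequencies
# from skewed historical data will reproduce that skew.
from collections import Counter

# Toy "historical" hiring records: (gender, hired). Most past hires are men,
# purely because of who applied and was selected in the past, not skill.
history = (
    [("m", True)] * 90 + [("f", True)] * 10 +
    [("m", False)] * 60 + [("f", False)] * 40
)

def hire_rate(records, gender):
    """Share of past records with this gender that were hired."""
    subset = [hired for g, hired in records if g == gender]
    return sum(subset) / len(subset)

# A frequency-based "model" scores new male candidates higher,
# simply because the training data are imbalanced.
print("learned score for men:  ", hire_rate(history, "m"))   # 0.6
print("learned score for women:", hire_rate(history, "f"))   # 0.2
```

The point is not the code itself but the mechanism: the imbalance in the input data becomes the ‘knowledge’ of the model.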
A practical example:
An image recognition tool for a supermarket required the input of at least 20,000,000 images to recognize only 20,000 supermarket products. And still there were errors: everything with a square shape was ‘seen’ as a bag of nuts, even if it was a box of breakfast cereals. And if the image of a product differed slightly, for example because of a small sticker or a different shape, the AI tool could not recognize the product at all.
No one will lose sleep over a mismatched product in a supermarket. But what if AI is used to make important decisions about people, such as recruitment decisions? In that case, a seemingly small error in an AI tool can become a serious problem.
Cases where AI and Chatbots failed
There have been several cases in which the use of AI led to surprising and negative outcomes. Here are three examples in which the use of AI unintentionally led to discrimination:
Example 1: AI beauty contest turned ugly
The first international beauty contest judged by “machines”, Beauty.AI, was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After more than 6,000 people from over 100 countries submitted photos, the “robot juries” selected the winners. The creators were surprised to see that there was one factor linking the winners: the robots did not like people with dark skin. This was despite the fact that many people of color had submitted photos.
Example 2: Chatbot Tay was a bad boy
Microsoft released the “millennial” chatbot Tay in 2016. The AI behind the chatbot was designed to get smarter the more millennials engaged with it. Instead, it quickly began using racist language and even promoted neo-Nazi views on Twitter.
Example 3: AI recruiter did not hire women
Amazon built a recruitment tool to review job applicants’ resumes, with the aim of automating the search for top talent. However, Amazon realized that its AI recruiter was not rating candidates in a gender-neutral way: because most resumes in its training data came from men, the system taught itself that male candidates were preferable and penalized resumes that contained the word “women”.
Privacy issues to consider if you want to use AI
When AI is used to identify people or to make decisions about them, it requires the input of large amounts of personal data. This can be problematic from a privacy-law point of view, for at least the following reasons:
- Collecting large amounts of personal data is contrary to the principle of ‘data minimization’ in the European General Data Protection Regulation (GDPR), which requires that the personal data collected be limited to what is strictly necessary for the purpose (a simplified technical sketch follows after this list);
- Combining and analyzing large sets of personal data to gain new insights is contrary to the GDPR principle of ‘purpose limitation’, which means that personal data may only be used for pre-defined, specific purposes and not for other purposes;
- Certain sensitive personal data, such as data about someone’s ethnicity, health, religion or criminal background, may not be collected unless a legal exception applies, and the GDPR provides only very limited exceptions for processing such sensitive data;
- The GDPR requires that personal data be processed in a transparent and accurate manner; however, where AI systems are ‘trained’ on human-generated data, those data may unknowingly contain bias or prejudice;
- Individuals have the right to object to the use of their personal data for automated decision-making (including profiling), and have the right to request correction or deletion of their data.
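To illustrate the data-minimization point above in practical terms, here is a simplified, hypothetical sketch in Python of stripping and pseudonymizing records before they reach an AI pipeline. The field names and key handling are placeholders, and this is not a compliance recipe: what counts as “strictly necessary” must be assessed per project.

```python
# Minimal sketch of data minimization and pseudonymization before records
# enter an AI pipeline. Field names and the secret key are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"        # placeholder key management
NEEDED_FIELDS = {"years_of_experience", "education_level"}  # only what the model needs

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash (a pseudonym)."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def minimize(record: dict) -> dict:
    """Keep only the fields the model actually needs, plus a pseudonymous ID."""
    reduced = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    reduced["subject_id"] = pseudonymize(record["email"])
    return reduced

applicant = {
    "name": "Jane Doe", "email": "jane@example.com", "ethnicity": "...",
    "years_of_experience": 7, "education_level": "MSc",
}
print(minimize(applicant))  # name, email and ethnicity are no longer in the output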
Prevent these privacy risks with Privacy by Design
There are proactive ways to adjust AI algorithms to correct for bias, whether by improving the input data or by implementing filters, so that people of different ethnicities or genders receive equal treatment. One way to prevent privacy risks with AI and chatbots is to embed Privacy by Design from the start, in the design phase of your AI tool or chatbot.
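As a concrete, deliberately simplified example of what such a built-in check could look like, the hypothetical sketch below compares selection rates between groups and flags large gaps before an AI tool goes live. The 0.2 threshold and the group labels are illustrative assumptions, not legal or statistical guidance.

```python
# Minimal sketch of a pre-deployment fairness check: compare how often an AI
# tool selects candidates from different groups and flag large gaps.
def selection_rates(decisions):
    """decisions: list of (group, selected) tuples -> selection rate per group."""
    totals, selected = {}, {}
    for group, chosen in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def flag_disparity(decisions, max_gap=0.2):
    """True if selection rates between any two groups differ by more than max_gap."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap

outcomes = (
    [("group_a", True)] * 40 + [("group_a", False)] * 60 +
    [("group_b", True)] * 15 + [("group_b", False)] * 85
)
print(selection_rates(outcomes))  # {'group_a': 0.4, 'group_b': 0.15}
print(flag_disparity(outcomes))   # True -> investigate before deployment
```

A check like this does not replace a legal assessment, but running it during the design phase is one practical way to give Privacy by Design teeth.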
Do you want to know more about how to apply Privacy by Design, or do you need advice for your AI and/or Chatbot projects? Please check the Privacy Law service page or contact me.