The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has expressed concern about how organizations that deploy generative artificial intelligence (AI) handle personal data. It is paying particular attention to apps aimed at young children that use this form of AI.

The AP will therefore take several actions in the coming period. As a first step, it has asked a tech company for clarity about how the chatbot integrated into its app, which is popular among children, works. The AP wants to establish how transparent the chatbot is about what it does with users' data.

Developments in AI, and in generative AI applications in particular, are moving fast, and this raises concern. The AP sees significant risks in offering applications such as AI chatbots to children, who may be insufficiently aware of those risks. It is important that children understand what happens to their chat messages and whether they are talking to a chatbot or a real person.

The AP has put several questions to the tech company, including how transparent the app is about data usage and whether sufficient thought has been given to how long collected data is retained. Based on the answers, the AP will decide whether further action is needed.

Various Actions

The AP previously announced that it would take several actions targeting organizations that use generative AI. During the summer, it sought clarification from software developer OpenAI about the chatbot ChatGPT, asking how the company handles personal data when training the system.

In addition, the AP participates in the ChatGPT Taskforce of the European Data Protection Board (EDPB), the cooperative body of European privacy supervisory authorities.

Generative AI

Generative AI can produce text or images in response to a prompt. These models are trained on extensive datasets, typically sourced from the internet, often collected through scraping.

The prompts that users enter into such tools can themselves become a source of data, and occasionally contain highly personal information. Someone who asks such a tool for advice on a marital dispute or a medical matter, for example, may share sensitive data. Generative AI tools therefore often involve the processing of personal data, and developers must comply with privacy regulations when building them.

The AP plans to release guidelines on scraping for businesses in the near future.