Dutch DPA to local authorities: ‘Be hesitant when using artificial intelligence’

The Autoriteit Persoonsgegevens (AP), the Dutch privacy regulator, is asking local municipalities to be vigilant when using artificial intelligence (AI).
Artificial intelligence is developing rapidly, but as a technology it is still in its infancy. There is plenty of experimentation going on: from a ‘rat race’ in generative AI among Big Tech companies to AI-based behavioral recognition systems in supermarkets and gyms.
However, the management of AI risks is not keeping pace. The Dutch data protection authority (DPA) states it is difficult to assess whether AI applications are sufficiently controlled.
Furthermore, more incidents are likely to occur as AI becomes increasingly intertwined with society, and those incidents could have major consequences for citizens. This means local authorities and businesses should be vigilant and exercise extra care when implementing AI technology.
“These remain stormy times. This is also understandable given the emergence of a new system technology that offers many opportunities, for example in medical treatments and inclusive services. But the risks are also known,” Aleid Wolfsen, chairman of the AP, points out in a new report that outlines the opportunities and risks of AI for Dutch society.
The regulator studied, among other things, the risks of AI in providing online information. Through the AI systems they use, social media platforms and search engines control what news and information people get to see. In addition, the rise of generative AI entails a high risk of spreading misinformation and disinformation.
“Due to AI applications that generate lifelike text, images, video and audio, people can no longer trust that what they see or hear is actually correct,” the report states.
Furthermore, the Dutch DPA calls for more democratic control over AI. A survey shows that municipalities and city council members have only a limited overview and knowledge of the AI systems they use. This underscores the need to consider a national AI strategy.
Wolfsen is pleased that politicians in the Netherlands are focusing their efforts on implementing AI regulation on top of data protection, consumer privacy protection and cybersecurity.
“There is an increasing realization that responsible use of AI is labor-intensive and demands a lot from an organization. The warning we are issuing is that as long as organizations still have doubts about the extent to which they are aware of the risks of AI, they should be cautious in using AI,” he says.