Another issue worth examining when discussing the intersection between the General Data Protection Regulation (GDPR) and AI is the transparency requirement set out in Articles 12 and 13.

Article 12 of the GDPR, which safeguards the rights of data subjects, lays the foundation for transparent information and communication. It requires data controllers to provide the information referred to in Articles 13 and 14, and any communication under Articles 15 to 22 and 34, in a manner that is concise, transparent, intelligible and easily accessible, utilising clear and plain language. For clarity, a data controller, as defined in Article 4(7), is the party that determines the purposes and means of processing personal data. Notably, the same entity may act as a controller for some processing operations and as a processor for others.

Under Article 13(2)(f), the controller must, in the interest of transparency, inform the data subject of the existence of automated decision-making and provide meaningful information “about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject”. This requirement, however, creates a practical problem. As many AI practitioners are aware, AI systems frequently suffer from the “black box” problem: apart from the engineers who build them, the public has little insight into how these systems produce their outcomes. Moreover, the deep technical complexity of machine-learning models makes it difficult even for controllers to offer accessible explanations of the rationale underlying these systems.
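
To make the contrast concrete, the sketch below is a toy illustration, not a compliance tool: the dataset and feature names are invented for the example. It trains two models on the same data, a shallow decision tree whose decision rules can be printed in plain language, and a small neural network whose complete “logic” is nothing more than thousands of numeric weights.

```python
# Illustrative sketch only: contrasts a model whose "logic" can be stated
# in plain language with one whose logic is buried in numeric weights.
# The dataset and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for personal data used in an automated decision.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
features = ["age", "income", "tenure", "num_accounts"]  # hypothetical names

# A shallow decision tree: its decision rules can be printed verbatim,
# which maps naturally onto "meaningful information about the logic involved".
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=features))

# A small neural network: the only complete description of its "logic"
# is thousands of learned weights, which no data subject can read.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X, y)
print("neural net weight count:", sum(w.size for w in net.coefs_))
```

Post-hoc explanation techniques can approximate such a model’s behaviour, but an approximation of the logic is not the logic itself, and that gap is precisely the compliance difficulty at issue.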

This tension between the GDPR's transparency requirements and the inherent opacity of AI systems poses a substantial hurdle to full compliance with the regulation.

GDPR on AI - a series of posts written by Maria Mot