This blog series (A, B, C) is part of my assignment for the Internet Law course with Prof. Cedric Manara. It aims to collect your opinions, views, questions and suggestions regarding the use of AI-based internet health services, in particular the risks to the legally protected interests of citizens (the patients using the services), namely rights related to personal data usage. An introduction/context text is presented, followed by a list of reflection questions; feel free to blog and respond…
The 2016 General Data Protection Regulation (Regulation (EU) 2016/679 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation – GDPR), 2016) introduces a new set of rules, with stronger enforcement for health data, classified as a special category of personal data (Article 9), especially obligations in case of data security breaches (Pinheiro, 2018). Artificial Intelligence and the protection of personal data are intertwined: Artificial Intelligence applications are therefore, in many ways, subject to the General Data Protection Regulation.
The Regulation applies when the controller or processor is established in the European Union (EU) or when the processing activities relate to data subjects in the EU (GDPR, Article 3). It therefore clearly applies to the public health sector in the EU. Controllers are subject to the principle of accountability (GDPR, Article 24). In practice, the controller shall “implement appropriate technical and organisational measures” to ensure compliance with the Regulation’s requirements (e.g. encryption or pseudonymisation). These measures are determined on a case-by-case basis, depending on the type of business, the number of data subjects, the type of data processed and so forth. This means that any lack of the necessary diligence in data protection is unlawful under the GDPR, regardless of national law, given the direct applicability of a regulation. The appropriate measures must also be determined by carrying out a data protection impact assessment when the processing “is likely to result in a high risk to the rights and freedoms of natural persons”, having regard to the nature, scope, context and purposes of the processing (GDPR, Article 35). The Article 29 Working Party (WP29) published guidelines on data protection impact assessments on 4 October 2017. This new obligation under the Regulation reinforces the accountability of data controllers.
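To make the idea of “appropriate technical measures” more concrete, here is a minimal, hypothetical sketch of pseudonymisation in Python: a keyed hash replaces the direct patient identifier, so records remain linkable for analysis without exposing identity. The key name, record fields and key-management details are assumptions for illustration, not a prescribed implementation.

```python
import hmac
import hashlib

# Hypothetical pseudonymisation sketch: HMAC-SHA256 turns a direct
# identifier into a stable pseudonym. The secret key is the "additional
# information" and must be stored separately from the pseudonymised data.
SECRET_KEY = b"keep-this-key-in-a-separate-store"  # assumption: real key management elsewhere

def pseudonymise(patient_id: str) -> str:
    """Return a stable pseudonym for patient_id; the same input always
    maps to the same output, so records can still be linked."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"patient_id": "PT-12345", "diagnosis": "J06.9"}
safe_record = {"pseudonym": pseudonymise(record["patient_id"]),
               "diagnosis": record["diagnosis"]}  # direct identifier removed
```

Because the pseudonym is stable, analyses across records remain possible, while re-identification would require access to the separately stored key.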
In practice, data controllers have the responsibility to adapt their procedures to conform to the Regulation, and may have to incorporate or modify their organisational and technical measures accordingly. If this is evident for older technologies like EHRs, it is perhaps even more so with regard to AI-based technologies, because some of the principles within the GDPR are particularly challenging for the very nature of AI technology, and because explicit legal grounds are needed for the lawful use of health data, especially for machine-made decisions (including profiling and decisions based on profiling).
Article 5 of the GDPR lists the principles relating to the processing of personal data. It holds, among other things, that personal data must be processed in a transparent manner in relation to the data subject. It also establishes the principle of data minimization, under which only “adequate, relevant and limited” personal data can be processed in relation to the purposes of the processing. However, this principle seems at odds with the essence of Artificial Intelligence, which typically relies on large volumes of data. Another aspect to consider is the right of data subjects “not to be subject to a decision based solely on automated processing, including profiling” (GDPR, Article 22); AI technologies, as automated processes, are directly concerned.
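As a rough illustration of how an online health service might take Article 22 into account, here is a hypothetical human-in-the-loop sketch: outputs with a potentially significant effect are never released automatically but queued for clinician review. The class, field names and the "routine"/"urgent" labels are invented for this example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Prediction:
    patient_pseudonym: str
    triage_level: str   # hypothetical labels: "routine" or "urgent"
    confidence: float

def route(prediction: Prediction, review_queue: List[Prediction]) -> Optional[str]:
    """Release only outputs without a significant effect; everything else
    goes to a human reviewer, so no significant decision is based solely
    on automated processing."""
    if prediction.triage_level == "routine":
        return prediction.triage_level        # informational output only
    review_queue.append(prediction)           # significant effect: human review
    return None

queue: List[Prediction] = []
released = route(Prediction("ab12cd", "urgent", 0.93), queue)  # queued, not auto-decided
```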
QUESTIONS FOR REFLECTION:
A.1 How, before an AI system has used and processed the data of many subjects, do we know which data can be ignored (i.e. not used for the model), thereby following the principle of data minimization?
A.2 Can AI algorithms be designed and integrated into online AI-based health services in such a way that, as soon as a variable is found to be irrelevant for the predictions, the system (online service) stops asking users for that information?
A.3 Conversely, in a health crisis such as COVID-19, could new “relevant” variables be “automatically” included in online questionnaires/chatbots as soon as epidemiological circumstances make them relevant?
A.4 Would it be possible to have AI systems that adapt to the GDPR principles? For example, could such “principles” be coded into the internal design of the mathematical algorithms, namely the principle of minimization, restricting processing to “adequate, relevant and limited” personal data?