B. Online Health Services: GDPR, Profiling and AI

The WP29 has published a very important guideline on Automated Decision-Making and Profiling (ARTICLE 29 DATA PROTECTION WORKING PARTY, 2018), thus providing an official interpretation of many of the issues regarding AI and the GDPR. Most AI applications depend either on automated decisions (as their desired output, even when using a rules-based system rather than Machine Learning-type algorithms) or on the building of a personal profile – profiling – which allows predictions to be made for a particular individual based on statistical inferences from similar sub-groups/clusters or cohorts of individuals. Article 22 is therefore pivotal to understanding the use of AI technology in health, especially for medical-device-type purposes. This is the case in a triage setting, where personal data is processed by AI-based software (hence using a more or less sophisticated algorithm) that creates a profile of the patient, which is then compared with other data to suggest or recommend a certain triage outcome. In summary, Article 22 provides that:

(i) as a rule, there is a general prohibition on fully automated individual decision-making, including profiling that has a legal or similarly significant effect;

(ii) there are exceptions to the rule;

(iii) where one of these exceptions applies, there must be measures in place to safeguard the data subject’s rights and freedoms and legitimate interests.

This prohibition applies only when the processing can have a legal effect on, or similarly significantly affect, someone. The guideline details the interpretation of “legal effect” and of “similarly significant”. It states that for data processing to significantly affect someone, the effects of the processing must be sufficiently great or important to be worthy of attention. In other words, the decision must have the potential to:

1. significantly affect the circumstances, behaviour or choices of the individuals concerned;

2. have a prolonged or permanent impact on the data subject; or

3. at its most extreme, lead to the exclusion or discrimination of individuals.

If we take healthcare triage (an online system that decides whether or not to redirect patients to a tele-health service or a hospital service), point 1 is applicable; in fact, the guideline gives precisely this example of a significant decision, namely one that affects someone’s access to health services. Equally, points 2 and 3 could be applicable, since a triage decision can have a prolonged or permanent impact on a patient, as it determines the timeliness with which he/she will receive healthcare, and it can also lead to the exclusion or discrimination of one individual versus others who would otherwise be treated equally. There is therefore little doubt that triage profiling meets the Article 22 criteria; nevertheless, AI may still be used provided a lawful exception is invoked. The exceptions include:

1. necessary for entering into, or the performance of, a contract;

2. authorised by Union or Member State law to which the controller is subject, and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or

3. based on the data subject’s explicit consent.

While the first exception does not seem relevant to the situation of online health, the third would apparently be possible, yet problematic to implement because obtaining explicit consent to a “black-box” system, such as most AI-based systems (more on thread 3), undermines some of the principles of explicit consent. Therefore, it seems that only under Union or Member State law, and with details on the “suitable” safeguard measures, would profiling using health data be possible. In any case, “appropriate safeguards” are required, and these include:

1. the right to be informed (addressed in Articles 13 and 14 – specifically, meaningful information about the logic involved, as well as the significance and envisaged consequences for the data subject);

2. the right to obtain human intervention; and

3. the right to challenge the decision (addressed in Article 22(3)).


B.1. If two different national laws exist regarding these matters – the law of the location of the company hosting the AI-based online health service and the law of the user’s location – which should apply?

B.2 How “deep” can human intervention go, if and when that right is invoked? Intervention only to explain? And in that case, how can a human explain a truly AI-based system? Or, moreover, can the intervention extend to modifying the prediction outcomes, or the data variables fed to the mathematical algorithmic model, so that a different outcome can be achieved?
