health + tech in law = mixing chalk + cheese
Intuitively, medical treatment that is tailored to our individual physical needs makes sense. For example, the best treatment for me for a particular condition may not be the best for you.
This is the goal of personalized medicine. The scope of personalized medicine can be fuzzy, but the Centers for Disease Control and Prevention (CDC) states:
Precision medicine, sometimes called personalized medicine, is an approach for protecting health and treating disease that takes into account a person’s genes, behaviors, and environment. Interventions are tailored to individuals or groups, rather than using a one-size-fits-all approach in which everyone receives the same care.
I think we can expect the expansion and improvement of personalized medicine in the age of Big Data. Advances in data analytics and AI will make it easier to draw correlations between a person’s health and other information, and will provide better insights into treatments.
I also think that the use of these technologies in developing personalized medicine gives rise to a number of ethical, if not legal, issues. In this post, I’m going to discuss the collection, ownership and use of de-identified or anonymized data.
Data analytics and artificial intelligence (AI) need large amounts of data to produce accurate and useful results. Clearly, the best source of this information is health care providers. Further, health information obtained while seeking medical advice is becoming more easily obtainable with the widespread adoption of electronic health records (EHRs).
Privacy laws generally prohibit the use of information about identifiable individuals without consent. But one may have fewer (or no) rights to that information once it has been sufficiently de-identified/anonymized and aggregated with other people’s data. The key requirement is that anonymized data cannot be traced back to the associated individual(s). For example, in Ontario, the Information and Privacy Commissioner (IPC) has discussed the use of anonymized data, including in the context of health information. According to this document issued by the IPC:
Health information that is de-identified such that an individual cannot be re-identified, or that it is not reasonably foreseeable that an individual can be re-identified, would fall outside the scope of PHIPA.
From my reading of the IPC’s document, the question of whether consent is needed to anonymize an individual’s data in the first place falls into a grey area. However, the document seems to suggest that consent is, at least in some cases, not needed. The IPC also notes that obtaining consent can be difficult (such as for large data sets), and that requiring it can sometimes alter the data set (those who consent may differ from those who do not).
Similarly, in Europe, the important part seems to be ensuring that data cannot be traced back to the individual. Consent for anonymization may not be needed as long as certain requirements are met.
Arguably, this means that those who collect healthcare data can do what they want with the data, without falling afoul of privacy laws, as long as the proper anonymization/de-identification procedures are followed. As a result, the fate of patients’ data is mainly dictated by the contractual terms their healthcare providers have with their suppliers.
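To make the idea of “proper” de-identification a little more concrete, here is a minimal sketch of one common approach: drop direct identifiers, generalize quasi-identifiers (age, location), and check k-anonymity, i.e. that every combination of quasi-identifier values is shared by at least k records. This is my own toy illustration with invented records, not any regulator’s prescribed method.

```python
# Toy de-identification sketch: drop direct identifiers, generalize
# quasi-identifiers, then check k-anonymity. Records are invented.
from collections import Counter

records = [
    {"name": "A. Patel", "age": 34, "postal": "M5V 2T6", "dx": "asthma"},
    {"name": "B. Chen",  "age": 37, "postal": "M5V 1J1", "dx": "diabetes"},
    {"name": "C. Singh", "age": 52, "postal": "M4C 5K9", "dx": "asthma"},
    {"name": "D. Wong",  "age": 58, "postal": "M4C 1A1", "dx": "diabetes"},
]

def deidentify(rec):
    # Remove the direct identifier (name) entirely; generalize the
    # quasi-identifiers: age to a 10-year band, postal code to its
    # first three characters.
    decade = (rec["age"] // 10) * 10
    return {
        "age_band": f"{decade}-{decade + 9}",
        "region": rec["postal"][:3],
        "dx": rec["dx"],
    }

def is_k_anonymous(rows, quasi, k):
    # Every combination of quasi-identifier values must appear >= k times,
    # so no record is unique on those attributes.
    counts = Counter(tuple(r[q] for q in quasi) for r in rows)
    return all(c >= k for c in counts.values())

anon = [deidentify(r) for r in records]
print(is_k_anonymous(anon, ["age_band", "region"], k=2))  # → True
```

Of course, real de-identification is far harder than this sketch suggests: the risk of re-identification depends on what other data sets an attacker can link against, which is precisely why the legal tests above are framed in terms of reasonable foreseeability rather than a fixed procedure.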
Health data sets are important for developing personalized medicine because they can provide valuable insights into a real patient population. In contrast, traditional clinical studies have been conducted on relatively narrowly defined groups of patients. As such, they may not accurately reflect the impact of a treatment in a real patient population. For example, the results may include a bias relating to gender, ethnicity, etc., or fail to account for the potential effect of other health conditions that may be common in the target population.
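That sampling bias can be shown with a toy calculation (all numbers invented for illustration): if a treatment’s effect differs between two subgroups and a trial enrolls only one of them, the trial’s estimate misstates the population-average effect that real-world data would reveal.

```python
# Invented numbers: the treatment benefit differs between two subgroups.
effect = {"group_a": 8.0, "group_b": 2.0}          # assumed benefit per subgroup
population_share = {"group_a": 0.3, "group_b": 0.7}  # assumed population mix

# A trial that enrolls only group A estimates the effect as:
trial_estimate = effect["group_a"]  # 8.0

# The population-weighted effect seen in real-world data:
real_world = sum(effect[g] * population_share[g] for g in effect)  # 3.8

print(trial_estimate, real_world)  # 8.0 vs 3.8 — the trial overstates benefit
```

The same arithmetic runs in the other direction, too: a trial restricted to the low-response subgroup would understate the benefit for everyone else.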
However, from other, non-health areas of the tech industry, we also know there is commercial value in these data. Indeed, the Star recently reported the sale of anonymous data by an EHR company to the health data company IQVIA. According to an article elsewhere, IQVIA also anonymizes health record information and sells the data to pharmaceutical companies. Given what I have stated above, it will be interesting to see the results of the investigation.
I think the IQVIA case highlights the quandary of health data. On the one hand, probably everyone agrees that there is benefit in allowing the use of de-identified/anonymized data for research to improve healthcare. The pharmaceutical companies obtaining the data from IQVIA may well be using them to generate real-world evidence of efficacy and safety.
On the other hand, once a data set has been transferred to a private company, it may be used for purposes unrelated to research. For example, pharmaceutical companies may well be using the data for strategic marketing.
In addition, the loss of control of one’s data for another party’s profit, even if anonymized, may seem unfair. One may perhaps even draw an analogy to the facts in the US case Moore v. Regents of the University of California. In that case, the plaintiff consented to having his spleen removed for medical reasons; subsequently, his doctor (with the other defendants) conducted research on his cells and commercialized a cell line based on that research. In the end, the Supreme Court of California found that Moore did not have the right to own the cell line (no conversion). However, the Court found that the consent to the spleen removal did not extend to the subsequent research. This can be contrasted with the idea that consent is (probably) not needed to anonymize a person’s data.
Obviously, one distinction between the Moore case and the health data set example above is that Moore involved just one individual, as opposed to a large number of individuals. Another distinction is the effect on the “material”: the spleen cells would stay the same regardless of whether Moore consented, while, as noted above, the data set could change if consent were required. A question I think we should ask ourselves is whether these distinctions (or others) justify our acceptance of “giving” our data to others.
Finally, the control and use of the results of a data analysis or AI processing of the data may also give rise to “meta-privacy” concerns. What I mean by “meta-privacy” is that those results may allow inferences to be made about an individual: facts that the individual may never have disclosed to anyone (or may not even be aware of themselves). These inferences can be powerful tools in the delivery of personalized medicine, such as predicting the best course of treatment given a patient’s individual circumstances.
However, these inferences may also reveal to others things that a person may not wish to be revealed. For example, recall the reaction when Target exposed a teen’s pregnancy to her family through flyers targeted at pregnant women. The teen had never disclosed the fact that she was pregnant to Target; Target sent her the flyer based on an inference drawn from analyzing her purchases.
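Stripped to its essentials, that kind of inference can be sketched as a simple scoring rule over a purchase history. Everything here is invented for illustration (the items, the weights, the threshold); the point is only that the attribute being inferred never appears in the input data.

```python
# Toy inference of an undisclosed attribute from purchases alone.
# Item weights are invented log-odds-style scores, not real model output.
weights = {
    "unscented lotion": 1.2,
    "vitamin supplements": 0.8,
    "cotton balls": 0.5,
    "beer": -1.0,
}

def score(basket):
    # Sum the per-item scores; unknown items contribute nothing.
    # A higher total means a stronger inference of the attribute.
    return sum(weights.get(item, 0.0) for item in basket)

shopper = ["unscented lotion", "vitamin supplements", "cotton balls"]
print(score(shopper))  # → 2.5, above a hypothetical flagging threshold
```

Note that nothing in `shopper` states the inferred fact; the “disclosure” is manufactured entirely by the model, which is what makes this a meta-privacy problem rather than an ordinary data-sharing one.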
As another example, the Genetic Non-Discrimination Act received Royal Assent just over two years ago, on May 4, 2017. This Act was passed because of the fear that insurance companies would ask for genetic testing results and then use those results to discriminate against people based on inferred disease or health-condition risk. I think, instinctively, there is a certain “ick” factor to allowing discrimination based on an inference from someone’s personal characteristics. However, insurance companies do this all the time. Consider that premiums for car insurance have traditionally depended on statistical analyses of factors such as age, sex, and marital status (and, yes, this is still a thing in Ontario, though not in all provinces). So perhaps there is greater concern when an inference is related to health.
In conclusion, personalized medicine in the age of Big Data can mean great things in health care. But, as a society, we should consider whether we want or need controls on the use of anonymized/de-identified health data.