In the back halls of any academic medical center, you’ll find notice boards covered with an assortment of flyers offering money to those willing to participate in medical studies. As a cash-strapped graduate student, I would scan these notices, looking for studies that offered a few hundred dollars in exchange for doing something that didn’t sound too unbearable. I wanted to supplement my meager graduate school stipend, and I had a few resources I could exchange for cash: my blood, my tissues, my immune system, and my tolerance for pain.
The ethics of paying people to volunteer as medical research subjects have been a source of controversy for decades. The primary concern is that money often warps people’s judgment when it comes to assessing risk. Bioethicists worry that a cash offer is an “undue inducement” that will cause low-income people to ignore the risks of a study and compromise their ability to give fully informed consent, while researchers and institutional ethics committees may come to see these payments as a benefit that justifies riskier studies.
While bioethicists debate whether and how much human subjects should be paid, academic medical centers and pharmaceutical companies have generally forged ahead and done whatever’s been needed to recruit enough participants for their studies. The results have sometimes been ugly, as when the pharmaceutical company Eli Lilly paid homeless alcoholics to participate in safety testing of new drugs.
In a 2011 paper, California bioethicists Ari VanderWalde and Seth Kurzban reviewed the contentious ethical history of paying human subjects and took stock of where things stand now. They argued that the lack of consensus among bioethicists has resulted in an ad hoc, patchwork system that leaves too much up to the individual researchers, who may have good intentions but are “likely to subconsciously exploit, coerce, and put subject health at risk.”
They concluded that, despite disagreement among bioethicists, research institutions are not going to stop using money as a recruitment tool any time soon, and there seems to be no shortage of people willing to sell their bodies to science—human subjects even have their own jobzine, Guinea Pig Zero. The main task now is to work out the ethical framework that should guide these payments, so that subjects are compensated fairly for their efforts, and so that financial incentives don’t hinder their ability to make an informed assessment of the study risks.
UNFORTUNATELY, THE RECENT EXPLOSION in the number of human genetic studies means that an informed risk assessment may no longer be possible. Driven by technological advances in DNA analysis, these studies generate an insatiable demand for human subjects, which has led to the rise of so-called biobanks: repositories that store hundreds of thousands of tissue and genetic samples for use in future research. For statistical reasons, human genetic studies often require a very large number of subjects, which makes it too expensive and time-consuming to recruit subjects from scratch for every study. Instead, researchers share samples and data through biobanks and databases like the National Institutes of Health’s database of Genotypes and Phenotypes.
Human subjects are now frequently asked not just to consent to the procedures and risks of a single study, but also to broadly consent to the use of their samples in completely unrelated future studies by different research teams. This kind of broad consent is a new beast on the ethical landscape: people who consent to participate in unspecified future studies can’t possibly know in advance what risks they’re exposing themselves to.
The risks of these future studies obviously aren’t the kind of physical risk that, say, a drug trial involves, but they can still be serious. One particularly extensive genetic study, the Personal Genome Project, lists what your genetic data could expose you to: “Anyone with sufficient knowledge” could take that data and “infer paternity or other features of the participant’s genealogy,” “claim relatedness to criminals or incriminate relatives,” or even “make synthetic DNA corresponding to the participant and plant it at a crime scene.”
So what ethical principles should guide payments to human research subjects whose samples might be biobanked, freely shared among researchers, and used in future genetic studies? It’s an issue that has hardly been addressed yet, perhaps because, as some researchers have remarked, “the empirical facts of the genomic science change too fast for the reflections of ethics to keep pace with.” But there is an acute need for a consistent ethical framework that can help researchers and institutional ethics committees resolve the tension between the need to share scientific resources and the responsibility to protect the people who put themselves at risk for medical research. If bioethicists and funding agencies don’t get out ahead of this issue, individual researchers and institutions will resolve it in their own ways, establishing practices that will be hard to change later. In the absence of a uniform framework to guide how we use money as a recruiting tool, we’ve already seen major ethical lapses in drug trials and traditional medical studies. With genetic studies, we risk ethical lapses on an even grander scale.