Storygram: Charles Piller’s “Failure to Report”
The Storygram series, in which professional writers annotate award-winning stories to illuminate what makes a great science story great, is a joint project of The Open Notebook and the Council for the Advancement of Science Writing. It is supported by a grant from the Gordon and Betty Moore Foundation.
Charles Piller is STAT’s West Coast editor and Natalia Bronshtein is STAT’s interactives editor. Together, they won an AAAS Kavli award in 2016 for the following story. This annotation was done by Roxanne Khamsi and is co-published at The Open Notebook.
Stanford University, Memorial Sloan Kettering Cancer Center, and other prestigious medical research institutions have flagrantly violated a federal law requiring public reporting of study results, depriving patients and doctors of complete data to gauge the safety and benefits of treatments, a STAT investigation has found. Piller’s piece starts off with a bang, and rightfully so. This finding—that some of the biggest offenders are our nation’s most esteemed institutions—grabs the reader by the collar because it is so unexpected. The lede sentence also nods to the impact that the violation has on patients, underscoring the human impact of this finding, which is important in a data-driven piece such as this. Lastly, it makes clear that the forthcoming information in the article is original and exclusive; it signals to the reader that the story will contain new revelations.
The violations have left gaping holes in a federal database used by millions of patients, their relatives, and medical professionals, often to compare the effectiveness and side effects of treatments for deadly diseases such as advanced breast cancer.
The worst offenders included four of the top 10 recipients of federal medical research funding from the National Institutes of Health: Stanford, the University of Pennsylvania, the University of Pittsburgh, and the University of California, San Diego. All disclosed research results late or not at all at least 95 percent of the time since reporting became mandatory in 2008. Numbers can help the reader grasp the enormity of the problem. Rather than just vaguely saying that violation of the law is widespread, this 95 percent figure gives the reader something firm to grip. The sentence also signals how long the regulations have been in place.
Failure to Report: A STAT investigation
What we found: Most research institutions — including leading universities and hospitals in addition to drug companies — routinely break a law that requires them to report the results of human studies of new treatments to the federal government’s ClinicalTrials.gov database.
Drug companies have long been castigated by lawmakers and advocacy groups for a lack of openness on research, and the investigation shows just how far individual firms have gone to skirt the disclosure law. This sentence telegraphs that the piece will explain how firms have not merely neglected the law but actively worked around it. It makes us wonder just how far they went and thus motivates us to keep reading. But while the industry generally performed poorly, major medical schools, teaching hospitals, and nonprofit groups did worse overall — many of them far worse.
The federal government has the power to impose fines on institutions that fail to disclose trial results, or suspend their research funding. It could have collected a whopping $25 billion from drug companies alone in the past seven years. But it has not levied a single fine. This paragraph made my jaw drop. Twenty-five billion dollars is an enormous amount of money. What’s great is that the story provides this hypothetical number. Piller also really drives home the lack of enforcement with a succinct sentence about how no money has been collected to date.
The STAT investigation is the first detailed review of how well individual research institutions have complied with the law. This is important—readers need to know what’s new. This clearly lays that out: The piece will name names (of individual institutions) for the first time.
The legislation was intended to ensure that findings from human testing of drugs and medical devices were quickly made public through the NIH website ClinicalTrials.gov. Its passage was driven by concerns that the pharma industry was hiding negative results to make treatments look better — claims that had been dramatically raised in a major lawsuit alleging that the manufacturer of the antidepressant Paxil concealed data that the drug caused suicidal thoughts among teens. Context is so important, and this provides it brilliantly. Here we begin to understand why this law exists in the first place. Giving readers this origin story is important and helps them connect the dots.
“GlaxoSmithKline was misstating the downside risks. We then asked the question, how can they get away with this?” Eliot Spitzer, who filed the 2004 suit when he was New York attorney general, said in an interview. Spitzer’s voice here helps flesh out the background to this story and explain why lawmakers instituted the regulations, and what they hoped to achieve. He’s also a well-known politician who has endured scandal himself, so people will be intrigued to know he was involved in this.
The STAT analysis shows that “someone at NIH is not picking up the phone to enforce the obligation to report,” Spitzer said. “Where NIH has the leverage through its funding to require disclosure, it’s a pretty easy pressure point.”
NIH Director Dr. Francis Collins acknowledged in a statement that the findings are “very troubling.” It’s important to include what the institutions in question have to say about these revelations. And there’s no better person to speak for NIH than Collins himself.
Clinical trial reporting lapses visualized
STAT examined data reporting for all institutions – federal agencies, universities, hospitals, nonprofits, and corporations – required to report results to ClinicalTrials.gov for at least 20 human experiments since 2008. OK, now we’re in the meat of the story. This is a good place to tell the reader the nuts and bolts of how the investigation was done. It’s important to include this methodology in the story. This explanation is direct and doesn’t get too technical. It provides just the right amount of detail. This visualization shows the first comparative analysis of their performance. Among the groups, just two companies provided trial results within the legal deadline more than half the time. Most – including the National Institutes of Health, which oversees the reporting system – violated the law the vast majority of the time.
CLICK HERE to read Roxanne Khamsi’s annotation of Natalia Bronshtein’s data visualization. There are a lot of dimensions to this graphic, which is what makes it so powerful. Piller told me that he and Bronshtein didn’t want to just include a chart in the piece ranking the worst offenders, because the truth was that a lot of institutions performed very poorly in terms of complying with the law. Bronshtein uses color very effectively here to quickly give the user a sense of how badly each institution and research sector did. And using a dark red to convey alarm works well. It’s also brilliant how she sorted the data so that readers can easily separate and discern the academic, corporate, and governmental institutions from one another. Just a quick glance shows how many long, dark-red slices are in the academic/nonprofit segment of the graphic. Bronshtein also wisely recognized that some users might just want to know about a particular institution, and gives them the option to search using a drop-down menu. Accommodating these different needs with one interactive graphic is a stroke of genius.
Doctors rely on robust reporting of data to see how effectively a drug or device performs and to understand the frequency of side effects such as suicidal thinking — crucial information for making treatment recommendations. Here the story expands on an earlier point and gives more detail about why compliance with the law matters to the general public. In a data-driven story like this, it’s important to remind the reader at various junctures in the piece that this violation of the law has a real effect on medical care. Researchers depend on prior results to plan future studies and to try to ensure the safety of volunteers who participate in trials.
Yet academic scientists have failed to disclose most results on time — within a year of a study being completed or terminated.
In interviews, researchers, university administrators, and hospital executives said they are not intentionally undermining the law. They said that they are simply too busy and lack administrative funding, and that they disclose some clinical trial results in medical journals and at conferences. Some experts voiced concerns that the researchers might also face pressure from corporate sponsors to delay reports — whether negative or positive — for commercial reasons. We know by now in the story that institutions are performing poorly in terms of compliance. The next logical question is why. This bit helps satisfy that curiosity. In doing so, it doesn’t leave the reader hanging. The reader is getting a payoff here—greater understanding of the issue and the forces at play.
Not much to boast about
A number of trials languishing on researchers’ “to do” piles involve life-and-death problems. I love the stark contrast in this sentence between the banal “‘to do’ piles” and striking “life-and-death problems.” The section opener ups the ante and gets the reader wondering again how this situation ever came to be. Piller is continuously piquing the reader’s curiosity.
Memorial Sloan Kettering failed to report data for two trials of the experimental anticancer drug ganetespib, made by Synta Pharmaceuticals. The results of tests involving breast and colorectal cancer patients showed serious adverse effects in 13 of 37 volunteers. They included heart, liver, and blood disorders, bowel and colon obstructions, and one death. In a data-driven story involving a lot of numbers, examples mean everything. This example is very powerful in that it involves not one but two trials of a drug that a prestigious institution failed to report in a timely way into the database and one in which the study included a death.
The New York City hospital said it delayed its reporting so that it could complete related medical journal articles. It rushed to submit the results to ClinicalTrials.gov only after being contacted by STAT — more than two years late for one of the studies.
Collette M. Houston, who directs clinical research operations for Memorial Sloan Kettering, said she believes the researchers did immediately report adverse events to Synta and the Food and Drug Administration. But a hospital spokesperson said records needed to verify if that happened were no longer accessible.
Houston said that Synta, which paid for the research, normally would alert investigators on other ongoing trials of the same drug about adverse events, so they would learn of the possible added risks to patients despite Memorial Sloan Kettering’s failure to report to ClinicalTrials.gov.
In another case, delayed reporting denied doctors timely information on treating breast cancer. Piller gives the reader another example. This drives home the important point that it’s not just a single incident, it’s a widespread issue. In 2009, the nonprofit Hoosier Cancer Research Network terminated a study of Avastin in 18 patients with metastatic breast cancer. The drug didn’t help and caused trial volunteers serious harm — including hypertension, gastrointestinal toxicity, sensory problems, and pain. But the Indianapolis-based network, which runs trials under contract for drug companies, did not report the results as required the following year — though doctors nationwide were debating whether it was advisable to treat breast cancer patients with Avastin.
In 2011, the FDA revoked its approval of Avastin for breast cancer after determining that it was ineffective for that use and posed life-threatening risks. The Hoosier Network researchers finally published the data in a medical journal in 2013. They have yet to post results on ClinicalTrials.gov — nearly six years after the legal deadline.
The lead investigator, Dr. Kathy Miller of Indiana University School of Medicine, blamed “short staffing” for the reporting delay. “Getting that uploaded was not a big priority,” she said, adding that she had never heard of anyone using ClinicalTrials.gov to examine trial results. She insisted that her study “didn’t have anything to add to that national debate” about Avastin’s safety and efficacy.
STAT identified about 9,000 trials in ClinicalTrials.gov subject to the law’s disclosure requirements. The analysis focused on the 98 universities, nonprofits, and corporations that served as sponsors or collaborators on at least 20 such trials between 2008 and Sept. 10 of this year. Earlier in the piece, Piller described the methodology. This part here gives some payoff and tells the reader that 9,000 trials—a huge number—were identified by STAT’s investigation as subject to the law’s disclosure requirements. This gives some additional vantage on the scale of the issue.
None of the organizations has much to boast about. The use of short, declarative topic sentences such as this helps frame the information in a way that’s easy to understand. Piller uses approachable language throughout the piece, which could otherwise have slipped into a difficult-to-digest legalistic tone. Only two entities — drug companies Eli Lilly and AbbVie — complied with reporting requirements more than half the time, a sign of how uniformly the research community flouts the law.
Results from academic institutions arrived late or not at all 90 percent of the time, compared with 74 percent for industry. More than one-fourth of companies did better than all the universities and nonprofits.
The FDA, which regulates prescription drugs, is empowered to levy fines of up to $10,000 a day per trial for late reporting to ClinicalTrials.gov. In theory, it could have collected $25 billion from drug companies since 2008 — enough to underwrite the agency’s annual budget five times over. This $25 billion was alluded to earlier in the piece, but here it is given additional weight by being compared to the FDA’s budget, which it dwarfs. That gives some additional context. Because Piller evolves this information by adding something new, it doesn’t come across as a mere repetition of the number. Instead, it’s a further revelation. But neither FDA nor NIH, the biggest single source of medical research funds in the United States, has ever penalized an institution or researcher for failing to post data.
Even the agency’s own staff scientists have violated the reporting law three-quarters of the time. This line stands alone for a reason: It’s a dramatic twist, and counterintuitive.
Collins said NIH research shows reporting rates by both industry and academia have improved over the past several years due to greater awareness, and NIH’s own scientists are now reporting their results about 90 percent of the time. “Of course I would like to see all of these rates reach 100 percent,” Collins said.
Data from ClinicalTrials.gov reviewed by STAT show NIH scientists’ reporting of results within the legal deadline peaked at 38 percent in 2013. Counting results reported late, NIH staff performance reached 90 percent for studies due in 2012 but has dropped since then.
Beginning next spring, after further refinement of ClinicalTrials.gov rules, NIH and FDA will have “a firmer basis for taking enforcement actions,” Collins said. If an NIH grantee fails to meet reporting deadlines, their funding can be terminated, he said.
The agency’s staff scientists, he added, “will be required to follow the law and NIH policy, and there will be consequences … for failing to do so.”
‘Too busy to follow the law’
University researchers and administrators cited a lack of time as one of the reasons they miss reporting deadlines. Lisa Lapin, a top communications official at Stanford, said via email that many faculty members find the process for entering results “above and beyond” their capacity, “given multiple existing constraints on their time” and no funding for additional staff.
Stanford supplied data to ClinicalTrials.gov for 26 of 82 trials for which results were due — far below the performance by most other top universities or companies. Professors supplied data on time for only four studies.
Dr. Harry Greenberg, senior associate dean for research at Stanford, said his institution was determined to improve.
“We will solve it, and we’ll solve it the best way possible,” through steps that include providing support for faculty members to enter trial data, he said.
NIH estimates that it takes, on average, about 40 hours to submit trial results, and many drug and medical device makers have dedicated staff to post data to ClinicalTrials.gov. Eli Lilly has seven full-time employees for the task. One of the logical questions at this point in the piece is, how much effort would it take to comply with the law? Piller satisfies our curiosity by specifying that it takes 40 hours, and then provides another measure (staffing) to illustrate how reporting trial data is not a trivial task. Including this calculation of hours acknowledges the barriers to depositing the information into the database. But it doesn’t excuse the massive lapse. It gives more context and weight to the investigation.
But Collins and advocates for transparency said time demands are no excuse for noncompliance. “There is no ethical justification for ignoring the law,” said Jennifer Miller, a New York University medical ethicist. “It’s not OK to say, ‘I’m too busy to follow the law.’” This quote is perfectly set up by the preceding text and previous paragraph.
Filling information gaps
Six organizations — Memorial Sloan Kettering, the University of Kansas, JDRF (formerly the Juvenile Diabetes Research Foundation), the University of Pittsburgh, the University of Cincinnati, and New York University — broke the law on 100 percent of their studies — reporting results late or not at all.
Dr. Paul Sabbatini, a research executive at Memorial Sloan Kettering, said that as an efficiency measure, his organization previously tried to complete work on scholarly publications in tandem with reporting ClinicalTrials.gov results. “We thought and hoped that we could keep that process bundled,” he said, but preparing journal articles and getting them published often takes years. The hospital will now post to ClinicalTrials.gov independently. At this point, the story has delved deep into the issue of noncompliance. Piller gives the reader the sense here that the story is very much in motion and that his investigative reporting process has already triggered some rethinking among institutions—even before the publication of this piece. Again, this is payoff for the reader.
Several organizations argued that publishing in peer-reviewed medical journals is the highest form of disclosure of study results and should be sufficient, or that there should be a way to simply copy journal data into ClinicalTrials.gov.
Critics say such views reflect the primacy of scholarly articles for academic career advancement more than any inherent superiority over ClinicalTrials.gov.
A 2013 analysis in the journal PLOS Medicine compared 202 published studies against results in ClinicalTrials.gov for the same research. The NIH registry proved far more complete, especially for reporting adverse events — signs that a therapy might be going badly wrong. Piller has noted earlier in the story how some institutions point to published scientific papers as a place they have reported findings. But here he uses a robust research study to show that published papers do not necessarily provide the same breadth of information as the ClinicalTrials.gov data.
Journals often reject papers about small studies or trials stopped early for a range of reasons, such as a drug causing worrisome side effects. They publish relatively few negative results, although failed tests can be as important as positive findings for guiding treatment. ClinicalTrials.gov tries to fill these critical information gaps and serve as a timely and comprehensive registry.
The website is open to the public, unlike many journals that charge a substantial fee. It’s important to make comparisons direct and straightforward. This sentence achieves that. And the site requires data to be entered in a consistent way that allows easier comparisons of benefits or side effects across many studies.
It also lets experts check whether researchers have cherry-picked results, which can mislead doctors and patients.
Preliminary work by The Compare Project, at the Centre for Evidence-Based Medicine at the University of Oxford, suggests that shifts away from outcome measures designated before the start of a clinical trial to results selected at the study’s conclusion are common. The group recently examined reports in leading journals on 44 trials and found that overall, 213 original measures went unreported, while 225 new measures were added.
Withheld data spurred action
Congress passed a law requiring registration of clinical trials in 1997, and the ClinicalTrials.gov database was created in 2000 for this purpose. At first it was little used. Here the story starts to come full circle, but it also delves deeper into the backstory of the law. The lens is widening out so we can get the big picture now that we’ve scrutinized the present.
NYU’s Miller traces the origins of stronger reporting requirements to Spitzer’s Paxil suit.
GlaxoSmithKline disputed the charges but settled out of court. This past September, the medical journal BMJ reanalyzed clinical trial data gathered by the company in 2001 and recently shared with independent scientists. Its findings showed Paxil to be no more effective in teens than a placebo, and linked the drug to increased suicide attempts.
Scenarios similar to the Paxil case have been repeated many times since with other drugs.
In the wake of the bird flu scare a decade ago, governments stockpiled billions of dollars worth of the antiviral drug Tamiflu, thought to be a lifesaver. The Lancet reported last year, however, that previously undisclosed trial data showed that Tamiflu “did not necessarily reduce hospital admissions and pulmonary complications in patients with pandemic influenza.” This example shows us that concerns about data reporting persist beyond Paxil, and that new examples have cropped up as recently as last year.
Another notorious case involved an experimental arthritis drug, TeGenero’s TGN1412, tested on six men in London in 2006. All immediately fell gravely ill. One suffered heart failure, blood poisoning, and the loss of his fingers and toes in a reaction that resembled gangrene. A close variant of the drug had been found potentially dangerous years earlier, but the results had never been published. This is an incredibly sad example, but an important one because it was a headline story in 2006 and drives home in a very poignant way how lives may have been saved if the data had been available from the earlier trial.
In response to Spitzer’s suit, GlaxoSmithKline agreed to share much more data with the public. The move prodded competitors and lawmakers to do better.
Then in 2007, a new US law broadened trial registration requirements and added mandates for disclosing summary data, adverse events, and participants’ demographic information on ClinicalTrials.gov.
“Mandatory posting of clinical trial information would help prevent companies from withholding clinically important information about their products,” Senator Charles Grassley, an Iowa Republican and a leading proponent of the law, said at the time. “To do less would deny the American people safer drugs when they reach into their medicine cabinets.” Grassley is another prominent politician, and this quote gives an illustrative snapshot of what his hopes were when advocating for the regulations in 2007.
High hopes not realized
The popularity of ClinicalTrials.gov shows progress on lawmakers’ goal of a better-informed public. The site logs 207 million monthly page views and 65,000 unique daily visitors. An NIH survey suggested that 8 in 10 are patients and their family members or friends, and medical professionals and researchers. This is very thorough reporting and great context: Just like with the $25 billion statistic, these figures give a sense of the scale and importance of the issue at hand. Instead of being vague, it drives home the actual, concrete use of ClinicalTrials.gov. The reader gets a sense that this website is big potatoes. With a $5 million budget, the program’s staff of 38, operating at NIH in Bethesda, Md., reviews trial designs and results, provides services, conducts research, and sets policy.
Yet, the lofty hopes for the website have never been realized. More than 200,000 trials are registered on ClinicalTrials.gov, but many drug or device studies are exempted from reporting results, including most trials that have no US test site, those involving nutritional supplements, and early safety trials.
Study sponsors can also request permission to delay reporting on experimental drugs not yet approved by the FDA, and have done so on more than 4,300 occasions. Earlier in the piece, Piller noted that institutions have actively sought to delay reporting data. Here we see this point underscored.
The many exceptions mean that virtually all research organizations — even the most conscientious — have posted results for just a small fraction of their registered trials.
AbbVie, one of the most legally compliant pharmaceutical companies, was involved in 459 registered trials. Just 25 of those required the reporting of results by law. Even for that small number, AbbVie achieved its relatively good reporting record not from exceptional efforts on public disclosure, but by diligently applying for filing extensions.
Biogen delivered results late or not at all 97 percent of the time, the worst record among pharma companies. It said that many of its studies were not covered by the reporting law, and that it had extension requests pending or planned for others — accounting for the poor performance. But STAT did not count trials exempted from the law, nor those that NIH listed as having applied for extensions.
The reporting of results on ClinicalTrials.gov, poor by any reasonable standard, is worse than it appears, some researchers say. The piece up until this point carefully stayed focused on the findings of the investigation, but here Piller widens the lens once again to show that there is much more that is unknown. This uncertainty creates a tension that drives the piece forward.
In part, that’s because there is no easy way to tell whether companies or universities are registering all of their applicable trials, and there is no comprehensive policing mechanism. A research organization that appears conscientious in reporting results might be hiding some trials.
NYU’s Miller studied drugs from major companies approved by the FDA in 2012. In a recent article in the medical journal BMJ Open, she and her coauthors showed that trials subject to the legal requirements often were not registered, and results not provided — even when the firms gave the data confidentially to the FDA during the approval process.
Miller found that results from only about two-thirds of studies used to justify FDA approvals were posted to ClinicalTrials.gov or published elsewhere.
Reporting lapses betray research volunteers who assume personal risks based on the promise that they are contributing to public health, said Kathy Hudson, an NIH deputy director who helps to guide the agency’s clinical trial policies. “If no one ever knows about the knowledge gained from a study,” she said in an interview, “then we have not been true to our word.” Here, the piece shines a light on another human cost of late or nonexistent data reporting: the study participants. This helps the layperson understand further why this issue is of importance to people outside the research community.
Draconian consequences — including FDA and NIH penalties — might prove necessary to improve the distorted evidence base of clinical medicine, said Dr. Ben Goldacre, a fellow at the University of Oxford and cofounder of The Compare Project and Alltrials.net, which advocates for disclosure of clinical research.
As a young doctor, Goldacre broke his leg in a fall while running to treat a cardiac arrest patient. “Nobody has ever broken a leg running down a corridor to try to fix the problem of publication bias,” he said. “That’s because of a failure of empathy.” This kicker works so well on a number of levels. First, it’s an evocative image. You can visualize the scene, with a doctor running down a hall. Second, it showcases the urgency that doctors feel in trying to help patients and compares it to the urgent need to fix clinical trial data gaps. This investigative story has given readers numerous reasons to care about the failure to report data; this kicker heightens the emotion even further.
Roxanne Khamsi is a journalist whose work has appeared in Scientific American, Slate, Newsweek, and The New York Times Magazine, and is chief news editor for Nature Medicine. She teaches at Stony Brook University’s Alan Alda Center for Communicating Science. Follow her on Twitter @rkhamsi.