Frequency of clinical trials data making it into the public domain

I recently came across this article from last summer: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0101826. It presents an analysis of the frequency with which data from completed clinical trials (in the US) actually make it into the public domain, which the authors define as publication of the primary outcome in a peer-reviewed journal and/or posting of the data to ClinicalTrials.gov. The studies had to be interventional, at least Phase II, and listed on ClinicalTrials.gov as having been completed in 2008. That year was chosen as it was the first year in which the Food and Drug Administration Amendments Act (FDAAA) requirements on posting of clinical trials data within 1 year of completion (except Phase I, and except for products pre-licensure) came into effect. A total of 400 randomly selected studies fulfilling the pre-defined criteria, with industry, non-industry, or mixed funding sources, were included, and a 4-year follow-up period was used.
To cut a long story short, the data are not marvellous. The analysis showed a widespread failure to publicly disclose results: data for nearly 30% of the 400 included studies had neither been posted nor published within 4 years of study completion. For those that were disclosed publicly, the median time from the end of the study was around 600 days. That's pushing 2 years. Also worrying is that industry-funded studies were less likely to be published than studies with non-industry or mixed funding.
All this is fairly depressing reading, and seems to show that publication bias is alive and kicking. But we should remember that this analysis was conducted on studies completed in 2008, the first year in which the FDAAA requirements were in effect. We are all aware of historically poor publication practices, and changing that mindset is no mean feat. But with the advent of Good Publication Practice, the FDAAA requirements, and the much-increased reach of organisations such as the International Society for Medical Publication Professionals in recent years, it is to be hoped that things are now moving in the right direction. It would be interesting to see this analysis repeated on a rolling annual basis, to gauge the momentum gained now that this particular oil tanker is underway. Hopefully we would see more and more data being released into the public domain in a timely and transparent manner, reducing publication bias.

(Non-)publication of (non-)industry sponsored clinical trial data

First, I need to make clear that I am committed to transparency in the publication of clinical trial data, that I recognize the historic problem of cherry-picking the ‘good’ data to publish, that I have read and appreciated much of Ben Goldacre’s recent book ‘Bad Pharma’, and also that for 16 years I have worked in the pharmaceutical industry, latterly as a publication manager and writer for a major pharmaceutical company. My experience within the industry was one of increasing transparency and a general willingness among my peers to publish all data, once the concept of publication bias had been explained thoroughly to them. A large part of my role was to heighten the visibility and awareness of medical publications and their importance, to educate my peers about Good Publication Practice, authorship criteria, and all the rest, and also to explain why and how the landscape of medical publications has changed so fundamentally in recent years. As I mention above, I recognize that all has not always been ideal in this area, but it is important to stress that, based on my own experience, things are moving in the right direction.

It is frustrating, therefore, when I read an article such as the one recently published in BMJ, entitled ‘Non-publication of large randomized clinical trials: cross sectional analysis’ (Jones CW, et al. BMJ 2013;347:f6104. doi: 10.1136/bmj.f6104). Without getting into the politics or agendas that may lie beneath the surface, I’d just like to raise a couple of points. The overall conclusion of the article is that ‘…non-publication is common among large randomized clinical trials…’ (based on 29% of the sampled trials not having been published), the inference being that publication is therefore uncommon. Could this not, in fact, be inverted to say ‘…publication is common among large randomized clinical trials…’, as 71% were published? Industry is singled out as being a particularly errant force: ‘…non-publication was more common among trials that received industry funding…than those that did not…’, this being considered of sufficient importance to be promoted to the abstract.

Prior to the article’s conclusions, the authors do cite certain limitations of the study. Another may be the fact that data are included from trials dating back to 1999, a time that could be considered the infancy of publications in terms of good practice. A further limitation would be the skewed data interpretation that darkens the face of the industry-sponsored trials, the industry as a whole and all who sail in her. While no-one would claim that data posting is a replacement for data publication in a peer-reviewed journal, it can serve as a ‘stop-gap’, and an examination of the data shows that 100% of the unpublished trials that had data posted were industry-sponsored, whereas 100% of the unpublished non-industry-sponsored trials were not even posted. It is interesting how these data can be interpreted, or selected for interpretation, in different ways and used to make very different points and highlight different conclusions.

My point is not that all is rosy in medical publications within the industry, but my experience was that bygone biases, and what we may now, with the benefit of hindsight, even call malpractice, are being addressed, and that compliance with an increasingly regulated medical publications environment is recognized as all-important (agreed, not least for reasons of avoiding hefty punitive financial measures, but also, yes, due to the ethical standards of individual publication managers operating within industry on a day-to-day basis). But equally, I would not say that all is rosy in the publication of non-industry sponsored data, and clearly all is not well in the posting of these data. The conclusion should not be ‘industry bad: non-industry good’. This is simplistic at best.

I feel that it is important to stress, albeit qualitatively, how things are improving, and have improved over recent years, in terms of prospective publication planning, data posting, and compliance with our now recognized good publication practices, at least within the industry, and not to demonise the vast majority of well-meaning professionals trying to cope with a rapidly evolving landscape. In terms of quantifying this impression, I would welcome any thoughts or direction towards any relevant published data.