First, I need to make clear that I am committed to transparency in the publication of clinical trial data, that I recognize the historic problem of cherry-picking the ‘good’ data to publish, that I have read and appreciated much of Ben Goldacre’s recent book ‘Bad Pharma’, and also that for 16 years I have worked in the pharmaceutical industry, latterly as a publication manager and writer for a major pharmaceutical company. My experience within the industry was one of increasing transparency and a general willingness of my peers to publish all data, once the concept of publication bias had been explained thoroughly to them. A large part of my role was to heighten the visibility and awareness of medical publications and their importance, to educate my peers around Good Publication Practices, authorship criteria, and all the rest, and also to explain why and how the landscape of medical publications has changed so fundamentally in recent years. As I mention above, I recognize that all has not always been ideal in this area, but it is important to stress that—based on my own experience—things are moving in the right direction.
It is frustrating, therefore, when I read an article such as the one recently published in BMJ, entitled ‘Non-publication of large randomized clinical trials: cross sectional analysis’ (Jones CW, et al. BMJ 2013;347:f6104. doi: 10.1136/bmj.f6104). Without getting into the politics or agendas that may lie beneath the surface, I’d just like to raise a couple of points. The overall conclusion of the article is that ‘…non-publication is common among large randomized clinical trials…’ (based on 29% of the sampled trials not having been published) – the inference being that publication is therefore uncommon. Could this not, in fact, be inverted to say ‘…publication is common among large randomized clinical trials…’ as 71% were published? Industry is singled out as being a particularly errant force: ‘…non-publication was more common among trials that received industry funding…than those that did not…’, this being considered of sufficient importance to be promoted to the abstract.
Before presenting its conclusions, the article does cite certain limitations of the study. Another may be that data are included from trials dating back to 1999, a time that could be considered the infancy of publications in terms of good practice. A further limitation would be the skewed data interpretation that darkens the face of industry-sponsored trials, the industry as a whole, and all who sail in her. While no one would claim that data posting is a replacement for data publication in a peer-reviewed journal, it can serve as a ‘stop-gap’, and an examination of the data shows that 100% of the unpublished trials that had data posted were industry-sponsored, whereas 100% of the unpublished non-industry-sponsored trials were not even posted. It is interesting how these data can be interpreted, or selected for interpretation, in different ways and used to make very different points and highlight very different conclusions.
My point is not that all is rosy in medical publications within the industry, but my experience was that bygone biases, and what we may now, with the benefit of hindsight, even call malpractice, are being addressed, and that compliance with an increasingly regulated medical publications environment is recognized as all-important (agreed, not least to avoid hefty punitive financial measures, but also, yes, due to the ethical standards of individual publication managers operating within industry on a day-to-day basis). But equally, I would not say that all is rosy in the publication of non-industry-sponsored data, and clearly all is not well in the posting of these data. The conclusion should not be ‘industry bad: non-industry good’. That is simplistic at best.
I feel it is important to stress, albeit qualitatively, how things are improving, and have improved over recent years, in terms of prospective publication planning, data posting, and compliance with our now-recognized good publication practices, at least within the industry, and not to demonise the vast majority of well-meaning professionals trying to cope with a rapidly evolving landscape. In terms of quantifying this impression, I would welcome any thoughts or direction towards relevant published data.