In a random sample of clinical trials reported to the federal ClinicalTrials.gov database, researchers found that 1 in 3 had inconsistencies in their adverse event (AE) outcomes when compared with reports published in peer-reviewed medical journals.
Most discrepancies involved either the outright omission of "serious" adverse events (SAEs) or differences too small to change the "direction of risk" for the overall study. When discrepancies did introduce bias, however, it tended to cast the therapy in question in a more favorable light.
In general, the studies reported higher rates of AEs on the federal site than were mentioned in medical publications. The researchers stopped short of determining whether the federal site or the medical journals ultimately had the more accurate figures, but said that ClinicalTrials.gov hosted a "more comprehensive summary" of AEs.
"Underreporting of AEs is a major concern because it can distort how decision makers balance the benefits and harms of medical interventions," the authors wrote. "Even when the inconsistencies are minor in individual studies, as was the case for several of the trials analyzed, these distortions can be amplified when results are combined within systematic reviews."
They noted that there may be some gray area in classifying adverse events as "serious" in particular, but maintained that the inconsistencies were inexcusable.
"Although there is some inherent subjectivity in the FDA’s standard criteria for an SAE, determination of AEs as serious should not change on the basis of reporting sources," they wrote.
Adverse events were not the only point of contention in the analysis, which was published today in the Annals of Internal Medicine. The studies were also rife with discrepancies in mortality rates and even primary results.
Unlike AEs, however, deaths were reported more frequently in medical journals than on the federal site. About three-quarters of the sampled studies mentioned no deaths in the ClinicalTrials.gov database, yet 17% of those studies reported deaths in matched publications. Of the studies that did report deaths on the federal site, nearly half gave inconsistent mortality figures in peer-reviewed journals.
"Reporting of deaths was more consistent when they were included in the outcomes section of ClinicalTrials.gov," the authors noted.
About 1 in 5 studies reported inconsistent results on their trials’ primary outcomes, and a full 80% contained reporting discrepancies in their secondary outcome measures.
"The most common [secondary outcome measure] differences they noted were outcomes listed in the publication but missing from ClinicalTrials.gov," according to the analysis. "Although this may reflect incomplete reporting in the ClinicalTrials.gov database, it could also indicate the misrepresentation of post hoc analyses as prespecified SOMs in the publication."
The authors noted that their analysis had important limitations and that further research is needed to characterize what appear to be widespread flaws in clinical trial reporting.
The researchers based their analysis on 110 clinical trials that had been completed by Jan. 1, 2009, meaning they may have been among the first studies ever posted to the ClinicalTrials.gov database. The inconsistencies may simply reflect inexperience with the then-new system.
Furthermore, the analysis did not look at changes made to the database over time, but examined only final reported results. Some of the discrepancies in outcomes may relate to changes made to protocols mid-study.
Ultimately, the authors cautioned that their report should not be taken as evidence of willful wrongdoing among researchers, but as a reason to scrutinize the process of collecting, interpreting and reporting clinical trial data, a process they said is not always transparent.
"It is uncertain whether discrepancies that we observed represent deliberate misrepresentation, reporting carelessness, the influence of journal editors, or simply an evolution of investigators’ thinking or analytic approach over time," the authors wrote. "Although there are many possible explanations for such discrepancies, these findings contribute to the growing sense that the process of taking the initially collected "raw" participant-level data and deriving the ultimately reported aggregate or "summary" data involves a series of decisions that are not entirely prespecified or objective; different iterations of the process thus produce different results."
"If investigators do not (or cannot) provide consistent quantitative summaries of the fundamental features of their trials, one must question how accurate either reporting source could be," they concluded.