Abstract

Adherence to study registration and reporting best practices is vital to fostering evidence-based medicine. All clinical trials registered on ClinicalTrials.gov that started in or after 2009, were completed by 2019, and had at least one Canadian site were identified. A cross-sectional analysis of those trials assessed prospective registration, subsequent result reporting in the registry, and subsequent publication of study findings. The lead sponsor, phase of study, clinical trial site location, total patient enrollment, number of arms, type of masking, type of allocation, year of completion, and patient demographics were examined as potential effect modifiers of these best practices. A total of 6720 trials were identified. From 2009 to 2019, 59% (n = 3967) of them were registered prospectively, and 32% (n = 2138) had neither their results reported nor their findings published. Of the 3763 trials conducted exclusively in Canada, 3% (n = 123) met all three criteria of prospective registration, reporting in the registry, and publishing findings. Overall, the odds of adhering to all three practices concurrently were 95% lower for trials conducted exclusively in Canada than for international trials. Canadian clinical trials substantially lacked adherence to study registration and reporting best practices. Knowledge of this widespread non-compliance should motivate stakeholders in the Canadian clinical trial ecosystem to address and continue to monitor this problem.

Background

Publication bias is “the tendency on the part of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings” (Dickersin 1990). Studies with statistically significant or positive results are more likely to be published than those with statistically non-significant or negative results, which makes the dissemination of research findings a biased process (Stern and Simes 1997; Dubben and Beck-Bornholdt 2005; Song et al. 2010; Dwan et al. 2013). Studies with positive results are also published more quickly than studies with indefinite conclusions (Stern and Simes 1997). Publication bias threatens the practice of evidence-based medicine, the validity of meta-analyses, and the reproducibility of research (Marks‐Anglin and Chen 2020). Under-reporting due to publication bias exaggerates the benefits of treatments and underestimates their harms (McGauran et al. 2010). Ultimately, this is detrimental to the healthcare system, as it wastes resources and puts patients at risk (Moher 1993; Chalmers et al. 2013). Between 1999 and 2007, fewer than half of all trials registered and completed on ClinicalTrials.gov were published (Ross et al. 2009). Between 2010 and 2012, fewer than half of all the registered trials for rare diseases were published within four years of their completion (Rees et al. 2019).
In 2008, the World Medical Association's (WMA) Declaration of Helsinki stated that “every clinical trial must be registered in a publicly accessible database before recruitment of the first subject” (WMA 2021). The declaration also states that all studies should be published, regardless of the statistical significance of their outcome. Nonetheless, publication bias remains prevalent globally. For instance, in all 36 German university medical centers between 2009 and 2013, only 39% of clinical trials were published within 2 years of their completion (Wieschowski et al. 2019). In 2015, the World Health Organization (WHO) published a Statement on Public Disclosure of Clinical Trial Results. It states that the main findings of clinical trials are to be published at the latest 24 months after study completion and that “the key outcomes are to be made publicly available within 12 months of study completion by posting to the results section of the primary clinical trial registry” (WHO 2022). This WHO statement serves as a global guideline to reduce publication bias and improve overall evidence-based medical decision-making.
In 2020, the Canadian Institutes of Health Research (CIHR) asserted that, as of 2021, it would implement new policies to ensure full adherence to the WHO Joint Statement requirements (Government of Canada CI of HR 2020). Similarly, the Canadian Government recommends adherence to the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS 2), which states that all clinical trials shall be registered before recruitment of the first trial participant and that researchers shall promptly update the study registry with the location of the findings (TCPS 2018). The Canadian Government also encourages sponsors to register clinical trials in a publicly accessible registry such as ClinicalTrials.gov (Canada 2003). Despite these recommendations, explicit guidance for trial reporting is lacking (Cobey et al. 2017). Even in jurisdictions where legal frameworks around trial registration and reporting have been established (e.g., the Food and Drug Administration in the United States), low adherence persists (DeVito et al. 2018).

Objectives

This research aimed to evaluate the adherence of clinical trials conducted in Canada to study registration and reporting best practices. Specifically, among interventional trials registered on clinicaltrials.gov and completed between 2009 and 2019, we aimed to evaluate: (i) the proportion of trials that were prospectively registered on clinicaltrials.gov; (ii) the proportion of trials with study results reported on clinicaltrials.gov; (iii) the proportion of trials with published findings; and (iv) the characteristics of clinical trials that may predict prospective registration, reporting of study results, and publication of findings.

Methods

Open science statement

Our study protocol was registered on the Open Science Framework (DOI: 10.17605/OSF.IO/FRPQN, https://osf.io/f8nrw?view_only=f2acaa3cf3ad4c2d9d1185f44c243a9d) prior to data analysis. The data acquired and analyzed in this study are publicly available; therefore, research ethics board approval was not required. We have made our Excel document with all the data publicly available on the Open Science Framework (DOI: 10.17605/OSF.IO/FRPQN, https://osf.io/mv36k?view_only=f2acaa3cf3ad4c2d9d1185f44c243a9d). This work was first communicated as a preprint on medRxiv (DOI: 10.1101/2022.09.01.22279512, https://www.medrxiv.org/content/10.1101/2022.09.01.22279512v1) (Alayche et al. 2022). We report this cross-sectional study using the STROBE guideline (Cuschieri 2019).

Sampling

We obtained our cohort of interventional clinical trials registered on clinicaltrials.gov with a completion date of 2009–19 and at least one clinical site based in Canada. While recognizing that some trials may have been registered on other registries like ISRCTN (International Standard Randomised Controlled Trials Number), we selected clinicaltrials.gov as it is the largest and most comprehensive online registry in the world, with over 423,000 study records from 221 countries (U.S. National Library of Medicine 2019). The definition of specific terminology on clinicaltrials.gov is outlined in Table 1. The data were downloaded from the registry on 5 October 2021. Microsoft Excel was used to complete the data analysis. To obtain the cohort of Canadian trials, we applied the following filters: (A) Interventional studies (Clinical Trials), (B) Completed studies, (C) Canadian studies, (D) “Start date”: 01/01/2009, and (E) “Primary completion date”: 12/31/2019.
Table 1. Definition of certain terms as per clinicaltrials.gov.

“Primary completion date”: Date on which the last participant in a clinical study was examined or received an intervention to collect final data for the primary outcome measure.
“Start date”: Date on which the first participant was enrolled in a clinical study.
“Canadian studies”: All trials with at least one clinical trial site located in Canada, irrespective of the primary investigator's country of origin.
The time period between 2009 and 2019 was selected because the WMA's Helsinki statement (published in 2008) gave all researchers an opportunity to become familiar with best practices for study registration. It often takes more than 5 years from inception to publish a completed clinical trial (Stern and Simes 1997). Therefore, when evaluating studies for publication in an academic journal, we excluded any study with a primary completion date after 31/12/2014 so that all researchers had at least 5 years to publish their results.

Outcomes

For all registered and completed clinical trials conducted in Canada between 2009 and 2019, we counted the proportion that (i) prospectively registered their study before the recruitment of their first participant; (ii) reported their results in the registry; and (iii) published their study findings.
The first two outcomes were measured directly from the data downloaded from clinicaltrials.gov: prospective registration and reporting of trial results were taken directly from the registry record. Assessing whether trial findings were published in an academic journal was more complicated, requiring additional effort and some degree of interpretation.
To measure the third outcome, we added the following search criteria: “(NOT NOTEXT) [CITATIONS]” in the “Other terms” box in clinicaltrials.gov to identify clinical trials with publications in an academic journal. Those publications were either automatically indexed by clinicaltrials.gov or manually added by the sponsor or investigator. As per the clinicaltrials.gov website, citations are automatically identified by clinicaltrials.gov based on their NCT number and are subsequently indexed in the database. Clinicaltrials.gov does not expand any further on how these studies are automatically identified and indexed (How to Find Results of Studies 2022). Clinicaltrials.gov's automatic indexing can be inconsistent, making it unreliable for determining whether a clinical trial has been published. Two main problems arise due to this inconsistency: clinicaltrials.gov may underestimate the true number of publications if a paper is not automatically indexed or the authors forget to add it manually; and clinicaltrials.gov may overestimate the true number of publications if it automatically indexes a related paper that does not contain the results of the trial in question. We took these potential inaccuracies into consideration and subsequently created a quality assurance team (see Quality assurance section).
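Conceptually, the first outcome reduces to a date comparison on each downloaded record. A minimal sketch in Python (the function name and date handling are illustrative, not the registry's own column labels):

```python
from datetime import date

def is_prospective(registration_date: date, start_date: date) -> bool:
    """A trial counts as prospectively registered when its registry record
    was submitted on or before the enrollment start date."""
    return registration_date <= start_date

# Illustrative dates only (not taken from the study data):
print(is_prospective(date(2011, 3, 1), date(2011, 6, 15)))  # registered before start
print(is_prospective(date(2012, 9, 1), date(2011, 6, 15)))  # registered after start
```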

Trial characteristics

A total of ten study characteristics were recorded for each trial, including: lead sponsor (e.g., industry); phase of the clinical trial (e.g., phase 3); total number of participants (e.g., <100); biological sex of participants (e.g., male); primary completion date (e.g., 2019); clinical trial site location (e.g., Canada); number of arms (e.g., two-arm trial); type of masking (e.g., double blind); intervention model (e.g., parallel assignment); and type of allocation (e.g., randomized).

Data analysis

We calculated the proportions of trials that were prospectively registered, had their results reported in the registry, and subsequently had their findings published.
We analyzed whether the results were modified by these ten variables of interest, outlined in the study characteristics above, using univariable logistic regressions (odds ratios with 95% confidence intervals). This analysis allowed us to identify which characteristics had the strongest association with best-practice adherence. The outcomes of this analysis were reported as prevalences, in percentages.
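For a binary predictor, a univariable logistic regression reduces to the odds ratio of a 2×2 table. A minimal sketch of that computation with a Wald 95% confidence interval (the counts below are taken from Table 4's lead-sponsor rows; small rounding or method differences from the reported OR = 0.44 are expected):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio (a/b) / (c/d) for a 2x2 table, with a Wald 95% CI.
    a/b: outcome present/absent in the comparison group;
    c/d: outcome present/absent in the reference group."""
    orr = (a / b) / (c / d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo, hi = (math.exp(math.log(orr) + s * z * se) for s in (-1, 1))
    return orr, lo, hi

# Prospective registration: "Academia" (1732 of 3475) vs "Industry" (2235 of 3245)
orr, lo, hi = odds_ratio_ci(1732, 3475 - 1732, 2235, 3245 - 2235)
print(f"OR = {orr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```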

Quality assurance

To verify the clinicaltrials.gov records of publication status, we randomly selected a 10% sample of studies for manual verification. The 10% sample was randomly picked for quality assurance via the Excel “RAND” function. In total, four members of the research team participated in quality assurance. Each study in the sample was manually searched independently by two researchers following a three-step process: (1) the clinical trial identifier (NCT ID) was entered on clinicaltrials.gov to verify the publication status posted on the website; (2) the NCT ID was then searched on PubMed to verify the publication status; and (3) the NCT ID was finally searched on Google Scholar to verify the publication status. We confirmed the lack of publication of a study if both the PubMed and Google Scholar searches yielded no results. We confirmed the successful publication of a study if either PubMed or Google Scholar yielded results. We also re-evaluated trials with cited publications on clinicaltrials.gov to ensure that at least one of the cited publications did report the results of the trial in question.
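The sampling and adjudication rule can be sketched as follows (Python stands in for the Excel RAND workflow; the function names and placeholder IDs are illustrative, not the study's actual records):

```python
import random

def qa_sample(nct_ids, fraction=0.10, seed=None):
    """Draw a random subset of trials for manual verification."""
    rng = random.Random(seed)
    k = round(len(nct_ids) * fraction)
    return rng.sample(nct_ids, k)

def publication_status(found_on_pubmed: bool, found_on_scholar: bool) -> str:
    """Published if either database yields a result;
    unpublished only when both searches come back empty."""
    return "published" if (found_on_pubmed or found_on_scholar) else "unpublished"

trials = [f"NCT{i:08d}" for i in range(6720)]  # placeholder IDs, not real records
print(len(qa_sample(trials, seed=1)))          # size of the 10% sample
```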

Results

A total of 6790 clinical trials conducted in Canada were identified. Of those, we excluded 70 that were submitted to clinicaltrials.gov with one or more incorrect entries and were therefore incompatible with our software (Microsoft Excel). For example, a few of those 70 studies submitted an “alphabetic” entry despite a strictly “numerical” requirement. Our software could not correct those mistakes, so those studies were excluded. In total, 6720 (99%) studies were included in our analysis.

Demographic data

The demographic data of the collected sample are highlighted in Table 2. The median year of trial primary completion was 2015. Of the 6720 included clinical trials, 1% (n = 65) had a primary completion date in 2009, while 12% (n = 819) had one in 2019. A total of 38% (n = 2581) of identified trials did not indicate the phase of their study. Fifty-nine percent (n = 3967) of trials were prospectively registered, while the remaining trials were registered after recruitment started. Moreover, 39% (n = 2642) of all trials made their results available in the registry. The National Institutes of Health (NIH) was identified as the lead sponsor in only 0.8% (n = 57) of all trials; these were included in the “Industry” category, as shown in Table 2.
Table 2. Trial characteristics.

Variable | n | %
Total | 6720 | 100.0%
Time of registration
  Before trial start | 3967 | 59%
  After trial start | 2753 | 41%
Results reported
  Results reported in the registry | 2642 | 39%
  Results not reported in the registry | 4078 | 61%
Publication of findings
  Published | 3724 | 55%
  Not published | 2996 | 45%
Lead sponsor
  Industry± | 3245 | 48%
  Academia* | 3475 | 52%
Phase
  Early I | 36 | 1%
  I | 490 | 7%
  I–II | 181 | 3%
  II | 1256 | 19%
  II–III | 98 | 1%
  III | 1587 | 24%
  IV | 491 | 7%
  Not reported | 2581 | 38%
Number of participants
  1–99 | 3496 | 52%
  100–500 | 2182 | 32%
  >500 | 1041 | 15%
  Not reported | 1 | 0%
Clinical trial site location
  Canada only | 3763 | 56%
  International§ | 2957 | 44%
Primary completion date
  2009 | 65 | 1%
  2010 | 216 | 3%
  2011 | 399 | 6%
  2012 | 554 | 8%
  2013 | 681 | 10%
  2014 | 736 | 11%
  2015 | 787 | 12%
  2016 | 781 | 12%
  2017 | 848 | 13%
  2018 | 834 | 12%
  2019 | 819 | 12%

± The industry category includes pharmaceutical and device companies and NIH studies.
* The academia category includes universities, individuals, and community-based organizations.
§ Includes all international clinical trials that have at least one Canadian site.

Primary completion date

Outlined in Table 3 are all the primary outcomes we measured based on the year of primary completion. From 2009 to 2019, the number of studies reaching primary completion generally increased. The prevalence of prospective registration rose over this period, from 35% in 2009 to 73% in 2019. However, no trend was observed in the reporting of results across the years: 34% in 2009 and 32% in 2019. Apart from the first year examined (2009), we did not observe a trend in the publication of study findings from 2010 to 2014. Overall, there was a slight upward trajectory in adherence to all three practices, mostly explained by the increase in prospective registration and publication of findings.
Table 3. Adherence of Canadian trials to study registration and reporting best practices based on the year of primary completion.

Year of completion | Number of studies | Prospective registration | Results reported | Findings published* | All three practices*
2009 | 65 | 23 (35.38%) | 22 (33.85%) | 24 (36.92%) | 3 (4.62%)
2010 | 216 | 92 (42.59%) | 82 (37.96%) | 120 (55.56%) | 30 (13.89%)
2011 | 399 | 174 (43.61%) | 161 (40.35%) | 210 (52.63%) | 74 (18.55%)
2012 | 554 | 276 (49.82%) | 215 (38.81%) | 304 (54.87%) | 101 (18.23%)
2013 | 681 | 354 (51.98%) | 305 (44.79%) | 386 (56.68%) | 145 (21.29%)
2014 | 736 | 413 (56.11%) | 302 (41.03%) | 438 (59.51%) | 174 (23.64%)
2015 | 787 | 453 (57.56%) | 341 (43.33%) | n/a | n/a
2016 | 781 | 472 (60.44%) | 308 (39.44%) | n/a | n/a
2017 | 848 | 542 (63.92%) | 347 (40.92%) | n/a | n/a
2018 | 834 | 573 (68.71%) | 300 (35.97%) | n/a | n/a
2019 | 819 | 595 (72.65%) | 259 (31.62%) | n/a | n/a
Total | 6720 | 3967 (59.03%) | 2642 (39.32%) | 1482 (55.90%) | 527 (19.88%)

* Years 2015–2019 were excluded when measuring the “published” variable.

Lead sponsor, phase of study, total enrollment, and countries implicated

Table 4 outlines all the primary outcomes based on lead sponsor, total patient enrollment, phase of study, and clinical trial site location. Overall, 59% (n = 3967) of trials were prospectively registered, 39% (n = 2642) had their results reported in the registry, and 55% (n = 3724) had their findings published. One-third (n = 2138) of trials did not have their results available to the public in any form—i.e., the results were not reported in the registry nor were the findings published.
Table 4. Adherence of Canadian trials to study registration and reporting best practices based on the lead sponsor, total enrollment, phase of study, and countries implicated.

Variable of interest | Number of studies | Prospective registration | Results reported | Findings published | All three practices
Total | 6720 | 3967 (59.03%) | 2642 (39.32%) | 3724 (55.42%) | 1361 (20.25%)
Lead sponsor
  Industry± | 3245 | 2235 (68.88%) | 2217 (68.32%) | 1849 (56.98%) | 1182 (36.43%)
  Academia* | 3475 | 1732 (49.84%) | 425 (12.23%) | 1875 (53.96%) | 179 (5.15%)
Total patient enrollment
  1–99 | 3496 | 1813 (51.86%) | 762 (21.8%) | 1538 (43.99%) | 274 (7.84%)
  100–500 | 2182 | 1402 (64.25%) | 1101 (50.46%) | 1355 (62.1%) | 585 (26.81%)
  >500 | 1041 | 752 (72.24%) | 779 (74.83%) | 831 (79.83%) | 502 (48.22%)
  Not given | 1 | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%)
Phase of study
  Early I | 36 | 14 (38.89%) | 3 (8.33%) | 14 (38.89%) | 0 (0%)
  I | 490 | 262 (53.47%) | 73 (14.9%) | 145 (29.59%) | 19 (3.88%)
  I/II | 181 | 97 (53.59%) | 51 (28.18%) | 79 (43.65%) | 18 (9.94%)
  II | 1256 | 863 (68.71%) | 679 (54.06%) | 634 (50.48%) | 322 (25.64%)
  II/III | 98 | 60 (61.22%) | 34 (34.69%) | 61 (62.24%) | 16 (16.33%)
  III | 1587 | 1172 (73.85%) | 1251 (78.83%) | 1192 (75.11%) | 777 (48.96%)
  IV | 491 | 295 (60.08%) | 197 (40.12%) | 285 (58.04%) | 106 (21.59%)
  Not given | 2581 | 1204 (46.65%) | 354 (13.72%) | 1314 (50.91%) | 103 (3.99%)
Clinical trial site location
  Canada only | 3763 | 1774 (47.14%) | 435 (11.56%) | 1800 (47.83%) | 123 (3.27%)
  International§ | 2957 | 2193 (74.16%) | 2207 (74.64%) | 1924 (65.07%) | 1238 (41.87%)

± The industry category includes pharmaceutical companies, device companies, and NIH studies.
* The academia category includes universities, individuals, and community-based organizations.
§ Includes all international clinical trials that have at least one Canadian clinical site.
Trials with an “Industry” lead sponsor had higher rates of prospective registration, reporting of results, and publication of study findings than trials with an “Academia” lead sponsor. Overall, clinical trials led by “Industry” had a 36% (n = 1182) adherence to all three best practices, while clinical trials led by “Academia” had an adherence of only 5% (n = 179). A univariable analysis determined that the odds of prospective registration with an “Academia” lead sponsor were 56% lower than with “Industry” (OR = 0.44; 95%CI: 0.40–0.49). Moreover, the odds of result reporting were 93% lower (OR = 0.07; 95%CI: 0.06–0.08), and the odds of publication of findings were 13% lower (OR = 0.87; 95%CI: 0.79–0.96). Overall, the odds of adherence to all three practices concurrently were 90% lower in “Academia” than in “Industry” (OR = 0.1; 95%CI: 0.09–0.12).
Adherence to study registration and reporting best practices increased with the size of the clinical trial. Of the clinical trials with over 500 participants, 48% (n = 502) adhered to all three practices, whereas clinical trials with fewer than 100 participants had an overall adherence rate of only 8% (n = 274). A univariable analysis determined that the odds of prospective registration of trials with <100 participants were 59% lower than those of trials with >500 participants (OR = 0.41; 95%CI: 0.36–0.48). Moreover, the odds of result reporting were 91% lower (OR = 0.09; 95%CI: 0.08–0.11), and the odds of publication of findings were 80% lower (OR = 0.20; 95%CI: 0.17–0.23). Overall, the odds of adherence to all three practices concurrently were 91% lower in trials with <100 participants than in trials with >500 participants (OR = 0.09; 95%CI: 0.08–0.11).
Phase 3 trials had the highest rates of prospective registration, result reporting, and publication of findings of any phase. Phase 3 studies had an adherence of 49% (n = 777) to all three practices, as opposed to phase 1 studies with an adherence of 4% (n = 19). A univariable analysis determined that the odds of prospective registration of a phase 1 trial were 59% lower than those of a phase 3 trial (OR = 0.41; 95%CI: 0.33–0.50). Moreover, the odds of result reporting were 95% lower (OR = 0.05; 95%CI: 0.04–0.06), and the odds of publication of findings were 86% lower (OR = 0.14; 95%CI: 0.11–0.17). Overall, the odds of adherence to all three practices concurrently were 96% lower in phase 1 trials than in phase 3 trials (OR = 0.04; 95%CI: 0.03–0.07).
International clinical trials had higher rates of prospective registration (74%; n = 2193), result reporting (75%; n = 2207), and publication of findings (65%; n = 1924) than trials conducted exclusively at Canadian sites. Overall, international trials had a 42% (n = 1238) adherence to all three practices, while “Canada only” trials had an adherence of 3% (n = 123). A univariable analysis determined that the odds of prospective registration of Canada-only trials were 69% lower than those of international trials (OR = 0.31; 95%CI: 0.28–0.35). Moreover, the odds of result reporting were 96% lower (OR = 0.04; 95%CI: 0.04–0.05), and the odds of publication of findings were 51% lower (OR = 0.49; 95%CI: 0.45–0.54). Overall, the odds of adherence to all three practices concurrently were 95% lower in Canada-only trials than in international trials (OR = 0.05; 95%CI: 0.04–0.06).
The remaining results for the variables of interest are in the Appendix. All the data analyzed, the statistical analyses, and the results pertaining to the top Canadian institutions can be accessed directly on the Open Science Framework (see Open science statement).

Quality assurance results

A random 10.0% sample (n = 672) was selected for manual verification of the publication status. As per the downloaded data from clinicaltrials.gov, 56% (n = 378) of those trials had results published in a medical journal. The quality assurance determined that 70% (n = 470) of those trials were in fact published. Clinicaltrials.gov underestimated the true prevalence of published clinical trials because some trials were published without an NCT ID; some trials were published after we downloaded the data; some trials were published in the form of an abstract, thesis, poster, or dissertation; and clinicaltrials.gov was not able to automatically index some publications in certain journals.

Discussion

Of the almost 7000 trials in our sample, less than two-thirds were prospectively registered; less than half made their results available; and less than two-thirds published their findings. Less than a quarter of the trials completed all three best practices. Trials conducted with exclusively Canadian sites were substantially less compliant with these practices than trials with both Canadian and international sites. Importantly, the trials we describe were not small (Chan and Altman 2005); nearly a third of the trials included between 100 and 500 participants, and 15% (n = 1041) of them included over 500 participants.
Of all the variables included in the logistic regression, four were highly associated with study registration and reporting best practices: clinical trials led by “Industry” (pharmaceutical companies); phase 3 clinical trials; trials with over 500 participants; and clinical trials conducted by a multinational team. The four variables that were negatively associated with study registration and reporting best practices were as follows: clinical trials led by “Academia” (universities); phase 1 clinical trials; trials with fewer than 100 participants; and trials conducted with exclusively Canadian sites.
Some readers will view these results as another example of egregious waste in biomedicine with little improvement since the 2014 landmark Lancet series on research waste (Kleinert and Horton 2014). Patients, who are critical to the success of clinical trials, are likely to be disappointed with these results; their contributions are not being honored. For clinical practice guideline developers, these results indicate that evidence is missing regarding the totality of knowledge about an intervention. For healthcare funders, these results indicate a bad return on investment. If grantees use scarce resources, often taxpayer dollars, and do not prospectively register their trials and/or make their results available in registries or publish their results, everyone loses. Finally, academic institutions may risk their institutional reputation when their faculty members fail to meet minimum national/global standards (WHO; CIHR).
These results are not unique to Canada. Similar results have been reported elsewhere (van den Bogert et al. 2016; Rüegger et al. 2017; Wieschowski et al. 2019; Taylor and Gorman 2022). The overall lack of prospective registration and reporting of clinical trial results may reflect a lack of knowledge on the topic on the part of clinical trials teams and/or their academic institutions. Prospective registration and reporting results are part of a larger ecosystem of open science, which includes transparency. Compared with some other parts of the world, Canada has been slow to publicly embrace the practices of open science (Government of Canada 2022).
The lack of prospective registration may reflect that clinical trial principal investigators are leaving these responsibilities to other team members. Alternatively, funders may not have strong enough adherence monitoring in place. With the advent of automated digital monitoring and the increasing availability of application programming interfaces (APIs), this should be less of a problem. The European Trials Tracker scheme, developed by the University of Oxford's Bennett Institute for Applied Data Science, is an example of existing monitoring on a large scale (https://www.bennett.ox.ac.uk/) (Bennett Institute for Applied Data Science 2022). Funders and academic institutions could use digital dashboards to monitor adherence to clinical trial policies and identify training needs (Cobey et al. 2022).
In the last few years, several initiatives have proposed moving away from traditional metrics of the number of publications (the irony of our results—counting publications—is not lost on us) towards a broader set of best practices that reflect an institutional commitment to research integrity when assessing researchers for hiring, promotion, and tenure (Hicks et al. 2015; Moher et al. 2020; DORA 2022). Current researcher assessment could be augmented by tracking whether faculty members have prospectively registered their trials (and other studies), made their results available on a trial register, and published them (preferably in an open access journal).
It is time for trial funders and academic institutions to collaborate to address the overall lack of adherence of Canadian trials to registration and reporting mandates. In the Canadian context, there is now a requirement for equity, diversity, and inclusion training when applying for grant funding from the CIHR. CIHR has had success in requiring principal investigators to take sex and gender training before they can submit a grant application (Haverfield and Tannenbaum 2021). Something similar could be introduced for clinical trial registration and results reporting. The training could be required for all faculty and research staff involved in conducting trials. This would demonstrate the funders' and academic institutions' commitment to improving this situation. They could also collectively commit to evaluating such an educational intervention by conducting a stepped wedge and/or cluster trial across universities to ensure the training was having the desired effect.
From the earliest attempts to introduce clinical trial registration in the 1980s, there was a strong belief that it might reduce publication bias and provide a more accurate picture of the estimated benefits and harms of interventions. Our results, and others (Scherer et al. 2018; Wieschowski et al. 2019), suggest that publication bias is still a substantive problem despite prevalent views that clinical trials are heavily regulated. Indeed, although regulation via policy exists, if we fail to audit adherence, the policy goals will not be adhered to.
CIHR recently updated their policy guide (see box). The “stick” in this updated guidance is likely to be the lack of future funding for principal investigators unless their current trials are registered, and the results made available. Importantly, the agency will monitor adherence annually “by asking impacted researchers to provide clinical trial registration identifiers, and links to summary results and open access publications”.
“The following new requirements apply to all clinical trial grants funded on or after 1 January 2022:
Public disclosure of results must be done within a mandated time frame:
○ publications describing clinical trial results must be open access from the date of publication;
○ summary results must be publicly available within 12 months from the last visit of the last participant (for the collection of data on the primary outcome); and
All study publications must include the registration number/Trial ID (to be specified in the article summary/abstract).
Nominated principal investigators receiving CIHR grant funds for clinical trial research after 1 January 2022 must comply with the above requirements to remain eligible for any new CIHR funding”.
Despite the recently updated policy guide, CIHR stated that in 2022, only 57% of CIHR-funded trials “had registered their clinical trial in a publicly available, free to access, searchable clinical trial registry complying with WHO's international agreed standards before the first visit of the first participant” (Government of Canada CI of HR 2023). Adherence to the new policy guide may only be realized for future CIHR-funded trials once nominated principal investigators lose funding eligibility for their lack of compliance.

Limitations

We relied on the data reported on clinicaltrials.gov. This is the most widely used registration platform (Zarin et al. 2011), accounting for the vast majority of all clinical trial registrations. We only tracked forward trials that were marked as “completed” on clinicaltrials.gov; this means we missed records that were registered but never updated to “completed” despite the trials having ended. The results are therefore likely worse than we report here. Moreover, we only tracked studies that were registered on clinicaltrials.gov in the first place. Had we also analyzed trials that were never registered on clinicaltrials.gov, adherence may have been lower. Furthermore, we did not analyze the length of time between the study completion date and the publication date. In other words, some studies may have posted their results and published their findings a decade after study completion, despite the recommendation to do so within 2 years (WHO 2022). Despite all of this, less than one-quarter of the sample adhered to all three best practices. Some may argue that trials with non-statistically significant results take longer to publish; however, a recent review found no difference in time to publication based on the statistical significance of the trial (Jefferson et al. 2016).
Another limitation is that we relied on clinicaltrials.gov to determine the publication status of study findings. Ideally, we would have completed quality assurance for our entire sample; however, due to resource constraints, we limited quality assurance to 10% of the sample. As our quality assurance demonstrated, clinicaltrials.gov underestimated the true prevalence of publications by roughly 14 percentage points: clinicaltrials.gov indicated that 56% (n = 378) of the audited trials were published, but our quality assurance determined the true number to be 70% (n = 470). This is in line with a recent study highlighting that over a third of the trials it analyzed on clinicaltrials.gov had no results available on either clinicaltrials.gov or PubMed up to 36 months after their primary completion date (Nelson et al. 2023). Publications were missed by clinicaltrials.gov for the following reasons: the publication did not include an NCT ID; the publication date was after we collected the data; the publication took the form of an abstract, thesis, or poster; or, finally, the publication appeared in a journal not indexed by clinicaltrials.gov. Overall, this underestimation of publication status suggests that, despite low levels of prospective registration and results reporting, more results from these trials are being published than the registry reflects. All three best practices must be adhered to concurrently to maintain a high level of scientific rigor: if findings are published without prospective registration or results reporting, bias may be introduced into the published findings. The underestimation of publication status therefore does not alter our conclusion that there is substantial room for improvement. Ultimately, our analysis is limited by how comprehensive and detailed the documentation on clinicaltrials.gov is.
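The quality-assurance arithmetic above can be sketched as follows. This is an illustrative recomputation only: the audited subsample size (10% of 6720, i.e., 672 trials) is inferred from the stated percentages rather than reported explicitly in the text.

```python
# Recompute the quality-assurance figures from the reported counts.
# The subsample size (10% of 6720 = 672) is an inference, not a stated value.
total_trials = 6720
qa_sample = round(0.10 * total_trials)   # 672 trials audited
registry_indexed = 378                   # publications found via clinicaltrials.gov
verified = 470                           # publications found after manual checking

registry_rate = registry_indexed / qa_sample   # -> ~56%
verified_rate = verified / qa_sample           # -> ~70%
underestimate = verified_rate - registry_rate  # -> ~14 percentage points

print(f"Registry-indexed publication rate: {registry_rate:.0%}")
print(f"Verified publication rate: {verified_rate:.0%}")
print(f"Underestimation: {underestimate:.1%}")
```

Note that the "roughly 14%" gap is a difference in percentage points (70% minus 56%), not a 14% relative change.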
In summary, our analysis of nearly 7000 Canadian trials registered on clinicaltrials.gov indicates substantial room for improvement in ensuring that trials are prospectively registered at inception (before the first participant is randomized), that results are reported in a publicly accessible registry, and that completed trials are published, preferably in an open access platform or journal. The consequences of failing to monitor adherence to these activities are profound and wasteful. The international AllTrials initiative, signed by more than 700 organizations, has drawn attention to public support for these policies (AllTrials 2022). Several Canadian-based foundations have signed this declaration (e.g., the Canadian Cancer Research Alliance, Canadian Agency for Drugs and Technologies in Health, Canadian Medical Association, and Canadian HIV Trials Network). Other stakeholders, such as Health Canada and funders of academic trials like the federal Tri-Agency, ought also to commit to the AllTrials initiative and its broader principles. Canada should join the global movement to address research waste caused by non-compliant registration and reporting of trials and seek to lead in identifying solutions.

References

Alayche M., Cobey K.D., Ng J.Y., Ardern C.L., Khan K.M., Chan A.W., et al. 2022. Evaluating prospective study registration and result reporting of trials conducted in Canada from 2009-2019. medRxiv. Available from https://www.medrxiv.org/content/10.1101/2022.09.01.22279512v1.
AllTrials. 2022. AllTrials. Available from https://www.alltrials.net/.
Bennett Institute for Applied Data Science—University of Oxford. 2022. Available from: https://www.bennett.ox.ac.uk/.
Chalmers I., Glasziou P., Godlee F. 2013. All trials must be registered and the results published. British Medical Journal, 346: f105.
Chan A.W., Altman D.G. 2005. Epidemiology and reporting of randomised trials published in PubMed journals. The Lancet, 365(9465): 1159–1162.
Cobey K., Haustein S., Brehaut J., Dirnagl U., Franzen D., Hemkens L., et al. 2022. Establishing a core set of open science practices in biomedicine: a modified Delphi study. Available from https://europepmc.org/article/PPR/PPR511112.
Cobey K.D., Fergusson D., Moher D. 2017. Canadian funders and institutions are lagging on reporting results of clinical trials. Canadian Medical Association Journal, 189(42): E1302–E1303.
Cuschieri S. 2019. The STROBE guidelines. Saudi Journal of Anaesthesia, 13(Suppl. 1):31.
DeVito N.J., Bacon S., Goldacre B. 2018. FDAAA TrialsTracker: a live informatics tool to monitor compliance with FDA requirements to report clinical trial results. p. 266452. Available from https://www.biorxiv.org/content/10.1101/266452v3.
Dickersin K. 1990. The existence of publication bias and risk factors for its occurrence. Journal of the American Medical Association, 263(10): 1385–1389.
DORA. 2022. Declaration on research assessment. DORA. Available from https://sfdora.org/.
Dubben H.H., Beck-Bornholdt H.P. 2005. Systematic review of publication bias in studies on publication bias. British Medical Journal, 331(7514): 433–434.
Dwan K., Gamble C., Williamson P.R., Kirkham J.J. 2013. Systematic review of the empirical evidence of study publication bias and outcome reporting bias—an updated review. PLoS One, 8(7): e66844.
Government of Canada. 2022. The open science dialogues: summary of stakeholders round tables—science.Gc.ca. Available from https://www.ic.gc.ca/eic/site/063.nsf/eng/h_98359.html.
Government of Canada CI of HR. 2020. CIHR signs the World Health Organization's Joint Statement on Public disclosure of results from clinical trials—CIH. Available from https://cihr-irsc.gc.ca/e/52189.html.
Government of Canada CI of HR. 2023. CIHR's monitoring of clinical trials registration and disclosure 2022 – CIHR. Available from https://cihr-irsc.gc.ca/e/53444.html [accessed 11 June 2023].
Haverfield J., Tannenbaum C. 2021. A 10-year longitudinal evaluation of science policy interventions to promote sex and gender in health research. Health Research Policy and Systems, 19(1): 94.
Hicks D., Wouters P., Waltman L., de Rijcke S., Rafols I. 2015. Bibliometrics: the Leiden Manifesto for research metrics. Nature, 520(7548): 429–431.
How to find results of studies—ClinicalTrials.gov. 2022. Available from https://www.clinicaltrials.gov/ct2/help/how-find/find-study-results#Published.
Jefferson L., Fairhurst C., Cooper E., Hewitt C., Torgerson T., Cook L., et al. 2016. No difference found in time to publication by statistical significance of trial results: a methodological review. JRSM Open, 7(10): 205427041664928.
Kleinert S., Horton R. 2014. How should medical science change? The Lancet, 383(9913): 197–198.
Marks‐Anglin A., Chen Y. 2020. A historical review of publication bias. Research Synthesis Methods, 11(6): 725–742.
McGauran N., Wieseler B., Kreis J., Schüler Y.B., Kölsch H., Kaiser T. 2010. Reporting bias in medical research—a narrative review. Trials, 11(1): 37.
Moher D. 1993. Clinical-trial registration: a call for its implementation in Canada. Canadian Medical Association Journal, 149(11): 1657–1658.
Moher D., Bouter L., Kleinert S., Glasziou P., Sham M.H., Barbour V., et al. 2020. The Hong Kong Principles for assessing researchers: fostering research integrity. PLoS Biology, 18(7): e3000737.
Nelson J.T., Tse T., Puplampu-Dove Y., Golfinopoulos E., Zarin D.A. 2023. Comparison of availability of trial results in ClinicalTrials.gov and PubMed by data source and funder type. Journal of the American Medical Association, 329(16): 1404–1406.
Rees C.A., Pica N., Monuteaux M.C., Bourgeois F.T. 2019. Noncompletion and nonpublication of trials studying rare diseases: a cross-sectional analysis. PLoS Medicine, 16(11): e1002966.
Ross J.S., Mulvey G.K., Hines E.M., Nissen S.E., Krumholz H.M. 2009. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Medicine, 6(9): e1000144.
Rüegger C.M., Dawson J.A., Donath S.M., Owen L.S., Davis P.G. 2017. Nonpublication and discontinuation of randomised controlled trials in newborns. Acta Paediatrica, 106(12): 1940–1944.
Scherer R.W., Meerpohl J.J., Pfeifer N., Schmucker C., Schwarzer G., von Elm E. 2018. Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews, 11: MR000005.
Song F., Parekh S., Hooper L., Loke Y.K., Ryder J., Sutton A.J., et al. 2010. Dissemination and publication of research findings: an updated review of related biases. Health Technology Assessment, 14(8): iii, ix–xi, 1–193.
Stern J.M., Simes R.J. 1997. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. British Medical Journal, 315(7109): 640–645.
Taylor N.J., Gorman D.M. 2022. Registration and primary outcome reporting in behavioral health trials. BMC Medical Research Methodology, 22(1): 41.
Tri-Council Policy Statement: Ethical Conduct for research involving humans—TCPS 2 (2018). Available from https://ethics.gc.ca/eng/policy-politique_tcps2-eptc2_2018.html.
U.S. National Library of Medicine. 2019. ClinicalTrials.gov background. Available from https://clinicaltrials.gov/ct2/about-site/background.
van den Bogert C.A., Souverein P.C., Brekelmans C.T.M., Janssen S.W.J., Koëter G.H., Leufkens H.G.M., et al. 2016. Non-publication is common among phase 1, single-center, not prospectively registered, or early terminated clinical drug trials. PLoS One, 11(12): e0167709.
WHO. 2022. Statement on public disclosure of clinical trial results. Available from https://www.who.int/news/item/09-04-2015-japan-primary-registries-network.
Wieschowski S., Riedel N., Wollmann K., Kahrass H., Müller-Ohlraun S., Schürmann C., et al. 2019. Result dissemination from clinical trials conducted at German university medical centers was delayed and incomplete. Journal of Clinical Epidemiology, 115: 37–45.
WMA. 2021. The World Medical Association—WMA Declaration of Helsinki—ethical principles for medical research involving human subjects. Available from https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/.
Zarin D.A., Tse T., Williams R.J., Califf R.M., Ide N.C. 2011. The ClinicalTrials.gov results database—update and key issues. New England Journal of Medicine, 364(9): 852–860.

Appendix

Variable of interest: biological sex of participants
Table A1. Adherence of Canadian trials to study registration and reporting best practices based on biological sex of participants.

| Biological sex of participants | Total number of studies | Prospective registration | % | Results reported | % | Published | % | All three practices | % |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All | 5932 | 3572 | 60.22% | 2403 | 40.51% | 3319 | 55.95% | 1265 | 21.33% |
| Female | 493 | 241 | 48.88% | 143 | 29.01% | 255 | 51.72% | 49 | 9.94% |
| Male | 295 | 154 | 52.20% | 96 | 32.54% | 150 | 50.85% | 47 | 15.93% |
| Total | 6720 | 3967 | 59.03% | 2642 | 39.32% | 3724 | 55.42% | 1361 | 20.25% |
Variable of interest: type of masking
Table A2. Adherence of Canadian trials to study registration and reporting best practices based on type of masking.

| Masking | Total number of studies | Prospective registration | % | Results reported | % | Published | % | All three practices | % |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| None | 2950 | 1706 | 57.83% | 1040 | 35.25% | 1450 | 49.15% | 469 | 15.90% |
| Single | 888 | 421 | 47.41% | 142 | 15.99% | 509 | 57.32% | 60 | 6.76% |
| Double | 1019 | 650 | 63.79% | 506 | 49.66% | 603 | 59.18% | 274 | 26.89% |
| Triple | 624 | 377 | 60.42% | 293 | 46.96% | 356 | 57.05% | 160 | 25.64% |
| Quadruple | 1224 | 809 | 66.09% | 659 | 53.84% | 797 | 65.11% | 397 | 32.43% |
| Not reported | 15 | 4 | 26.67% | 2 | 13.33% | 9 | 60.00% | 1 | 6.67% |
| Total | 6720 | 3967 | 59.03% | 2642 | 39.32% | 3724 | 55.42% | 1361 | 20.25% |
Variable of interest: intervention design
Table A3. Adherence of Canadian trials to study registration and reporting best practices based on intervention design.

| Intervention design | Total number of studies | Prospective registration | % | Results reported | % | Published | % | All three practices | % |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Parallel assignment | 4303 | 2712 | 63.03% | 1923 | 44.69% | 2668 | 62.00% | 1090 | 25.33% |
| Single group assignment | 1561 | 879 | 56.31% | 537 | 34.40% | 683 | 43.75% | 203 | 13.00% |
| Crossover assignment | 702 | 300 | 42.74% | 147 | 20.94% | 287 | 40.88% | 45 | 6.41% |
| Factorial assignment | 82 | 34 | 41.46% | 7 | 8.54% | 48 | 58.54% | 3 | 3.66% |
| Sequential assignment | 54 | 31 | 57.41% | 19 | 35.19% | 26 | 48.15% | 12 | 22.22% |
| Not reported | 18 | 11 | 61.11% | 9 | 50.00% | 12 | 66.67% | 8 | 44.44% |
| Total | 6720 | 3967 | 59.03% | 2642 | 39.32% | 3724 | 55.42% | 1361 | 20.25% |
Variable of interest: allocation
Table A4. Adherence of Canadian trials to study registration and reporting best practices based on allocation.

| Allocation | Total number of studies | Prospective registration | % | Results reported | % | Published | % | All three practices | % |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Randomized | 4965 | 2954 | 59.50% | 2002 | 40.32% | 2955 | 59.52% | 1106 | 22.28% |
| Non-randomized | 498 | 304 | 61.04% | 197 | 39.56% | 226 | 45.38% | 86 | 17.27% |
| N/A | 1235 | 699 | 56.60% | 437 | 35.38% | 533 | 43.16% | 166 | 13.44% |
| Not reported | 22 | 10 | 45.45% | 6 | 27.27% | 10 | 45.45% | 3 | 13.64% |
| Total | 6720 | 3967 | 59.03% | 2642 | 39.32% | 3724 | 55.42% | 1361 | 20.25% |
Variable of interest: arms
Table A5. Adherence of Canadian trials to study registration and reporting best practices based on number of arms.

| Number of arms | Total number of studies | Prospective registration | % | Results reported | % | Published | % | All three practices | % |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1302 | 733 | 56.30% | 467 | 35.87% | 563 | 43.24% | 180 | 13.82% |
| 2 | 3555 | 2127 | 59.83% | 1279 | 35.98% | 2081 | 58.54% | 696 | 19.58% |
| 3 | 910 | 551 | 60.55% | 446 | 49.01% | 542 | 59.56% | 246 | 27.03% |
| 4 | 473 | 274 | 57.93% | 222 | 46.93% | 277 | 58.56% | 118 | 24.95% |
| Not reported | 60 | 27 | 45.00% | 1 | 1.67% | 23 | 38.33% | 0 | 0.00% |
| Total | 6720 | 3967 | 59.03% | 2642 | 39.32% | 3724 | 55.42% | 1361 | 20.25% |

Note: A total of 480 clinical trials in our sample had a “number of arms” reported between 5 and 24. Those trials were excluded from this table. The “Total” row however reflects the entire sample (n = 6720).
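The percentages in Tables A1–A5 are row-wise proportions of each row's total number of studies. As an illustrative check (values taken from the "Randomized" row of Table A4; the two-decimal rounding convention is assumed to match the tables):

```python
# Recompute the adherence percentages for the "Randomized" row of Table A4.
row = {"total": 4965, "prospective": 2954, "reported": 2002,
       "published": 2955, "all_three": 1106}

def pct(n: int, total: int) -> str:
    """Proportion of the row total, rounded to two decimals as in the tables."""
    return f"{100 * n / total:.2f}%"

for practice in ("prospective", "reported", "published", "all_three"):
    # Expected: 59.50%, 40.32%, 59.52%, 22.28% respectively.
    print(practice, pct(row[practice], row["total"]))
```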

Information & Authors


Published In

FACETS
Volume 8, January 2023
Pages: 1 - 10
Editor: Tanzy Love

History

Received: 23 September 2022
Accepted: 7 August 2023
Version of record online: 23 November 2023

Key Words

  1. reporting
  2. trial registration
  3. transparency
  4. reporting best practices
  5. publication bias
  6. clinical trials

Plain Language Summary

Lack of Adherence to Best Practices in Canadian Clinical Trials: Improving the Implementation Gap

Authors

Affiliations

Centre for Journalology, Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Canada
Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Methodology, Project administration, Validation, Visualization, Writing – original draft, and Writing – review & editing.
University of Ottawa Heart Institute, Ottawa, Canada
School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Conceptualization, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, and Writing – review & editing.
Centre for Journalology, Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Canada
Author Contributions: Validation and Writing – review & editing.
Sport and Exercise Medicine Research Centre, La Trobe University, Melbourne, Australia
Department of Family Practice, University of British Columbia, Vancouver, Canada
Author Contributions: Validation and Writing – review & editing.
Karim M. Khan
Department of Family Practice and School of Kinesiology, University of British Columbia, Vancouver, Canada
Author Contributions: Validation and Writing – review & editing.
An-Wen Chan
Department of Medicine, Women's College Research Institute, Toronto, Canada
Institute of Health Policy, Management and Evaluation, University of Toronto, Toronto, Canada
Author Contributions: Validation and Writing – review & editing.
Centre for Journalology, Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Canada
Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Formal analysis, Validation, and Writing – review & editing.
School of Medicine, University College Dublin, Dublin, Ireland
Author Contributions: Formal analysis, Validation, and Writing – review & editing.
Gerstein Science Information Centre, University of Toronto, Toronto, Canada
Author Contributions: Formal analysis, Validation, and Writing – review & editing.
Centre for Journalology, Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Canada
Author Contributions: Formal analysis, Validation, and Writing – review & editing.
Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Formal analysis, Validation, and Writing – review & editing.
Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Formal analysis, Validation, and Writing – review & editing.
Centre for Journalology, Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Canada
Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Validation and Writing – review & editing.
Centre for Journalology, Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Canada
School of Epidemiology and Public Health, Faculty of Medicine, University of Ottawa, Ottawa, Canada
Author Contributions: Conceptualization, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Visualization, and Writing – review & editing.

Author Contributions

Conceptualization: MA, KDC, DM
Data curation: MA
Formal analysis: MA, RC, MM, APA, SE, JG, IA
Funding acquisition: MA
Investigation: MA, KDC, DM
Methodology: MA, KDC, DM
Project administration: MA, KDC, DM
Resources: KDC, DM
Supervision: KDC, DM
Validation: MA, KDC, JYN, CLA, KMK, AWC, RC, MM, APA, SE, JG, IA, JVW, DM
Visualization: MA, KDC, DM
Writing – original draft: MA
Writing – review & editing: MA, KDC, JYN, CLA, KMK, AWC, RC, MM, APA, SE, JG, IA, JVW, DM

Competing Interests

The authors declare no conflicts of interest.

Funding Information

Faculty of Medicine, University of Ottawa: Summer Studentship Program
This work was supported by the Summer Studentship Program from the University of Ottawa Faculty of Medicine.
