Evidence is essential for identifying treatment priorities and developing policies, said Claire Allen of Evidence Aid at Evidence-Based Medicine Live (EBMLive), which took place on 15–17 July in Oxford, UK. Evidence is also key to justifying clinical decision-making. But is it sufficient for making such decisions?
Based on evidence alone, biopsy – an invasive and often embarrassing procedure – would remain the standard approach for diagnosing prostate cancer, even though magnetic resonance imaging (MRI) is a viable alternative. This example, given by Dr Eve O’Toole of the National Cancer Control Programme, demonstrates the need to involve patients in clinical decision-making and to make that process transparent. Organizations such as the National Institute for Health and Care Excellence have taken steps toward improving transparency in decision-making by opening up meetings to the public, and apps such as MAGICapp have been developed to help patients become involved in their own treatment plans.
Clinical trials are still criticized for their lack of transparency, with cases of selective reporting and outcome switching. The COMPare study, published earlier this year, identified more than 50 trials across five high-impact journals with discrepancies between their pre-specified and reported outcomes. Correction letters were published for only 40% of these trials, with a wide range among journals (0–100%). The audience at EBMLive pushed for clear guidelines to help peer reviewers identify cases of selective reporting, and for more support for authors who are under pressure from journals to tell a more positive story.
Although publishers clearly have a part to play in improving clinical transparency, patient activists Kath Sansom (Sling the Mesh), Marie Lyon (Association for Children Damaged by Hormone Pregnancy Tests) and Susan Cole (Valproate Victims UK) focused their attention on industry and regulatory bodies. Kath Sansom in particular highlighted the need for transparent declarations of conflicts of interest (COIs) when developing guidelines for clinical decision-making.
The call to improve disclosure of COIs was the focus of the final day of EBMLive. An entertaining whistle-stop tour through the history of COIs and the advent of drug reps ended with GP Dr Pete Deveson explaining the difficulty of noticing hidden motives, and therefore of challenging them. His suggested solutions? Speak up, ask more questions and use tools such as whopaysthisdoctor.org to self-declare and to search declarations.
Dr Kate Mandeville of Medic to Medic took the audience to new ground, asking which forms of communication they feel are missing COI disclosures; in the future, guidelines could, and should, include them. An interactive session led by Dr Margaret McCartney (Radio 4’s Inside Health), Professor Carl Heneghan (University of Oxford) and Dr Helen Macdonald (The BMJ) saw the attendees take the reins, sharing their thoughts on the current problems with disclosure of COIs and the changes needed to improve transparency across academia, clinical practice and education. The attendees were united in the belief that COIs need to be more clearly defined and perhaps standardized, whether in reporting guidelines or through regulations that explain what to disclose, when to disclose it and how far back disclosures should go. Students should also be educated on COIs and their importance in increasing public trust.
Other presentations covered questionable research practices, bias in medical research and the reproducibility of evidence in healthcare. Professor Isabelle Boutron of Paris Descartes University emphasized the high prevalence of ‘spin’ in the literature: the practice that makes ‘the tip of the iceberg look good’, or in other words, the misrepresentation of study results. Isabelle made clear that it is the responsibility of researchers to report data without spin; failure to do so can lead to spin in health news stories and to the public misinterpreting treatment benefits.
Strong words from Professor John Ioannidis of Stanford University ended EBMLive for this year. John highlighted that data need to be reproducible, with proper analysis plans peer reviewed before a study begins, and that data, especially from publicly funded studies, should be shared. He stated that the accumulation of evidence from a data source will create a greater impact than a single analysis of the data. One incentive for this would be to give data curators credit for any analyses built on their data.