It was pointed out years ago that there is far too much published research for the average doctor to keep up with. Even just staying aware of the guidelines that distil research into best practice means reading hundreds of pages, including for conditions you might see only infrequently.
Of course you don’t always know if the patient in front of you is typical of the condition being discussed – research often excludes complicated cases (or children, or pregnant women).
Even then – “most published research findings are false” [Ioannidis, PLoS Medicine 2005]. Many findings are never confirmed by further studies, and “knowledge” rests on a p value of <0.05. Whether a significant result is likely to be true actually depends on the pre-test probability: is the relationship being tested plausible from what is already known, or is the field just generating lots of data without established relationships? Even if the research design is perfect, there is bias, e.g. selective reporting or manipulation of the analysis. Different studies may use different end points or definitions, which increases the chance that findings are false. True findings can be lost in noise or concealed by conflicts of interest, and fixed beliefs may be as prejudicial as financial conflicts. Expert opinion often differs from the outcomes of meta-analysis. Small studies and small effects mean any significant result is more likely to be false.
In the model discussed in this study, a positive finding from an underpowered early-phase clinical trial is likely to be misleading about 75% of the time, even before you consider bias. And if you are talking about a field where there probably isn’t any real relationship between the things being studied, then large effects with high significance may simply reflect the degree of bias, and should potentially be seen as a warning rather than as something exciting!
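To see where a figure like that can come from (the specific power and pre-study odds below are illustrative, chosen to reproduce the rough proportion quoted, not taken from the paper): Ioannidis frames it as a positive predictive value, PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that the relationship is real, 1 − β is the study’s power and α the significance threshold. With α = 0.05, power of 20% and pre-study odds of about 1 in 12, PPV = (0.2 × 0.083) / (0.083 − 0.8 × 0.083 + 0.05) ≈ 0.25 – roughly three out of four “positive” results would be false, before adding any bias at all.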
Authors don’t check primary sources, so misconceptions propagate. Peer review is inefficient, inconsistent and insufficient. Post-publication retractions are messy and difficult. See the problem of citations, below.
Systematic reviews are not kept up to date – in fact, they are usually already out of date when published…
Authors of guidelines have a particular duty to ensure rigorous analysis.
The average 10-minute consultation produces at least one unanswered question.
[Richard Smith, BMJ 2010]
The problem of citations
The citation error rate in the biomedical literature is estimated at 11–15%. This propagates mistakes (even academic urban legends, e.g. the iron content of spinach being exaggerated by a misplaced decimal point in a 1930s paper – which I have not verified) and undermines respect for literature reviews.
Errors can be citations of non-existent findings, incorrect interpretations of findings, or (about 20% of errors) chains of errors copied from citation to citation. Sometimes a hypothesis becomes a “fact” simply through repeated citation.
One surgical study was found to be misquoted by 40% of the articles that cited it!
AI could either help with this or make it worse. CONSENSUS.app is an AI-powered search engine for academics.
Best would be a declaration that the authors have read the original papers they cite and checked them for accuracy and relevance.