Basing Treatments on Trustworthy Evidence


“It is scientifically proven” – “It is significant” – “The paper was in a high-ranking medical journal”

Arguments for the trustworthiness and credibility of medical information are many, and they often involve the words “scientific” or “significant” – best of all, both.

Unfortunately, neither being “scientific” nor being “significant” equals being meaningful for patients and healthcare professionals. In fact, an estimated 85% of research resources do not lead to useful knowledge.


Whether you are a healthcare professional or someone seeking healthcare, you want access to the best and most trustworthy information to guide health decisions. This information comes almost exclusively from medical research, which is why the trustworthiness and reliability of research findings are crucial.

The results presented in a medical paper are not the answer to the research question

The results in medical papers are, of course, correct in presenting what the analysis of the data has found. However, it is extremely important to be aware that the results are not the answer to the research question – they are only an analysis of the data collected in that specific study. Study design and research method, as well as the technique used to analyse the data, are therefore crucial for providing useful information to clinicians and their patients. The design of the study, the study population, the study settings, the study duration, etc. must be relevant and make sense at the point of care, and must reflect the problems and decisions patients and healthcare professionals meet.

Significance does not distinguish between “right” and “wrong”

Statistical significance is the most used “proof of effect” in clinical research and evidence-based medicine. But statistical significance cannot serve as such proof, no matter how “highly significant” the findings are. Significance is most often documented with one or more p-values. You might wish that the “p” in p-value stood for “patient,” but it does not. It stands for “probability”: the probability of getting the observed result, or an even more extreme one, by chance alone. Traditionally, a chance of 1 in 20 (p ≤ 0.05) is chosen as the cut-off. There is no good reason or scientific argument for this particular value; it is simply “what everyone else seems to do,” and as such a matter of tradition.
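The role of chance can be made concrete with a small simulation. The sketch below is illustrative only – the helper `two_sided_p` and all parameters are invented for this example. It repeats a “study” with no real effect (flipping a fair coin 100 times) many times, and counts how often the result still comes out “significant” at p ≤ 0.05 purely by chance:

```python
import random
import math

def two_sided_p(heads, n):
    """Two-sided p-value for observing `heads` in `n` fair-coin flips,
    using the normal approximation to the binomial distribution."""
    mean = n / 2
    sd = math.sqrt(n * 0.25)
    z = abs(heads - mean) / sd
    # erfc(z / sqrt(2)) is the two-sided tail probability of a standard normal
    return math.erfc(z / math.sqrt(2))

random.seed(42)
n_experiments = 10_000
n_flips = 100
false_positives = 0
for _ in range(n_experiments):
    # A fair coin: there is no real effect to find
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if two_sided_p(heads, n_flips) <= 0.05:
        false_positives += 1

print(f"'Significant' results with no real effect: {false_positives / n_experiments:.1%}")
```

As expected, roughly one in twenty of these no-effect “studies” crosses the significance threshold anyway – which is exactly what p ≤ 0.05 means, and why significance alone cannot distinguish a real effect from chance.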

The p-value is directly related to the collected data, but not to the research question. Careful study design and study management provide the best data, and thereby the best chance of trustworthy results – provided that the analysis of the data is correct.

There is something “magical” about p ≤ 0.05. If the p-value is above 0.05, the results of the study are seen as “not proved.” Yet if a study compares a new treatment with the standard treatment for a given condition and the p-value is 0.06, the data can still show clinically relevant differences. And if the difference between the two treatments is not clinically relevant, that is also useful information: we then do not need to consider the new treatment, since it does not lead to better outcomes – provided, again, that the study is well designed and managed.

Results with p-values above 0.05 are often not published. Researchers may see them as negative and decline to publish, or the papers are rejected by the editors of medical journals. A research project looking at more than 1.6 million papers published from 1990 to 2015 found that 96% of papers presenting p-values had one or more p-values at or below 0.05. To reduce the risk of post-study adjustments being made in an effort to reach the magical low p-values, researchers are encouraged to publish a formalised description of the study, before the study starts, in a public registry. Unfortunately, compliance with researchers' own pre-published protocols seems to be rather low.
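Publishing only the “significant” studies does more than hide negative results: it systematically exaggerates the effects that do get reported. The sketch below is a toy simulation (the helper `z_test_p`, the effect size, and the sample sizes are all invented for illustration). It simulates many small trials of a treatment with a genuine but modest effect, then compares the average effect across all trials with the average among only those that reached p ≤ 0.05:

```python
import random
import math

def z_test_p(diff, n, sd):
    """Two-sided p-value for a difference in means between two groups of
    size n, assuming known standard deviation (normal approximation)."""
    se = sd * math.sqrt(2 / n)
    z = abs(diff) / se
    return math.erfc(z / math.sqrt(2))

random.seed(1)
true_effect = 0.2   # a small but real benefit (in standard-deviation units)
n = 50              # patients per arm in each simulated trial
all_effects, published_effects = [], []
for _ in range(5_000):
    control = [random.gauss(0, 1) for _ in range(n)]
    treated = [random.gauss(true_effect, 1) for _ in range(n)]
    diff = sum(treated) / n - sum(control) / n
    all_effects.append(diff)
    if z_test_p(diff, n, 1.0) <= 0.05:   # "published" only if significant
        published_effects.append(diff)

print(f"true effect: {true_effect}")
print(f"mean effect, all trials:         {sum(all_effects) / len(all_effects):.2f}")
print(f"mean effect, 'significant' only: {sum(published_effects) / len(published_effects):.2f}")
```

The average across all simulated trials recovers the true effect, while the “published” subset roughly doubles it – a concrete picture of why a literature filtered at p ≤ 0.05 overstates how well treatments work.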


Trustworthy evidence

Luckily, many studies provide good and reliable results that can help guide clinical decisions, but unfortunately, many published research findings are false or exaggerated. Researchers from Stanford University have estimated that 85% of research resources do not lead to useful knowledge. That is a huge number.

The process of carefully analysing and evaluating a medical paper is time-consuming and complicated. Time must, just like money, always be taken from somewhere else – in this case, often from time spent with patients.

Kim Kristiansen, M.D.