[JC SERIES] Episode 4: Oslerphiles Rejoice: We're Gonna Talk DIAGNOSIS
Don't tune out yet - this is actually pretty important stuff!
Diagnostic PICO questions
Really, with diagnostic foreground questions we are asking: does a particular piece of subjective or objective data (e.g. a test, a physical exam finding, imaging) have value in ruling IN or ruling OUT a condition? And if so, HOW valuable is it?
JAMA RCE: https://jamanetwork.com/collections/6257/the-rational-clinical-examination
Two main approaches to diagnosis:
Pattern recognition
System 1 processing
Fast, efficient and instinctive
E.g. recognition of the dermatomal rash of herpes zoster
Probabilistic diagnostic reasoning
System 2 processing
Slow, analytic
E.g. determining whether a patient with acute onset shortness of breath has pulmonary embolism vs heart failure vs pneumonia, etc
Probabilistic Diagnostic Reasoning
Pre-test probability: what is the probability that my patient has disease x prior to any workup?
Test threshold: the pre-test probability above which it is worth ordering a given diagnostic test to help rule in or rule out disease x
Post-test probability: what is the probability that my patient has disease x after my workup?
Treatment threshold: the post-test (or pre-test) probability threshold necessary for you to initiate treatment for disease x
Check out the helpful graphic to the left.
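To make the threshold idea concrete, here is a minimal Python sketch of the classic test/treatment threshold model; the function name and all of the numbers are hypothetical, chosen purely for illustration:

```python
def next_step(pretest_prob, test_threshold, treatment_threshold):
    """Toy version of the test/treatment threshold model.

    Below the test threshold: don't test, don't treat.
    At or above the treatment threshold: treat without further testing.
    In between: order the test and let the result move the probability.
    """
    if pretest_prob < test_threshold:
        return "don't test, don't treat"
    if pretest_prob >= treatment_threshold:
        return "treat without further testing"
    return "order the test"

# Hypothetical numbers: 30% pre-test probability, 5% test threshold, 80% treatment threshold
print(next_step(0.30, test_threshold=0.05, treatment_threshold=0.80))  # -> order the test
```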
Diagnosis: the literature
Sometimes we need to move to the literature to ask HOW useful a given test is.
Ideally, our diagnostic tests will move us FROM our pre-test probability TO:
An extremely high post-test probability (i.e. RULE IN)
OR
An extremely low post-test probability (i.e. RULE OUT)
Study Format & Risk of Bias
There really isn’t any ONE study design used for diagnostic tests. They are often studied observationally, but sometimes with RCTs or systematic reviews as well.
There are a number of biases we need to be on the lookout for:
Spectrum Bias - when a study of a diagnostic test compares florid cases of the disease with asymptomatic, healthy volunteers
Not helpful because this isn’t representative of our patient population
The same thing happened with carcinoembryonic antigen (CEA) testing and with the urine dipstick for the diagnosis of UTI
Partial verification bias - when a study of a diagnostic test doesn’t expose all patients to the reference or gold standard test.
Clinicians may be more likely to send a patient with a positive stress test for left heart catheterization (LHC) than a patient with a negative stress test. This can lead to overestimation of the utility of stress testing for coronary artery disease.
Test result blinding - you want to make sure those interpreting the results of a given diagnostic test are blinded to the results of the reference (gold standard) test – or else it may influence how hard they look for a given condition, skewing the results
STATS IN 60 SECONDS OR LESS (SISSOL) TOPICS
Type I and Type II Errors
Type I - false positive
Type II - false negative
Sensitivity and Specificity
Sensitivity - true positive rate
Aka power of detection
A test that is 99% sensitive for a given condition will have a very LOW rate of false NEGATIVES
Great for ruling OUT a condition (SnNout: a Sensitive test, when Negative, helps rule OUT)
Specificity - true negative rate
A test that is 99% specific is going to have a very LOW rate of false POSITIVES
Great for ruling IN a condition (SpPin: a Specific test, when Positive, helps rule IN)
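To see both definitions in action, here is a minimal Python sketch computing them from a 2x2 table; the counts are invented purely for illustration (note how the false positives and false negatives line up with the Type I and Type II errors above):

```python
# Hypothetical 2x2 table; counts invented for illustration.
TP, FN = 90, 10  # diseased patients: test positive / test negative (FN = Type II error)
FP, TN = 5, 95   # healthy patients:  test positive / test negative (FP = Type I error)

sensitivity = TP / (TP + FN)  # true positive rate; 1 - sensitivity = false negative rate
specificity = TN / (TN + FP)  # true negative rate; 1 - specificity = false positive rate

print(f"Sensitivity = {sensitivity:.2f}")  # 0.90: few false negatives -> a NEGATIVE result helps rule OUT
print(f"Specificity = {specificity:.2f}")  # 0.95: few false positives -> a POSITIVE result helps rule IN
```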
Likelihood ratios
We calculate two types:
Positive likelihood ratio (for positive test results)
Negative likelihood ratio (for negative test results)
Used to evaluate two things:
The utility of a particular diagnostic test
How likely it is that my patient has disease x
Generated from sensitivity and specificity
As a rule of thumb, you want:
A positive likelihood ratio of 10 or more (to rule in)
A negative likelihood ratio of 0.1 or less (to rule out)
Calculation of likelihood ratios:
LR+ = sensitivity / (1 - specificity)
LR- = (1 - sensitivity) / specificity
How to convert pre- to post-test probabilities (see the sketch after this list)
Check out MedCalc or the Diagnosis App (no affiliation)
Or use the Fagan nomogram (below)
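If you want the arithmetic itself, here is a minimal Python sketch; it does what the Fagan nomogram does graphically (convert probability to odds, multiply by the likelihood ratio, convert back). The function names and the 90%/95%/30% values are hypothetical, chosen purely for illustration:

```python
def positive_lr(sens, spec):
    """LR+ = sensitivity / (1 - specificity)."""
    return sens / (1 - spec)

def negative_lr(sens, spec):
    """LR- = (1 - sensitivity) / specificity."""
    return (1 - sens) / spec

def posttest_prob(pretest_prob, lr):
    """Probability -> odds, multiply by the LR, convert back to probability.
    This is the arithmetic the Fagan nomogram performs graphically."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical test (90% sensitive, 95% specific) and a 30% pre-test probability
lr_pos = positive_lr(0.90, 0.95)  # ~18   (10 or more: useful for ruling in)
lr_neg = negative_lr(0.90, 0.95)  # ~0.11 (close to 0.1: useful for ruling out)
print(f"{posttest_prob(0.30, lr_pos):.2f}")  # ~0.89 after a positive result
print(f"{posttest_prob(0.30, lr_neg):.2f}")  # ~0.04 after a negative result
```

Note how a single result moves a 30% pre-test probability to roughly 89% (positive) or 4% (negative) - exactly the rule-in/rule-out movement we want from a good test.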
Questions We Should All be Asking Ourselves
Will this change my management?
Was the population used to study a given diagnostic test generalizable to my patient population?
Will the patient be better off as a result of the test?