[JC SERIES] Episode 3: What's the Harm?
This episode is all about HARM:
Does a particular exposure to a particular variable cause harm in a given patient population?
This is very similar to our Therapy question (does exposure to a particular variable improve patient outcomes?). However, in this case, we are NOT randomizing our study population. We are OBSERVING.
Find patients exposed to a given variable, watch who is harmed and who isn’t - the basic premise behind an observational study.
Randomization is a more robust study technique; however, it isn’t always feasible. There are four main reasons an observational study may be used instead of an RCT:
- It may be unethical to randomize patients to exposures known to be potentially harmful
- Observational studies are better at detecting rare and serious adverse effects relative to RCTs
- Longer duration of follow-up, allowing us to follow patients for years or decades after a given exposure
- Sometimes an RCT simply isn’t available, so an observational study is all we have (observational studies are easier to perform and thus more readily available)
There are four main observational study designs (the four C’s):
- Cohort studies
- Can be prospective or retrospective
- The investigator selects a cohort of people - some who have been exposed and some who haven’t.
- The cohort is followed for a period of time and outcomes are monitored
- Prospective cohort advantage: power to control how patients are monitored and followed
- Prospective cohort disadvantage: labor intensive, with a long time to obtain data (like the Harvard Study of Adult Development)
- Retrospective cohort advantage: studies are much easier to conduct
- Retrospective cohort disadvantage: little to no control over how the data were collected (or how relevant they are to your clinical question)
- Case Control Studies
- Always retrospective
- Participants split up into two groups:
- Cases: those with a given outcome
- Controls: those without a given outcome
- Investigators then look back to determine whether the exposure occurred in each group, and compare exposure rates between cases and controls
- Advantage: relatively easy to perform
- Disadvantage: limited control over data collection
- Cross-sectional studies
- Participants split up into ‘exposed’ and ‘unexposed’, as in a cohort study
- However, patients are NOT followed over time. The study is a snapshot, asking: ‘do you have the outcome RIGHT NOW?’
- Case Series/Case Reports
- This is just a description of what happened to a group of patients (series) or a single patient (report)
- No comparison available - so very difficult to say whether or not exposure affected outcome
Sources of bias common in observational research include:
Recall bias: patients cannot remember information accurately. One example of this is the False Memories demonstration.
Surveillance bias: the harder you look, the more you find. Certain patients may be worked up more extensively than others, introducing bias into your data.
Confounding: other variables may be influencing your data that you aren’t considering (e.g., perhaps blueberries reduce the risk of Alzheimer’s dementia, or maybe the association is confounded by the healthy lifestyle that would encourage someone to eat blueberries).
Correlation vs. causation: just because two variables are correlated does NOT mean one causes the other.
Stats in Sixty Seconds or Less (SiSSoL) Topics
- Regression analysis is a form of statistical modeling that aims to estimate the relationship between variables
- Specifically, the relationship between a dependent (outcome) and an independent (exposure) variable
- These are generated by plotting the independent and dependent variables on the x and y axes, respectively, and fitting a ‘line of best fit’ (usually linear, though it can be nonlinear)
- Regression analysis is really all about predicting relationships. In other words, if my independent variable is x, what can I expect my dependent variable to be? What about 2x? 3x? and so on
- Remember that we have to be cautious not to immediately assume causation
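To make the prediction idea concrete, here is a minimal sketch of fitting a line of best fit by ordinary least squares. All the numbers below are invented for illustration:

```python
def fit_line(xs, ys):
    """Return the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: independent variable (exposure level) vs
# dependent variable (outcome measure)
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]
slope, intercept = fit_line(xs, ys)

# "If my independent variable is x, what can I expect my dependent
# variable to be?" -- plug a new x into the fitted line:
predicted = slope * 6 + intercept
```

The fitted line only describes an association; as noted above, it says nothing by itself about whether x causes y.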
- Null hypothesis: the assumption that there is NO difference between our two study groups.
- When conducting research, it is standard to assume the null hypothesis is true until the data proves you wrong
- The P-value represents the probability of obtaining a result at least as extreme as the one observed in our data (assuming the null hypothesis is true)
- For a 2x2 table of outcomes, it can be calculated via the Fisher exact test (one of many available tests)
- We typically accept a p value less than 0.05 or 5% as ‘statistically significant’
- Note that a p-value is NOT telling us whether our data is true or false. It’s just a tool – the entirety of the study, including design, plausibility of the clinical question and limitations of the study, must be considered
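Under the hood, the Fisher exact test sums hypergeometric probabilities over every 2x2 table with the same row and column totals. A stdlib-only sketch (the event counts below are made up for illustration):

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    row1, row2 = a + b, c + d
    col1, n = a + c, a + b + c + d

    def table_prob(x):
        # Hypergeometric probability of x events among exposed, margins fixed
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    observed = table_prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum the probabilities of all tables as extreme or more extreme
    return sum(p for x in range(lo, hi + 1)
               if (p := table_prob(x)) <= observed + 1e-12)

# Hypothetical harm data: 8/10 exposed vs 2/10 unexposed had the outcome
p = fisher_exact_p(8, 2, 2, 8)  # ~0.023, below the usual 0.05 threshold
```

Even with p < 0.05 here, the caution above still applies: the p-value is one tool, not a verdict on the study.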
- The fragility index and the p-value go hand in hand.
- The fragility index is the number of patients who would have to change from event to nonevent (or vice versa) to make a study result statistically insignificant
- It’s derived by moving patients over from the nonevent group to the event group (again, or vice versa) until the p-value exceeds 0.05
- The number of patients moved is the fragility index
- The smaller the fragility index, the more fragile a trial’s outcome
- Much of our published literature in critical care medicine is based on statistically fragile data
- Check out PulmCrit’s take on the fragility index
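The derivation described above can be sketched directly: recompute the Fisher exact p-value while flipping patients one at a time until significance is lost. The trial counts here are hypothetical:

```python
from math import comb

def fisher_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n, r1, c1 = a + b + c + d, a + b, a + c

    def prob(x):
        # Hypergeometric probability of x events among exposed, margins fixed
        return comb(r1, x) * comb(n - r1, c1 - x) / comb(n, c1)

    obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(p for x in range(lo, hi + 1) if (p := prob(x)) <= obs + 1e-12)

def fragility_index(a, b, c, d):
    """Count how many control-arm patients must change from nonevent to
    event before the result stops being significant at p < 0.05."""
    moved = 0
    while fisher_p(a, b, c, d) < 0.05 and d > 0:
        c, d = c + 1, d - 1  # one nonevent becomes an event
        moved += 1
    return moved

# Hypothetical trial: 8/10 events with exposure vs 2/10 without (p ~ 0.023)
fi = fragility_index(8, 2, 2, 8)
```

In this toy example, moving a single patient is enough to push the p-value above 0.05: a "significant" result resting on one patient's outcome.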
Until next time!