When we analyze data from an epidemiological study, we usually build a statistical model with the aim of describing what has happened in our study. To do so, we make assumptions and often intentionally ignore differences between individuals or subgroups, so that we can estimate an average association between the exposure and the outcome that applies to the entire population. But sometimes, after controlling for confounding and bias, there is still a third variable whose impact on the association between exposure and outcome is so important that it cannot and should not be ignored. This is called effect modification.

Imagine you are conducting a randomised clinical trial that aims to test the effectiveness of a new antibiotic against pneumonia. Some of the patients receive the new antibiotic, and the rest are given the older drug that is widely used. You follow up all the patients, and there are two possible outcomes: a patient can either recover or die. When you analyze data from the entire sample, you find that the odds ratio of recovery for those exposed to the new drug compared to those exposed to the old drug is 1.5, meaning the odds of recovery are 50 percent higher among those taking the new antibiotic than among the controls. This is an important result for the trial, and if you have conducted your RCT properly, you don't need to worry about confounding.

But before you publish your results, one of your colleagues decides to stratify the data by sex and notices that the odds ratio is 1.1 for men and 1.9 for women. Men and women do not differ in terms of age, comorbidities, or other confounding factors, and after careful consideration, your team decides that bias cannot explain this difference. So, what's happening? Well, sometimes a drug can be more effective in women than in men, or vice versa. In other words, sex modifies the association between the drug, your exposure, and recovery, your outcome. This is a phenomenon that we call effect modification.
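To see concretely how a pooled estimate can mask stratum-specific effects, here is a minimal sketch in Python. The 2x2 counts are hypothetical, chosen only so that the odds ratios work out to the values quoted above (1.5 overall, 1.1 in men, 1.9 in women); they are not from any real trial.

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = recovered on new drug, b = died on new drug,
    c = recovered on old drug, d = died on old drug."""
    return (a * d) / (b * c)

# Hypothetical counts chosen to reproduce the transcript's estimates.
men    = (55, 50, 50, 50)                           # OR = 1.1
women  = (95, 50, 50, 50)                           # OR = 1.9
pooled = tuple(m + w for m, w in zip(men, women))   # (150, 100, 100, 100)

print(odds_ratio(*pooled))  # 1.5 -- hides the difference between the sexes
print(odds_ratio(*men))     # 1.1
print(odds_ratio(*women))   # 1.9
```

The pooled table gives a single "average" odds ratio of 1.5, yet neither stratum actually experiences that effect, which is exactly why the stratified estimates are the ones worth reporting.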
Making the definition more general, we say that effect modification exists when the strength of the association varies across the levels of a third variable. In such cases, reporting the overall estimate would not be helpful at all, because it would not reflect what actually happened in either sex. Should you then find a way to control for effect modification and avoid this problem? Definitely not. Unlike confounding, effect modification is a naturally occurring phenomenon; it is not a problem of your study. You should have no intention of controlling for it, but the way you report your results should take it into account. In the case of the trial with the new antibiotic, you simply need to present results stratified by sex. You might need one more table in your paper, but this will allow you to accurately report your findings for both men and women. In general, when effect modification is detected, you must conduct a stratified analysis.

In the example above, I ignored uncertainty: you probably noticed that I gave the estimates without their confidence intervals. In real life, uncertainty cannot be ignored, and this raises one key question: how can we be certain that the stratum-specific estimates are truly different from one another? There are statistical methods that can help us identify effect modification, such as the Breslow-Day test, Cochran's Q test, and including interaction terms in regression models. Regression models are very frequently used, and the term interaction is often treated as equivalent to effect modification. The term synergism means that the effect modifier potentiates the effect of the exposure, while antagonism means that the effect modifier diminishes it. Effect modification is an important concept in epidemiology because it is relevant to many associations in nature, but it is also one that confuses a lot of people.
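To give the interaction-term idea some shape: in a logistic regression with an exposure-by-sex interaction, the interaction coefficient is the difference in log odds ratios between the strata, or equivalently, on the multiplicative scale, the ratio of the two odds ratios. A hand-computed sketch, using hypothetical 2x2 counts chosen to match the stratum-specific estimates quoted earlier:

```python
import math

def log_or(a, b, c, d):
    """Log odds ratio for a 2x2 table (recovered/died, new vs old drug)."""
    return math.log((a * d) / (b * c))

# Hypothetical counts consistent with the estimates in the example.
lor_men   = log_or(55, 50, 50, 50)   # log(1.1)
lor_women = log_or(95, 50, 50, 50)   # log(1.9)

# In a saturated logistic model logit(p) = b0 + b1*drug + b2*sex + b3*drug*sex,
# the fitted interaction coefficient b3 equals this difference in log ORs:
b3 = lor_women - lor_men
print(round(math.exp(b3), 2))  # ratio of odds ratios, 1.9 / 1.1, approx 1.73
```

A b3 near zero (ratio of odds ratios near 1) would suggest no effect modification on the multiplicative scale; in practice you would judge this against the coefficient's confidence interval or a formal test rather than the point estimate alone.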
I think it's because we're so used to trying to eliminate bias and confounding that we find it hard to accept that this is a natural phenomenon we simply need to describe.