Sean Harrison: Blog

My reaction to: The campaign to make alcohol ‘the new tobacco’

I read this article by Christopher Snowdon. I have some views on it, and since my reaction to it was 100 tweets long, maybe a blog post would be handy.

Also, Snowdon said I should get a blog.

Summary

The article cherry-picks data, conflates observational epidemiology with causal inference, and misunderstands basic statistics.

Preface

I don’t care whether people drink or not. I’d prefer it if people drank in moderation, but I’m certainly not an advocate for teetotalism.

I do, however, think people should be informed of the risks of anything they do, if they want to be.

I think the article is poor, but people should feel happy to drink if they want to. Based on the available evidence, though, I wouldn’t say alcohol helps your heart, and even moderate drinking may carry some risk.

But that’s the same for cake.

The campaign to make alcohol ‘the new tobacco’

Let’s delve into the article.

The piece starts out by saying that there is a drive to treat drinkers like smokers. That seems to conflate saying that alcohol can be harmful with saying people shouldn’t drink alcohol.

They aren’t the same.

I also don’t know which organisation runs this campaign, but calling people who say alcohol is harmful “anti-alcohol activists” is a trick to make those same people seem like “others” or “them”. It also makes them sound like fanatics, trying to stop “you” drinking “your” alcohol.

But that’s not why I’m writing this.

It’s the “health benefits of moderate drinking”, stated as if it were indisputable fact. As if it’s known that alcohol causes health benefits.

The health benefits of moderate drinking

Causal statements like this need rigorous proof. They need hard evidence. If moderate alcohol intake is associated with health benefits, that’s one thing. But saying it causes those health benefits is quite another.

Even if alcohol did cause some benefits, something can have both positive and negative effects. It’s not absurd to tell people about the dangers of something even if it could have benefits; that’s why medications come with lists of side-effects.

And calling something “statistical chicanery” is another tactic to make it seem like people saying alcohol is harmful are doing so by cheating, or through deception.

The link to “decades of evidence” is to a 2004 meta-analysis, showing

Strong trends in risk were observed for cancers of the oral cavity, esophagus and larynx, hypertension, liver cirrhosis, chronic pancreatitis, and injuries and violence.

Which sounds pretty bad to me.

I’m guessing that if this is the right link, then it was meant for you to observe that there is a J-shaped relationship between alcohol intake and coronary heart disease.

That is, low and high levels of drinking are bad for your heart, but some is good. This sounds good – alcohol protects your heart – and it is common advice to hear from loads of people, doctors included.

The problem is that the evidence for this assertion comes from observational studies – the association is NOT causal.

This is all about causality.

We cannot say that drinking alcohol protects your heart, only that if you drink moderately, you are less likely to have heart problems. They sound the same, but they aren’t. The first is causation, the second is correlation, and if there’s one thing statisticians love to say, it’s “correlation is not causation”.

Studies measuring alcohol intake and heart problems are mostly either cross-sectional or longitudinal – they either look at people at one point in time, or follow them up for some time.

These are observational studies, they (probably) don’t change people’s drinking behaviour. Of course, people might change their behaviour a little if they know they have to fill in a questionnaire about their drinking habits, but we kind of have to ignore that for now.

Anyway, observational studies do not allow you to make causal statements like “drinking is good for your heart”.

Why not?

It comes down to bias and confounding, the same things I discussed on Twitter when those researchers made their claims.

There are ways to account for this when comparing drinkers with non-drinkers, but they rely on knowing every possible way people are different.

Imagine the reasons why someone doesn’t drink very much. Off the top of my head, they:

  1. might not like the taste, or the feeling of being drunk
  2. might be trying to be healthy, lose weight, or save money
  3. might be pregnant, or taking medication that doesn’t mix with alcohol

Now imagine the reasons why someone doesn’t drink at all. The above holds true, but you can add in:

  1. religious or cultural reasons
  2. a former drinking problem
  3. an illness that stops them drinking (the classic “sick quitter” problem)

A confounder is something that affects both the exposure (alcohol intake) and the outcome (health). If you want to compare drinkers and non-drinkers, you need to account for everything that might affect someone’s drinking behaviour and their health. This includes many of the things I listed above.

But this is nigh-on impossible, as behaviours are governed by so many things. You can adjust out *some* of the confounding, but you can’t prove you’ve gotten ALL the confounding. You can measure people’s health, but you won’t capture everything that contributes to how healthy a person is. You can ask people about their behaviour, but there’s no way you’ll capture everything from a person’s life.
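
To make the confounding problem concrete, here’s a minimal simulation (my own sketch, not from any study): a single unmeasured trait drives both moderate drinking and heart health, drinking itself does nothing, yet moderate drinkers still look healthier.

```python
# A minimal sketch (mine, not from any study) of confounding:
# one unmeasured trait drives both moderate drinking and heart health,
# while drinking itself has ZERO causal effect on the outcome.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Unmeasured confounder, e.g. general health-consciousness.
health_conscious = rng.normal(size=n)

# Moderate drinking depends on the confounder plus noise.
moderate_drinker = (health_conscious + rng.normal(size=n)) > 0

# Heart health depends ONLY on the confounder and noise -- not on drinking.
heart_health = health_conscious + rng.normal(size=n)

print(f"Mean heart health, moderate drinkers: "
      f"{heart_health[moderate_drinker].mean():.3f}")
print(f"Mean heart health, everyone else:     "
      f"{heart_health[~moderate_drinker].mean():.3f}")
# Moderate drinkers look healthier, yet drinking did nothing:
# the association is pure confounding.
```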

If you see, observationally, that moderate drinking is associated with fewer heart problems, what does that imply?

As I’ve argued before, you really should have mechanisms to posit causality, i.e. if you say X causes Y, you need to have an idea of how X causes Y (or evidence from trials). This holds true here too.

Suppose alcohol protects your heart. How?

How does alcohol protect your heart?

Fortunately, people have postulated mechanisms, and we can assess them: one possible mechanism is that alcohol increases HDL cholesterol (the good one), which improves heart health.

We can’t assign a direction to that mechanism using observational studies: people who live healthily might have good HDL levels anyway, and may drink moderately because they can.

To work this out (and to assign causality more generally), you can use trials. Ideally randomised controlled trials, since they’re so good. The ideal trial, the one where we wouldn’t need mechanisms at all, is one where we randomise people to drink certain amounts (none, a little, some, a lot) over the course of their life, make sure they stick to that, then see what happens to them.

Since that would never work, the next best thing is to test the proposed mechanisms, because if alcohol increases HDL cholesterol in the short-term (i.e. after a few weeks), then we’re probably on safer territory. We’d then have to prove that higher HDL cholesterol causes better heart health, but one thing at a time.

Well, a meta-analysis of trials was done to look at exactly that, fairly recently too (2011):

Effect of alcohol consumption on biological markers associated with risk of coronary heart disease: systematic review and meta-analysis of interventional studies

In total, there were 63 trials included, looking at a few markers of heart health, including HDL cholesterol. They found that alcohol increased HDL a little bit.

But there were problems.

The trials were a mixed bag, but having looked at a few, it seems many randomised small numbers of people to drink either an alcoholic or a non-alcoholic drink (the good ones compared alcohol-free wine with normal wine), measuring HDL before and after the trial.

The problem with small trials is that they can have quite variable results, because there is a lot of imprecision when you don’t have enough people. You do a trial with 60 people and get a result. You repeat it with new people, and get an entirely different result.

That’s one reason why we do meta-analyses in the first place – one study rarely can tell you the whole story, but when you combine loads of studies, you get closer to the truth.
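
As a quick illustration of both points (my own sketch, with invented numbers): simulate a handful of small trials of a modest effect on HDL and watch the individual estimates bounce around, while the pooled estimate homes in on the truth.

```python
# A sketch with invented numbers: 20 small trials of a modest effect on
# HDL scatter widely, but an inverse-variance pooled estimate homes in.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.05    # assumed true HDL change, invented for illustration
sd = 0.3              # assumed person-to-person variability
n_per_arm = 30        # a "small" trial

estimates, variances = [], []
for _ in range(20):
    treated = rng.normal(true_effect, sd, n_per_arm)
    control = rng.normal(0.0, sd, n_per_arm)
    estimates.append(treated.mean() - control.mean())
    variances.append(treated.var(ddof=1) / n_per_arm
                     + control.var(ddof=1) / n_per_arm)

print(f"Single-trial estimates range: "
      f"{min(estimates):.3f} to {max(estimates):.3f}")

# Crude fixed-effect meta-analysis: weight each trial by 1/variance.
w = 1 / np.array(variances)
pooled = (w * np.array(estimates)).sum() / w.sum()
print(f"Pooled estimate: {pooled:.3f} (true effect: {true_effect})")
```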

But academic journals exist, and they tend to publish studies that are interesting, i.e. ones that show a “statistically significant” effect of something, in this case alcohol on HDL. This has three effects.

  1. Studies with null results get published less frequently
  2. People might choose not to write up a study that didn’t show anything, because it might not get published
  3. You might want to redo the same trial with a new batch of people if you get poor results

Repeat a study enough, you’ll eventually get the result you want. Since lots of people want alcohol to be beneficial to the heart, and because these trials are pretty inexpensive, there is a good chance that there are missing studies that were never published.

I’m aware this sounds like I’m reaching, and I could never prove that these things happened. But I can show, with relative certainty, that there are missing studies, ones that showed either that alcohol didn’t affect HDL or reduced it.

In meta-analyses, we tend to produce funnel plots, which show whether studies fall symmetrically around the average effect, i.e. the average effect of alcohol on HDL. Since studies should give results that fall basically randomly around the true effect, they should be symmetrical on a funnel plot.

If some studies have NOT been published, i.e. ones falling in the “no effect” area, or those without statistical significance, then you see asymmetry.
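
Here’s a minimal simulation of that filtering process (my own sketch, not the paper’s data): the true effect is exactly zero, but only positive, “significant” results get published, so the surviving studies are asymmetric and the naive average is inflated.

```python
# A minimal simulation (mine, not the paper's data) of publication bias.
# The true effect is exactly zero, but only positive "significant"
# results survive -- and the survivors form an asymmetric funnel.
import numpy as np

rng = np.random.default_rng(1)
pub_effects, pub_ses = [], []

for _ in range(500):
    n = int(rng.integers(20, 400))     # trials of wildly varying size
    x = rng.normal(0.0, 1.0, n)        # outcome data; true effect is zero
    effect = x.mean()
    se = x.std(ddof=1) / np.sqrt(n)
    if effect > 1.96 * se:             # only positive, "significant" results
        pub_effects.append(effect)
        pub_ses.append(se)

print(f"{len(pub_effects)} of 500 trials 'published'")
print(f"Average published effect: {np.mean(pub_effects):.3f} (truth: 0)")

# The asymmetry: among published trials, the bigger the SE (the smaller
# the trial), the bigger the effect, because significance demanded it.
print(f"SE vs effect correlation among published: "
      f"{np.corrcoef(pub_ses, pub_effects)[0, 1]:.2f}")
```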

We don’t know WHY these studies are missing, just that something isn’t right, and we should treat the average effect with caution. The link I gave above shows a nice symmetrical funnel plot, and an asymmetrical one.

And here is the funnel plot I made from the meta-analysis data.

Note: I had to make this plot myself, the authors did not publish it – they stated in the paper:

No asymmetry was found on visual inspection of the funnel plot for each biomarker, suggesting that significant publication bias was unlikely.

See how the effect gets smaller (more left) as the “s.e. of md” goes down? That’s the standard error of the mean difference – the smaller it is, the more precise the result is, the more confident we are in the result. More people = smaller standard error.

With smaller numbers of people, the standard error goes up and the results become more variable. One study may find a huge effect, the next a tiny one. The fact that ALL the small studies found a comparatively large effect is extremely suspicious.
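
For reference, the standard error of a mean is the standard deviation divided by √n, so you need four times the people to halve it. A quick check (with an invented SD):

```python
# The standard error of a mean is sd / sqrt(n): quadrupling the sample
# size only halves the SE. (SD of 0.3 invented for illustration.)
import math

sd = 0.3
for n in (15, 60, 240, 960):
    print(f"n = {n:4d}  ->  SE = {sd / math.sqrt(n):.4f}")
```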

So yeah, there was asymmetry in the funnel plot for the effect of alcohol on HDL cholesterol. That asymmetry says to me that there are missing studies showing little or no effect, and that the true effect of alcohol on HDL is thus smaller than reported.

To be honest, there’s probably no effect, or if there is, it’s tiny.

To be fair, I should say most of the studies had a short follow-up time. It’s entirely possible longer studies would have found a larger effect. The point is, we don’t know.

There are likely other proposed mechanisms, but I think the HDL mechanism is the one commonly thought of as the big one:

The best-known effect of alcohol is a small increase in HDL cholesterol

So, I don’t really see the evidence as being particularly in support of alcohol protecting the heart. The observational evidence is confounded and possibly has reverse causation. The trial evidence looks to be biased. What about the genetic evidence?

Genetic evidence

We use genetics to look at things that are difficult to test observationally or through trials. We do this because it can (and should) be unconfounded and is not affected by reverse causation. This is true when we can show how and why the genetics works.

For proteins, we’re on pretty solid ground. A change in gene X causes a change in protein Y. But for behaviours in general, we’re on much shakier ground.

There is one gene, however, that, if slightly faulty, produces a protein that doesn’t break down alcohol properly. This makes a good genetic marker, since people without a working version of that protein get hangovers very quickly after drinking alcohol, so tend not to drink.
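
This is the logic of Mendelian randomisation. A minimal sketch (my own, with invented numbers) of why it works: the variant is effectively randomly assigned at conception, so comparing outcomes by genotype dodges the confounding that wrecks the naive comparison.

```python
# A minimal sketch (mine, invented numbers) of why a genetic variant
# works as an unconfounded proxy -- the logic of Mendelian randomisation.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Genotype is effectively randomly assigned at conception, so it is
# independent of lifestyle confounders.
variant = rng.random(n) < 0.3            # carriers break down alcohol poorly
confounder = rng.normal(size=n)          # e.g. health-consciousness

# Carriers drink much less; the confounder also shifts drinking.
alcohol = 10 - 6 * variant + 2 * confounder + rng.normal(size=n)

true_causal = 0.5  # assumed effect of one unit of alcohol on the outcome
outcome = true_causal * alcohol - 3 * confounder + rng.normal(size=n)

# Naive observational slope is biased by the confounder...
naive = np.cov(alcohol, outcome)[0, 1] / np.var(alcohol, ddof=1)

# ...but the genetic (Wald ratio) estimate recovers the truth:
# outcome difference by genotype / alcohol difference by genotype.
wald = ((outcome[variant].mean() - outcome[~variant].mean())
        / (alcohol[variant].mean() - alcohol[~variant].mean()))

print(f"True causal effect:   {true_causal}")
print(f"Naive observational:  {naive:.3f} (confounded)")
print(f"Genetic (Wald ratio): {wald:.3f}")
```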

One study found:

Individuals with a genetic variant associated with non-drinking and lower alcohol consumption had a more favourable cardiovascular profile and a reduced risk of coronary heart disease than those without the genetic variant.

Another study (in an Asian population) found:

robust evidence that alcohol consumption adversely affects several cardiovascular disease risk factors, including blood pressure, waist to hip ratio, fasting blood glucose and triglyceride levels. Alcohol also increases HDL cholesterol and lowers LDL cholesterol.

So alcohol may well cause higher HDL cholesterol levels.

Note that in genetic studies, you’re looking at lifetime exposure to something, in this case alcohol. So as above, a trial looking at the long-term intake of alcohol may find it raises HDL cholesterol.

It’s just, currently, the trial data doesn’t support this.

Back to the article

Halfway now, and I hope I have shown that the evidence that alcohol protects the heart is shaky at best. This is kind of important for later. I don’t claim to have done a systematic or thorough search, though, so let me know if there is anything big I’ve missed!

Let’s return to the article.

I got side-tracked by the article’s reference to the paper that said alcohol increases the risk of loads of bad stuff, and has a J-shaped association with heart disease.

The news coverage is an example of why I mostly dislike research articles being converted into media articles. It is *exceedingly* difficult to convey the nuances of epidemiological research in 850 words to a lay audience. It just isn’t possible to relay all the necessary information that was used to inform the overall conclusion of the Global Burden of Disease study.

David Spiegelhalter’s flippant remarks at the end probably don’t help:

Yet Prof David Spiegelhalter, Winton Professor for the Public Understanding of Risk at the University of Cambridge, sounded a note of caution about the findings.

“Given the pleasure presumably associated with moderate drinking, claiming there is no ‘safe’ level does not seem an argument for abstention,” he said.

“There is no safe level of driving, but the government does not recommend that people avoid driving.

“Come to think of it, there is no safe level of living, but nobody would recommend abstention.”

The study’s authors state in the discussion that their

results point to a need to revisit alcohol control policies and health programmes, and to consider recommendations for abstention

Spiegelhalter seizes on the use of the word abstention to make the study authors sound more unreasonable than they actually are. I don’t think this is particularly helpful when talking about, well, anything. If you can make people who disagree with you look unreasonable, it makes for an easier argument, but it doesn’t make you right and them wrong.


How many cigarettes are there in a bottle of wine?

The study in question attempted to express the additional risk of cancers from drinking in terms of smoking, because the public in general understands that smoking is bad. I don’t have an opinion one way or the other on this method of communicating risk.

I’m quite happy to state I don’t know enough about communication of risk.

What I do know is that communicating risk is difficult, as few people are trained in statistics. Even those who are aren’t necessarily able to convert an abstract risk into their daily reality. So maybe the paper is useful, maybe not. I don’t think their research question is brilliant, but my opinion is pretty uninformed:

In essence, we aim to answer the question: ‘Purely in terms of cancer risk – how many cigarettes are there in a bottle of wine’?

I don’t think it’s “shameless” (why should the authors feel shame?), and it isn’t a “deliberate conflation” of smoking and drinking. It’s expressing the risk of one behaviour as the similar risk you get from doing a different behaviour.

The article’s theory is that the authors wrote the paper for headlines. (It’s worth stating here that saying “yeah, right” in an article makes you sound like a child.)

Maybe they were targeting the media with their paper. In general, researchers pretty much all want their work to be noticed, and perhaps even acted on. That’s the whole point of research. It’s not a bad thing to want your work to be useful.

I dislike overstated claims, making work seem more important than it is, and gunning for the media at the expense of good work. But equally, researchers need their work to be seen. We’re rated on it now. If our work is shown to have “impact”, then it’s classified better, so we’re classified better, so our universities are classified better. I dislike this (not least because it means methods work can be ignored, since it may take years to be appreciated and used widely), but there we go.

Questioning the paper’s academic merit is fine though, so what are the criticisms of the paper? There’s just one: that the authors chose a level of smoking that has not been extensively measured as the comparator.

The article says they used 35 cigarettes per week and “extrapolated” to 10 cigarettes per week, calling this “having a guess”.

It’s not extrapolation, and it’s not a guess.

The authors looked at previous studies, usually meta-analyses, to see what the extra risk on several cancers of smoking 35 cigarettes a week was, adjusted for alcohol intake. They made some assumptions in calculating the risk of 10 cigarettes a week: they assumed each cigarette was as bad as the next, so that each of the 35 cigarettes contributed equally to the extra risk of cancer.

This assumes a linear association between the exposure (smoking) and outcome (cancer), an incredibly common assumption made by almost all researchers. And it is actually interpolation, not extrapolation, since the data point they wanted lies between two they had. Nor is it a guess: it’s based on evidence, with appropriate assumptions.
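
A worked version of that interpolation, with an invented relative risk (the paper’s actual figures aren’t reproduced here):

```python
# A worked version of the interpolation, with an INVENTED relative risk
# (not the paper's actual figure).
rr_35 = 1.70                        # hypothetical RR at 35 cigarettes/week
excess_per_cig = (rr_35 - 1) / 35   # linearity: each cigarette equally bad
rr_10 = 1 + 10 * excess_per_cig
print(f"Excess risk per cigarette: {excess_per_cig:.4f}")
print(f"Implied RR at 10/week:     {rr_10:.3f}")
# 10/week lies between 0 and 35 -- two points we have -- so this is
# interpolation, not extrapolation.
```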

The article says there is a single study estimating risks at low levels of cigarette smoking that should have been used instead. However, that study didn’t adjust for drinking, so it was useless here. For the comparison to be meaningful, the authors had to work out the extra cancer risk from smoking independent of any effect of alcohol, since alcohol and smoking are correlated.

Finally, the study didn’t just report 10 cigarettes a week. It also reported 35 cigarettes a week, for which no guesses or assumptions were made (beyond those in the meta-analyses). So I think the criticism of the study was unfounded. The article felt otherwise.

OK, but all it was doing was communicating risk. If people haven’t thought about what smoking 10 cigarettes a week means, then maybe it didn’t do that well, but how would anyone know? Has a study been done asking people?

This isn’t a war on alcohol, or a conspiracy to link alcohol and smoking so people stop drinking. It’s not a crusade by people that hate alcohol. It was trying to communicate the risk of alcohol to people who might not know how to deal with the statistics presented in dense academic papers.

Decades of epidemiological studies

The “decades of epidemiological studies” referenced is actually a single paper, from 2018, concluding:

The study supports a J-shaped association between alcohol and mortality in older adults, which remains after adjustment for cancer risk. The results indicate that intakes below 1 drink per day were associated with the lowest risk of death.

The J-shaped association could easily be confounding – teetotalers are different to drinkers in many ways (see above). But that’s not really “decades of studies” anyway, and the conclusion was that drinking very little or nothing was best.

The second reference is to a systematic review of observational studies. This is relevant to the point about decades of research, but not conclusive given they are observational studies.

The claim that the positive association between alcohol intake and heart health has “been tested and retested dozens, if not hundreds, of times by researchers all over the world and has always come up smiling” is facetious.

It betrays a lack of understanding of causality, of publication bias, of confounding and of reverse causation.

Basically, a lack of understanding about the very studies the article is leveraging to support its argument. It shows ignorance of how to make causal claims, because the entire premise of the argument has been built on observational data.

This next part is inflammatory and wrong.

It certainly wouldn’t put you in the “Flat Earth” territory to believe that alcohol might not be good for you, unless you took as gospel that observational evidence was causal.

This reference is to observational studies, not “biological experiments”. I don’t know which biological experiments are meant here – maybe the ones I talked about earlier and dismissed? Also, for many things the best observational evidence we have is probably genetic, because the chance of confounding is somewhat less. And the genetic studies say any level of alcohol carries a risk.

There are certainly people who have agendas, people who want everyone to stop drinking. I do not doubt this. But who in the “‘public health’ lobby” is the article referencing? What claims have they made? Without references, it’s a pointless argument.

Also, public health people would like it if everyone stopped smoking and drinking, because public health would improve. That is, on average, people would be healthier – even if alcohol helps the heart, more people die of alcohol-related causes than would be saved by any protective effect of alcohol.

But this doesn’t mean public health people call for teetotalism.

To my knowledge, they generally advocate reducing drinking and in general, moderation. Portraying them as fanatics who “deny, doubt and dismiss” is ludicrous.


Meta-analysis of cohort studies

Prospective studies are good because they can rule out reverse causation: heart problems can’t cause you to reduce your alcohol intake if everyone starts with a healthy heart. But they do not address confounding. They are just as vulnerable to confounding as cross-sectional studies.

So prospective studies might be the best “observational” evidence (not “epidemiological” evidence, given we deal with trials too), but only if you want to discount genetics. And “best” doesn’t mean “correct”.

Statistical significance in individual studies is not something I have ever cared about in a meta-analysis, because it isn’t relevant. At all. In fact, if your small studies are all significant and your big studies aren’t, it’s probably because you have publication bias: small studies get published because they had “interesting” results, big ones because they were good.


The article then compares meta-analyses of 31 and 25 studies with one of just 2 studies. Given the large variation in the effects seen across the studies in the bigger meta-analyses, I wouldn’t trust a result based on just 2 studies. I did try to find those 2 studies to see whether they were big or good, but the original meta-analysis paper doesn’t make it easy to work out which two they are, so I gave up.

This part is a fundamental misunderstanding of statistics. Saying something is “not statistically significantly associated” with an outcome is not the same as saying it is “not associated” with an outcome.

There are plenty of reasons why even large associations may not be statistically significant. In general, it will be because you didn’t study enough people, or for long enough; how the analysis was conducted matters too, as does plain chance. And it takes as much evidence, or more, to show two things aren’t associated as to show they are.
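
A quick simulation of the first reason (my own sketch, invented numbers): a genuinely non-zero effect studied with too few people fails to reach significance most of the time.

```python
# A sketch with invented numbers: a real, modest effect studied with
# too few people is usually "not statistically significant".
import numpy as np

rng = np.random.default_rng(3)
true_effect, sd, n = 0.2, 1.0, 25    # 25 people per arm

significant = 0
reps = 2_000
for _ in range(reps):
    treated = rng.normal(true_effect, sd, n)
    control = rng.normal(0.0, sd, n)
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / n + control.var(ddof=1) / n)
    if abs(diff) > 1.96 * se:
        significant += 1

print(f"Only {significant / reps:.0%} of these underpowered studies reach "
      "significance, despite a genuinely non-zero effect.")
```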

If you start from the assumption that alcohol is good, then yeah, you would need evidence that there are risks from very light drinking. But why start from that premise?

We know that drinking lots is bad, so why assume drinking a bit is good? I can see why, when presented with evidence that moderate alcohol drinking and good heart health are correlated, people might think drinking is good for your heart. But what about every other disease?

In the absence of complete evidence, it would make sense to assume that if lots of alcohol is bad, some alcohol may also be bad. I think it is a bit much to start from the premise that because moderate drinking is correlated with good heart health, small quantities of alcohol are fine or good.

The burden of proof should be on showing that alcohol is fine in any quantity, and then on finding out how much is “reasonably OK” and at what point it becomes “too much”.

And no, again, we don’t know that “very light drinking confers significant health benefits to the heart”, because this is a causal statement and you only have observational evidence. If you drink very lightly, your heart may well be in better shape than people who drink a lot or don’t drink, but that doesn’t mean the drinking caused your heart to be healthy.


Shit in, Shit out

I certainly dismiss this article as quackery with mathematics…

Actually, this is a good point, but it works against the article’s argument. If you put low-quality, biased studies into a meta-analysis, that meta-analysis will also be low-quality and biased. Meta-analysis is not a cure for poor underlying research.

Stated somewhat more prosaically:

shit in, shit out

“Ultra-low relative risks” are relative. Most people won’t be concerned about small risks, but they make a big difference to a population.

Research is often not targeted at individuals, it’s targeted at people who make policies that affect vast numbers of people. A small decrease in risk probably won’t affect any single person in any noticeable way. But it might save hundreds or thousands of people.
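
Some rough, invented numbers to show the scale:

```python
# Rough, invented numbers to show the scale of a "tiny" relative risk.
baseline_risk = 0.01        # assumed 1% baseline risk of some disease
relative_risk = 1.05        # a 5% relative increase -- trivial to one person
population = 50_000_000     # roughly the UK adult population

extra_cases = population * baseline_risk * (relative_risk - 1)
print(f"Extra cases across the population: {extra_cases:,.0f}")  # 25,000
```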

The article is guilty of the same thing. It “clings” to research that shows a beneficial effect of alcohol because it suits the argument. The observational evidence is confounded. It’s biased. The trial evidence is likely biased and wrong.

If all your evidence suffers from the same flaw (confounding, affecting each study roughly the same), then the size of your pile of evidence is completely irrelevant. A lot of wrong answers won’t help you find the right one.

A good example in a different field is survivorship bias when looking at the damage done to planes returning from missions in WW2. Researchers looked at the damage on returning planes, and recommended that damaged areas get reinforced.

Except this would be pointless.

Abraham Wald noted that the planes that returned had survived – the analysts never saw the damage done to the planes that were shot down. If a plane came back with holes, those holes didn’t matter; the areas that were NOT hit were what mattered. It wouldn’t matter how many planes you looked at. You could gather all the evidence that existed, and it would still be wrong, because of bias.

The same is true of observational studies.

You can do a million studies, but if they are all biased the same way, your answer will be wrong.

The article makes the same ignorant point once again, conflating observational research with causal inference while cherry-picking studies. The facile point Snowdon makes about spending time on PubMed to reinforce one’s own views betrays his own flawed approach to the medical literature.

And that’s it for the article!

In summary, the article uses observational data to make causal claims, cherry-picks evidence (while accusing others of doing the same), and seems to misunderstand basic points about statistical significance.