Blog Archives

Should you believe the fitness hype?

I see this over and over again among otherwise very intelligent people: an odd belief that the latest “big” thing in exercise or weight loss will be a magic bullet that suddenly brings them the body they’ve always thought they should have. Zaggora hot pants (burn more calories!), Skechers Shape Ups (reduce cellulite!), green coffee extract (100% natural!) – the list is basically endless. Leaving aside the notion that this will somehow make them happy (for I haven’t the knowledge or skills to even begin to tackle that), why do these bright people fall for it?  I can’t answer that either. It’s potentially very harmful though – this tweet from @nchawkes says it rather well:

People end up spending frightening amounts of time, money and energy on these promises, and even when there’s temporary success (often due to diving into a new regime with a positive attitude, in my totally-un-evidence-based opinion) ultimately there’s stagnation at best, and failure or regression at worst. These things are hugely destructive to body image and overall self-image.

So if I can’t explain the fascination with these things, the least I can do is provide a small extra weapon in the battle against profiteering and misinformation in the fitness world.  (Aside: it’s worth noting that much of the misinformation is spread amongst well-meaning friends, just trying to help one another; this type is just as difficult to address as any other dearly-held belief).

My first pearl of wisdom is hardly novel: anything that seems too good to be true, is. The cold hard truth is that you can’t permanently change your body without permanently changing your diet and lifestyle; they needn’t be massive, life-altering changes, but they must happen. You also can’t permanently change your body by throwing money at it instead of good quality food and exercise (unless we’re talking surgery; that’s pretty permanent).

My second piece of advice is: apply critical thinking. Is there something you’re naturally skeptical about, or distrustful of? Apply that same level of suspicion to diet and lifestyle advice. New device guarantees weight loss in one workout? Great. What’s the mechanism? Does it seem plausible? Is it more likely that it’s just helping you dehydrate slightly, so that you lose a little water as sweat? Never ever forget that water’s heavy; 1kg (2.2lb) per litre, to be precise. Doubt everything.
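If you like to see the sums behind that, here’s a quick back-of-envelope sketch in Python. The sweat volume and the calories-per-kilogram-of-fat figure are commonly quoted approximations of mine, not measurements from any particular product or study.

```python
# Back-of-envelope sketch: "weight" lost as sweat versus genuine fat loss.
# The sweat volume and the kcal-per-kg figure are rough, commonly quoted approximations.
sweat_litres = 1.0            # easy to lose in one hot, sweaty workout
water_kg_per_litre = 1.0      # water weighs 1 kg (2.2 lb) per litre

apparent_loss_kg = sweat_litres * water_kg_per_litre
print(f"Scales say you lost: {apparent_loss_kg:.1f} kg (back as soon as you rehydrate)")

kcal_per_kg_fat = 7700        # commonly quoted approximation for body fat
print(f"Energy deficit needed to lose that as fat instead: ~{apparent_loss_kg * kcal_per_kg_fat:.0f} kcal")
```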

Thirdly, and maybe most importantly (and predictably), demand evidence. Good quality evidence at that. Be ruthless. Be picky. Crucially, don’t accept anecdotes. These are everywhere in weight loss fads, to the point that I feel they’re worthy of a specially-adapted version of the anecdote rules:

  • Did the person gain the advertised benefit, and maintain it?
  • Was the advocated treatment the only one used?
  • If it’s really so good, why aren’t doctors and fitness professionals everywhere advocating it?

I’m hoping to look at some individual claims in more detail, but hopefully this post will at least serve as a cue to get you thinking about the way you look at claims in the weight-loss industry.


Homeopathic Harms Vol 4: OK, there’s SOME evidence

Last time I discussed the problem of missing evidence of harm in homeopathy trials and consequently in systematic reviews.  This time, I’m going to discuss some evidence of harm that we DO have. Sadly, it’s not comforting.

In December 2012, a systematic review of the adverse effects of homeopathy was published in the International Journal of Clinical Practice (aside: for a quick explanation of systematic reviews and adverse effects, take a look at volume 2 in this blog series).  The authors of this review searched five databases of medical literature totalling nearly 50 million published records (though likely with considerable overlap), and found just 38 articles that discussed case reports and case series of adverse events with homeopathy.

It’s worth noting at this point that if systematic reviews are the pinnacle of the evidence pyramid, case reports and case series are somewhere towards the middle or bottom, depending on who you ask.  They’re not ideal, because they’re not rigorous – they rely on someone not only noticing an adverse event and linking it to homeopathy, but taking the time to sit down and write about it and submit it to a journal.  Then of course they’ve got to find a journal willing to publish it.  If any of these steps don’t happen, there’s no published evidence for the rest of us to base our decisions on.  So if our systematic review found 38 published reports, the obvious question is “how many were never recognised, written up, or published?”  We’ll never know the answer to that.  Sadly, in the absence of high quality reports of harm from the published clinical trials, this is the highest level of evidence we have.

Back to the review.  The 38 retrieved reports contained information relating to 1,159 people from all over the world.  Surprisingly, only 17 of the reports related to indirect harms – the results of substituting conventional care with homeopathy – although some of those indirect harms were severe.  Several people were admitted to hospital (including intensive care) due to replacing their conventional medicines with homeopathy, at least one was left with permanent effects, and one person died.

That leaves 1,142 people who suffered *direct* adverse effects as a result of using homeopathy.  This seems rather counter-intuitive, and I’m at a loss to explain many of them given that your average homeopathic remedy contains precisely no active ingredient.  The authors of the review suggest that perhaps allergic reactions or ingestion of toxic metals (like arsenic or mercury) might be partly to blame.  They also suggest that low dilutions of remedies might be a potential source of adverse effects, but point out that the vast majority of these reports were associated with remedies at 12C potency or below.  To be clear, 12C is the dilution factor at which the chance of a remedy containing even one molecule of the original parent substance is effectively zero.
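For anyone who likes to see the arithmetic behind that last sentence, here is a rough sketch. It generously assumes the preparation started out with a full mole of the parent substance before the twelve 1-in-100 dilution steps.

```python
# Rough sketch of the 12C arithmetic. Generously assumes the preparation
# started with a full mole of parent substance before dilution.
AVOGADRO = 6.022e23         # molecules in one mole
dilution_12c = 100 ** 12    # twelve successive 1-in-100 dilutions = a 10^24-fold dilution

expected_molecules = AVOGADRO / dilution_12c
print(f"Expected molecules of parent substance remaining: {expected_molecules:.2f}")  # ~0.60
```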

But whatever the mechanism, it seems clear that the review provides evidence of direct harm being caused by homeopathy.  Some of these harms were reported simply as “mild”, with no other details offered.  Some were potentially very distressing, like dermatitis, hair loss, and migraine.  Some were very serious indeed, including anaphylaxis (life-threatening allergy), acute pancreatitis, cancers, and coma.  Once again the consequences of the effects included hospitalisation, admission to intensive care units, and death.  For a treatment modality generally touted as totally safe, that’s a pretty alarming set of side effects.

So what can we learn from it?  There’s a valid argument to be made that there’s little point conducting more randomised controlled trials of homeopathy, because all of the good quality ones end up showing the same thing: no benefit over placebo.  But where more trials are conducted, we should be demanding that all adverse effects are collected and reported in the same manner as trials of new medicines.  Case series and reports are not proof of causation, but there is a bulk of evidence here that is concerning, and which should be addressed.  The best way to do that is in good quality trials.

In the meantime, is there anything else we can do?  Yes there is – in the UK at least.  The medicines regulator in the UK, the MHRA, runs the Yellow Card Scheme.  This is a mechanism by which anyone can report any side effect they experience after taking a medication.  I would strongly urge anyone who has suffered an adverse event after using homeopathy (or who knows someone who has) to visit www.mhra.gov.uk/yellowcard.  It’s quick and simple, and will help make remedies safer for everyone.  Similar schemes will be coming into effect throughout the EU soon, but if you live elsewhere please check and see if there’s anything similar.  We need all the data we can get!

Homeopathic Harms Vol 2: where’s the evidence?

We often harp on about the evidence for homeopathy working or otherwise, and I’m not going to touch on that here, because it’s been covered beautifully by many more eloquent writers than me.  What you don’t often see though, is comment on the evidence for homeopathy doing harm.  In the last post in this series the lovely @SParkleWildfire touched on medicalisation, an indirect harm that’s very real but tough to quantify; but what about direct harms?  I’m glad you asked…

In conventional medicine, randomised controlled trials are the best kind of study we can do of a drug to see if it works and if it’s safe.  What maybe isn’t mentioned quite so often is that there’s an even *better* form of evidence – the systematic review.  These are produced when someone sits down to do the very tough but remarkably important job of finding every single scrap of evidence they can on a given topic, and pooling it all together to try and get closer to the definitive answer.  The result is a document that represents the best evidence possible for how well a drug (or anything else, for that matter) works, and how safe it is.

One of the biggest and most respected sources of these systematic reviews is the Cochrane Collaboration, who cover all areas of medicine.  Happily, they also have a few reviews related to homeopathy, and that seems as good a place to start as any.  The most recently published is:

 Homeopathic Oscillococcinum® for preventing and treating influenza and influenza-like illness

The authors searched multiple databases of medical literature, covering a time period dating back to the mid-60s and all the way up until August 2012.  That’s a lot of literature.  Out of all the results they found six randomised, placebo-controlled trials of Oscillococcinum that were similar enough to be directly compared.  Since we’re not really interested in efficacy in this review, I’ll skip straight to the safety part: out of these six trials, including a total of 1,523 people, there was one reported adverse event.  One. It happened to be a headache. Let’s stop and think about that for a moment.

A good quality randomised controlled trial collects every single adverse event that happens to every single patient.  And the use of the term “adverse event” is very deliberate, because it includes absolutely everything unexpected and unwelcome that happens (and here’s the key part) whether or not it’s likely to be related to taking the drug.  That might sound counter-intuitive, but the reason is simple – we want to pick up every possible side effect of drugs, and sometimes side effects are…weird.  So it might sound odd to include as an adverse event that someone got hit by a bus, but what if the drug they were taking made them dizzy, or confused, or clumsy?  It’s not unreasonable to suggest that any one of those things could end up getting you involved in a traffic accident.  So every single little thing is recorded, and once the trial is over you do some sums to work out the key question – are these things *more likely to happen in the people who took the drug*? If 20 people broke a leg but they were equally spread out among the trial groups then nothing further needs to be said; if 19 of them were on the drug being studied then there might be something to worry about.  The flip side of that of course is that if 19 were in the placebo group, you might want to wonder if the drug is (perhaps unintentionally) promoting better balance and co-ordination, for example (or if everyone in the placebo group was a keen but inept snowboarder).
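To put some entirely hypothetical numbers on that broken-leg example, here is a small sketch of the kind of sum involved. It assumes two equal-sized trial arms, so that under the boring explanation each broken leg is equally likely to have happened in either group.

```python
# Hypothetical numbers for the broken-leg example: 20 fractures in total,
# 19 of them in the drug arm, with two equal-sized trial arms. Under the
# boring explanation, each fracture is a fair coin flip between the arms.
from math import comb

total_events = 20
events_in_drug_arm = 19

# Chance of an imbalance at least this extreme arising by luck alone.
p = sum(comb(total_events, k) for k in range(events_in_drug_arm, total_events + 1)) / 2 ** total_events
print(f"Chance of 19 or more of the 20 fractures landing in one arm: {p:.6f}")  # ~0.00002
# Small enough that you would start asking hard questions about the drug.
```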

Is that one single adverse event out of over 1,500 people taking Oscillococcinum starting to look fishy yet?  What about if I drop in the snippet that some of the people involved (327, to be precise) took the remedy every day for four weeks, to see if it stopped them from getting flu in the first place?  How many times in four weeks would an average, healthy person experience something that you could call an adverse event – a headache, a tummy upset, indigestion, a strained ankle, a touch of insomnia?  I’ve had three of those things in the last 24 hours, and I wouldn’t say I’m a particularly remarkable individual.
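Here is the back-of-envelope version of that question. The rate of one minor niggle per person per week is my own deliberately conservative guess, not a figure from the trials.

```python
# Back-of-envelope: how many everyday "adverse events" would you expect?
# The one-niggle-per-person-per-week rate is an assumption, and a conservative one.
people = 327     # took the remedy daily for four weeks, per the review
weeks = 4
assumed_events_per_person_per_week = 1

expected_events = people * weeks * assumed_events_per_person_per_week
print(f"Expected everyday adverse events: ~{expected_events}")  # ~1308, versus one reported
```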

So hopefully you can see from this that there’s simply a huge, yawning hole in the evidence about safety in homeopathy.  There are ways and means to address this (though they’re far from perfect), and I’ll address one of those in my next post in this series.

The problem with anecdotes

We’re funny creatures, human beings – very easily swayed, and ruled by emotion a great deal more than some might believe. Most people like to think that they are quite rational, quite sensible; most people probably (secretly) think that they are more sensible than the average person. And yet we also believe some quite remarkable things.

Take homeopathy as an example; I have spoken to more than one very bright, intelligent person who has said words to the effect of “I know that homeopathic remedies don’t contain any active ingredient, and it’s probably a load of rubbish, but they definitely work for me”.

One of the most seductive forms of persuasion, for some reason, is anecdote. We see them all the time, often by another name – a testimonial, a case study, an interview – but they all mean the same thing. Anecdotes seem to be a powerful way for people to communicate ideas – they make them relatable, and understandable. They also add a generous splash of emotion to the issue, and that’s not a particularly good thing.

Because the fact is this: anecdote is not reliable.  While we may think that we’re rational and swayed only by hard evidence, actually we human beings are all too easily tricked by all manner of things.  Among the most common is confirmation bias, the name given to the very common scenario where we take action in the hope of producing some result and, if that result is obtained, assume that it was our action which caused it.  There are lots and lots of very common examples – taking Echinacea for the common cold (or homeopathy, for that matter), or arnica for a bruise.  These things may or may not work, but the very fact that we expect them to have an effect means that when the inevitable happens and our cold clears up, we will inevitably attribute that result to the remedy.

This is also a great example of the very similar concept of “regression to the mean”, which simply means that lots of things have a very predictable nature, and tend to get better on their own.  Everyone knows that a cold will go away whether you treat it or not, but if you did happen to take that Echinacea tablet, isn’t it tempting to think that you helped yourself get better?

And that’s why anecdote is not reliable; these and many, many other biases and problems come into play, so that in most cases we have no idea if the “treatment” worked, or sometimes even if the person was sick in the first place.  A fellow called Dr Moran, a retired surgeon from Australia, has a really rather lovely little website that gives us some tools to try and remain vigilant against this kind of thing.  He has an article called How to Evaluate a Cancer Cure Testimonial, but really it just takes a few minor tweaks to make it into “How to evaluate a medical anecdote”.

So here are my golden rules:

  1. Was the person definitely ill, as shown by reliable tests, when the treatment was started?
  2. Did they get better, as judged by the same tests?
  3. Was the advocated treatment the only one used?

I haven’t really covered that 3rd point here, but it’s probably self-evident: if you did more than one thing to help yourself get better, how do you know which one worked?

And that’s it.  It’s perhaps not particularly important if you’re just thinking about taking a herbal medicine for the common cold, but in other cases it might be.  I’d urge anyone thinking about investing significant resources into a treatment (whether time, money, emotional input, or anything else) where the only evidence is an anecdote, testimonial, or similar to think through those three points.  Be honest with yourself about the answers.  You might just save yourself some cash, or some heartache.

A wild idea to improve science communication

As I discussed in my last post, science communication is really important.  That’s especially true when it comes to healthcare, because the way we communicate medical science affects the decisions people make about their own health.  As Ben Goldacre points out in his book Bad Science, there is almost no teaching in school science lessons about risk or other real-world science, yet something like half of the science stories in our national press are medical ones.

So what can we do about it?  One obvious answer is teach these things in a useful and interesting way in schools, but that’ll take time.  In the meantime, a colleague of mine had a great idea today: get the biomedical journals to help us by including a plain language summary.  This isn’t a particularly new concept, and in fact the Cochrane Collaboration already does it on all of their systematic reviews (here’s a recent example), but it’s a simple thing that might help increase public understanding of basic science.

Let’s look at an example to see how it might work.  A good sample paper was published online in the New England Journal of Medicine (NEJM) on the 1st of October, entitled “Trastuzumab Emtansine for HER2-Positive Advanced Breast Cancer”.  The story was widely reported (by The Daily Telegraph, Daily Mail, Daily Mirror, and Channel 4 News among others), and also covered by the ever-excellent Behind the Headlines.   The abstract of the original research, as is usual, is publicly available both on the NEJM’s website, and at Pubmed. Abstracts are designed to give an overview of the important points of a journal article, and they’re great if you’re comfortable with the language used and are aware of their limitations.

But what if you’re not a doctor, scientist, pharmacist, or someone otherwise used to reading stuff like this?  A lot of the language and concepts used in abstracts are completely meaningless unless you have the necessary training and experience to interpret them.  Taking the story above as an example, the abstract contains the following wonderful sentence:

Among 991 randomly assigned patients, median progression-free survival as assessed by independent review was 9.6 months with T-DM1 versus 6.4 months with lapatinib plus capecitabine (hazard ratio for progression or death from any cause, 0.65; 95% confidence interval [CI], 0.55 to 0.77; P<0.001), and median overall survival at the second interim analysis crossed the stopping boundary for efficacy (30.9 months vs. 25.1 months; hazard ratio for death from any cause, 0.68; 95% CI, 0.55 to 0.85; P<0.001).

Yes, that’s all one sentence.  It contains 77 words (the recommended maximum sentence length to make text readable is 20-25 words), not to mention words and concepts that will be alien to most people.  What’s a confidence interval?  A hazard ratio?  A stopping boundary for efficacy?  For the intended audience that sentence is chock-full of useful information; for Joe Average it’s useless.

So what if journals started adding a bit?  What if the usual structure was Background, Methods, Results, Conclusions, Plain language summary?  We wouldn’t need the full abstract to be “translated”, just 2-3 sentences that give the reader the gist of the meaning.  For the breast cancer story we’ve been concentrating on, I’d suggest something like the following:

This trial looked at a drug called T-DM1, which is designed to treat a type of breast cancer called HER-2 positive breast cancer. T-DM1 was compared to a combination of two drugs that are already available.  It found that cancer progression was delayed by about three months in women who received T-DM1.  These women also lived roughly six months longer.

That’s a first attempt and just an example; I’m sure people more skilled with words than I am could improve it vastly.  But hopefully it illustrates the point; it takes relatively little effort to make the key points of a clinical trial much more accessible to the general public.
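For what it’s worth, the plain numbers in that suggested summary come straight from the abstract quoted earlier; the “translation” is nothing more exotic than a couple of subtractions.

```python
# Where the plain language numbers come from (figures taken from the quoted abstract).
pfs_tdm1, pfs_comparator = 9.6, 6.4    # median progression-free survival, months
os_tdm1, os_comparator = 30.9, 25.1    # median overall survival, months

print(f"Progression delayed by about {pfs_tdm1 - pfs_comparator:.1f} months")  # 3.2 -> "about three months"
print(f"Lived roughly {os_tdm1 - os_comparator:.1f} months longer")            # 5.8 -> "roughly six months"
```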

So what to do about it?  Well, I plan to pick some key biomedical journals and just ask them nicely.  I’d like you to join me, because we all know that a crowd of people making noise is more influential than one person alone.  To make it easier I’ll make a list of contact details in an update to this post, as soon as I have time – hopefully this evening.  If you have any suggestions for publications we should write to, leave a comment or send me a tweet (@Skanky_fish).  If you have any ideas to improve this little scheme, please do likewise.  It’s a very simple idea, and hopefully one that could make a small difference in the public understanding of medical science.

The obligatory First Post

I am a troubled blogger. I find myself overcome with ideas whilst out and about, or doing the dishes, swinging a kettlebell, or during some other activity that means I can’t possibly sit down and write. But naturally, as soon as I sit down at a keyboard, my brain empties. So you may have to simply take my word for it for now, dear Reader, that I intend this blog to be a home for some of my thoughts on science and quackery.  Let me expand.

In my day job, I am an information scientist.  More specifically, I specialise in medical information.  This means that day-to-day I read clinical trials of new drugs, and try to make judgements on whether there is any place in the modern NHS for them.  It’s rarely a straightforward decision.  Often trials will show that a new drug is about as good as an old one, and might have fewer side effects. So why not use it? Well, even really, really big trials generally only recruit maybe 20,000 people (and more usually somewhere in the region of 200-1000), and that’s just not enough. What if there’s a side effect that’s really rare but really serious?  What if 1 in 10,000 people will just drop dead instantly? Or what if there really are slightly fewer side effects, but the drug costs 5 times as much?  Is it enough if the old drug causes 1 in 10 people to get a headache, while the new drug reduces that to 1 in 20, but we can only afford to give it to half as many people?
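To give a flavour of the sort of sums involved, here is a small sketch of two of those questions. The trial size and the half-as-many-people budget constraint are illustrative assumptions of mine, not figures from any real appraisal.

```python
# Illustrative sums only; the trial size and budget constraint are assumptions.

# 1) How likely is a 1,000-person trial to see a 1-in-10,000 side effect even once?
trial_size = 1_000
risk = 1 / 10_000
p_seen = 1 - (1 - risk) ** trial_size
print(f"Chance the trial sees the rare side effect at all: {p_seen:.1%}")  # ~9.5%

# 2) Old drug: 1 in 10 get a headache. New drug: 1 in 20, but the budget
#    only stretches to treating half as many people.
patients = 10_000
print(f"Headaches on the old drug: {patients // 10}")                      # 1000
print(f"Headaches on the new drug: {(patients // 2) // 20}, "
      f"with {patients // 2} people left untreated")                       # 250, and 5000 untreated
```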

If that all seems rather bewildering and bemusing, that’s because, quite frankly, it is.  Very rarely is there a clear-cut answer. More often I attempt to put the whole thing into some kind of context and allow the good people of our regional Medicines Management teams to make the best decisions they can. I do not envy them.

All of that was a rather rambling way for me to try and explain why evidence is important to me. Good quality, unbiased evidence is a rare and wonderful thing, and it is the only way we can reliably make sound, rational decisions about the world around us. Quackery is the very opposite of this; it relies on anecdote and emotion to sway us. Unfortunately, as human beings we are very, very susceptible to anecdote, and to all sorts of insidious things like confirmation bias. Often this is harmless, but in far too many instances it does very real harm.

I owe a thank you to the lovely Hayley, my companion on this little campaign for rational thinking, for joining me on this journey so far. She’s started a blog of her own at A Healthy Dose of Skepticism, and you’ll see from it that we’re on a very similar path.  Her first post also explains a little more about how we each got to this point, where reading about these issues is no longer enough but we must now add our voices to the hubbub. I suspect that, if you stick with me, you’ll be hearing plenty from her, too.

Given my professional background I rather expect this blog to have a strong leaning towards medical content; however, quackery and poor decision-making are rife in many arenas of life, so don’t be surprised if other things creep in too.  I will at least try to be interesting, or if all else fails, give you the occasional chuckle.