Category Archives: The importance of evidence

Homeopathic Harms Vol 4: OK, there’s SOME evidence

Last time I discussed the problem of missing evidence of harm in homeopathy trials and consequently in systematic reviews.  This time, I’m going to discuss some evidence of harm that we DO have. Sadly, it’s not comforting.

In December 2012, a systematic review of the adverse effects of homeopathy was published in the International Journal of Clinical Practice (aside: for a quick explanation of systematic reviews and adverse effects, take a look at volume 2 in this blog series).  The authors of this review searched five databases of medical literature totalling nearly 50 million published articles (though likely with considerable overlap between them), and found just 38 articles discussing case reports and case series of adverse events associated with homeopathy.

It’s worth noting at this point that if systematic reviews are the pinnacle of the evidence pyramid, case reports and case series sit somewhere towards the middle or bottom, depending on who you ask.  They’re not ideal, because they’re not rigorous – they rely on someone not only noticing an adverse event and linking it to homeopathy, but also taking the time to sit down, write it up, and submit it to a journal.  Then of course they’ve got to find a journal willing to publish it.  If any of these steps doesn’t happen, there’s no published evidence for the rest of us to base our decisions on.  So if our systematic review found 38 published reports, the obvious question is “how many were never recognised, written up, or published?”  We’ll never know the answer to that.  Sadly, in the absence of high-quality reports of harm from the published clinical trials, this is the highest level of evidence we have.

Back to the review.  The 38 retrieved reports contained information relating to 1,159 people from all over the world.  Surprisingly, only 17 of those people suffered indirect harms – the results of substituting conventional care with homeopathy – although some of those indirect harms were severe.  Several people were admitted to hospital (including intensive care) as a result of replacing their conventional medicines with homeopathy, at least one was left with permanent effects, and one person died.

That leaves 1,142 people who suffered *direct* adverse effects as a result of using homeopathy.  This seems rather counter-intuitive, and I’m at a loss to explain many of them given that your average homeopathic remedy contains precisely no active ingredient.  The authors of the review suggest that perhaps allergic reactions or ingestion of toxic metals (like arsenic or mercury) might be partly to blame.  They also suggest that low dilutions of remedies might be a potential source of adverse effects, pointing out that the vast majority of these reports were associated with remedies at 12C potency or below.  To be clear, 12C is the dilution at which the chance of a remedy containing even one molecule of the original parent substance becomes effectively zero – below that, some of the starting material may genuinely remain.
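(If you want to check that 12C claim, the arithmetic is simple enough to sketch in a few lines of Python.  This is purely an illustration of the dilution maths, assuming a generous one mole of starting material; the figures below are mine, not the review’s.)

    # Rough arithmetic behind the "12C" claim.  Each C step is a 1-in-100
    # dilution, so an nC remedy dilutes the starting material by 100**n.
    AVOGADRO = 6.022e23  # molecules in one mole

    def expected_molecules(c_potency, starting_moles=1.0):
        """Expected molecules of parent substance left after c_potency 1:100 dilutions."""
        return starting_moles * AVOGADRO / (100 ** c_potency)

    for c in (6, 12, 30):
        print(f"{c}C: about {expected_molecules(c):.1g} molecules expected")

    # 6C: about 6e+11 molecules; 12C: about 0.6 (i.e. most doses contain
    # none at all); 30C: about 6e-37 -- effectively zero, many times over.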

But whatever the mechanism, it seems clear that the review provides evidence of direct harm being caused by homeopathy.  Some of these harms were reported simply as “mild”, with no other details offered.  Some were potentially very distressing, like dermatitis, hair loss, and migraine.  Some were very serious indeed, including anaphylaxis (life-threatening allergy), acute pancreatitis, cancers, and coma.  Once again the consequences of the effects included hospitalisation, admission to intensive care units, and death.  For a treatment modality generally touted as totally safe, that’s a pretty alarming set of side effects.

So what can we learn from it?  There’s a valid argument that there’s little point in conducting more randomised controlled trials of homeopathy, because all of the good-quality ones end up showing the same thing: no benefit over placebo.  But where more trials are conducted, we should be demanding that all adverse effects are collected and reported in the same manner as in trials of new medicines.  Case series and case reports are not proof of causation, but there is a body of evidence here that is concerning, and it should be addressed.  The best way to do that is in good-quality trials.

In the meantime, is there anything else we can do?  Yes there is – in the UK at least.  The UK medicines regulator, the MHRA, runs the Yellow Card Scheme: a mechanism by which anyone can report any side effect they experience after taking a medication.  I would strongly urge anyone who has suffered an adverse event after using homeopathy (or who knows someone who has) to visit www.mhra.gov.uk/yellowcard.  It’s quick and simple, and it will help make remedies safer for everyone.  Similar schemes will be coming into effect throughout the EU soon, but if you live elsewhere, please check whether there’s something similar where you are.  We need all the data we can get!

A wild idea to improve science communication

As I discussed in my last post, science communication is really important.  That’s especially true when it comes to healthcare, because the way we communicate medical science affects the decisions people make about their own health.  As Ben Goldacre points out in his book Bad Science, there is almost no teaching in school science lessons about risk or other real-world science, yet something like half of the science stories in our national press are medical ones.

So what can we do about it?  One obvious answer is to teach these things in a useful and interesting way in schools, but that’ll take time.  In the meantime, a colleague of mine had a great idea today: get the biomedical journals to help us by including a plain language summary.  This isn’t a particularly new concept – the Cochrane Collaboration already does it on all of their systematic reviews (here’s a recent example) – but it’s a simple thing that might help increase public understanding of basic science.

Let’s look at an example to see how it might work.  A good sample paper was published online in the New England Journal of Medicine (NEJM) on the 1st of October, entitled “Trastuzumab Emtansine for HER2-Positive Advanced Breast Cancer”.  The story was widely reported (by The Daily Telegraph, Daily Mail, Daily Mirror, and Channel 4 News among others), and also covered by the ever-excellent Behind the Headlines.  The abstract of the original research is, as usual, publicly available both on the NEJM’s website and at PubMed.  Abstracts are designed to give an overview of the important points of a journal article, and they’re great if you’re comfortable with the language used and are aware of their limitations.

But what if you’re not a doctor, scientist, pharmacist, or someone otherwise used to reading stuff like this?  A lot of the language and concepts used in abstracts are completely meaningless unless you have the training and experience needed to interpret them.  Taking the story above as an example, the abstract contains the following wonderful sentence:

Among 991 randomly assigned patients, median progression-free survival as assessed by independent review was 9.6 months with T-DM1 versus 6.4 months with lapatinib plus capecitabine (hazard ratio for progression or death from any cause, 0.65; 95% confidence interval [CI], 0.55 to 0.77; P<0.001), and median overall survival at the second interim analysis crossed the stopping boundary for efficacy (30.9 months vs. 25.1 months; hazard ratio for death from any cause, 0.68; 95% CI, 0.55 to 0.85; P<0.001).

Yes, that’s all one sentence.  It contains 77 words (the recommended maximum sentence length to make text readable is 20-25 words), not to mention words and concepts that will be alien to most people.  What’s a confidence interval?  A hazard ratio?  A stopping boundary for efficacy?  For the intended audience that sentence is chock-full of useful information; for Joe Average it’s useless.
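(That count is easy to verify, if you’re so inclined – here’s a throwaway Python sketch, with the sentence truncated for space; the full version comes to 77 words.)

    # The readability complaint above, in miniature: count the words in a
    # sentence and compare against the oft-quoted 20-25 word guideline.
    GUIDELINE_MAX_WORDS = 25

    sentence = ("Among 991 randomly assigned patients, median progression-free "
                "survival as assessed by independent review was 9.6 months "
                "with T-DM1 versus 6.4 months with lapatinib plus capecitabine ...")

    n = len(sentence.split())  # crude count: whitespace-separated tokens
    print(f"{n} words -", "over the guideline" if n > GUIDELINE_MAX_WORDS else "fine")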

So what if journals started adding a section?  What if the usual structure was Background, Methods, Results, Conclusions, Plain language summary?  We wouldn’t need the full abstract to be “translated”, just 2-3 sentences that give the reader the gist of the findings.  For the breast cancer story we’ve been concentrating on, I’d suggest something like the following:

This trial looked at a drug called T-DM1, which is designed to treat a type of breast cancer called HER-2 positive breast cancer. T-DM1 was compared to a combination of two drugs that are already available.  It found that cancer progression was delayed by about three months in women who received T-DM1.  These women also lived roughly six months longer.

That’s a first attempt and just an example; I’m sure people more skilled with words than I am could improve it vastly.  But hopefully it illustrates the point; it takes relatively little effort to make the key points of a clinical trial much more accessible to the general public.

So what to do about it?  Well, I plan to pick some key biomedical journals and just ask them nicely.  I’d like you to join me, because we all know that a crowd of people making noise is more influential than one person alone.  To make it easier I’ll make a list of contact details in an update to this post, as soon as I have time – hopefully this evening.  If you have any suggestions for publications we should write to, leave a comment or send me a tweet (@Skanky_fish).  If you have any ideas to improve this little scheme, please do likewise.  It’s a very simple idea, and hopefully one that could make a small difference in the public understanding of medical science.

The obligatory First Post

I am a troubled blogger. I find myself overcome with ideas whilst out and about, doing the dishes, swinging a kettlebell, or engaged in some other activity that means I can’t possibly sit down and write. But naturally, as soon as I sit down at a keyboard, my brain empties. So for now, dear Reader, you may simply have to take my word for it that I intend this blog to be a home for some of my thoughts on science and quackery.  Let me expand.

In my day job, I am an information scientist.  More specifically, I specialise in medical information.  This means that day-to-day I read clinical trials of new drugs, and try to make judgements on whether there is any place for them in the modern NHS.  It’s rarely a straightforward decision.  Often trials will show that a new drug is about as good as an old one, and might have fewer side effects. So why not use it? Well, even really, really big trials generally only recruit around 20,000 people (and more usually somewhere in the region of 200-1,000), and that’s just not enough. What if there’s a side effect that’s really rare but really serious?  What if 1 in 10,000 people will just drop dead instantly? Or what if there really are slightly fewer side effects, but the drug costs five times as much?  Is it enough if the old drug causes 1 in 10 people to get a headache, while the new drug reduces that to 1 in 20, but we can only afford to give it to half as many people?
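(To put rough numbers on that “1 in 10,000” worry – a minimal sketch in Python, assuming every patient is independent and the true rate really is 1 in 10,000.  It’s illustrative only, and certainly not how regulators actually do the sums.)

    # How likely is a trial to see even ONE case of a side effect that
    # strikes 1 in 10,000 people?  With independent patients:
    #   P(at least one case) = 1 - (1 - rate) ** n_patients
    def p_at_least_one_case(rate, n_patients):
        return 1 - (1 - rate) ** n_patients

    RATE = 1 / 10_000
    for n in (200, 1_000, 20_000):
        print(f"{n:>6} patients: {p_at_least_one_case(RATE, n):5.1%} chance of seeing a case")

    # Roughly 2% at 200 patients, 10% at 1,000, and 86% even at 20,000 --
    # and a single case is rarely enough to pin the harm on the drug anyway.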

If that all seems rather bewildering and bemusing, that’s because, quite frankly, it is.  Very rarely is there a clear-cut answer. More often I attempt to put the whole thing into some kind of context and allow the good people of our regional Medicines Management teams to make the best decisions they can. I do not envy them.

All of that was a rather rambling way of explaining why evidence matters to me. Good quality, unbiased evidence is a rare and wonderful thing, and it is the only way we can reliably make sound, rational decisions about the world around us. Quackery is the very opposite of this; it relies on anecdote and emotion to sway us. Unfortunately, as human beings we are very, very susceptible to anecdote, and to all sorts of insidious things like confirmation bias. Often this is harmless, but in far too many instances it does very real harm.

I owe a thank-you to the lovely Hayley, my companion on this little campaign for rational thinking. She’s started a blog of her own at A Healthy Dose of Skepticism, and you’ll see from it that we’re on a very similar path.  Her first post also explains a little more about how we each got to this point, where reading about these issues is no longer enough and we must now add our voices to the hubbub. I suspect that, if you stick with me, you’ll be hearing plenty from her, too.

Given my professional background I rather expect this blog to have a strong leaning towards medical content; however, quackery and poor decision-making are rife in many arenas of life, so don’t be surprised if other things creep in too.  I will at least try to be interesting, or if all else fails, give you the occasional chuckle.