Category Archives: Science communication

Homeopathic Harms Vol 5: Interactions

I might be a little quiet for the next wee while due to some unforeseen circumstances, but to tide you over here’s the next instalment in the Homeopathic Harms series by @SparkleWildfire – interactions.

In the next instalment of our series on the harms of homeopathy, I want to talk about interactions. I’ve touched on this in the past, but let’s look at the area in a bit more detail.

We all hopefully know by now that homeopathic medicines contain pretty much no trace of active ingredient. So do we need to worry about drug interactions with homeopathic remedies?

Can homeopathic medicines interact with conventional medicines?
The obvious answer is no. Magic Sugar Water Pills are highly unlikely to affect any conventional medicines. There’s a lack of actual evidence to prove this, but I think it’s pretty safe to rely on a theoretical basis here. So that’s great, right? Blog post over, see you later. If only it were that simple.

Read the rest over at A Healthy Dose of Skepticism.


Homeopathic Harms Vol 4: OK, there’s SOME evidence

Last time I discussed the problem of missing evidence of harm in homeopathy trials and consequently in systematic reviews.  This time, I’m going to discuss some evidence of harm that we DO have. Sadly, it’s not comforting.

In December 2012, a systematic review of the adverse effects of homeopathy was published in the International Journal of Clinical Practice (aside: for a quick explanation of systematic reviews and adverse effects, take a look at volume 2 in this blog series).  The authors of this review searched five databases of medical literature totalling nearly 50 million published records (though likely with considerable overlap), and found just 38 articles that discussed case reports and case series of adverse events with homeopathy.

It’s worth noting at this point that if systematic reviews are the pinnacle of the evidence pyramid, case reports and case series are somewhere towards the middle or bottom, depending on who you ask.  They’re not ideal, because they’re not rigorous – they rely on someone not only noticing an adverse event and linking it to homeopathy, but taking the time to sit down, write it up, and submit it to a journal.  Then of course they’ve got to find a journal willing to publish it.  If any of these steps doesn’t happen, there’s no published evidence for the rest of us to base our decisions on.  So if our systematic review found 38 published reports, the obvious question is “how many were never recognised, written up, or published?”  We’ll never know the answer to that.  Sadly, in the absence of high quality reports of harm from the published clinical trials, this is the highest level of evidence we have.

Back to the review.  The 38 retrieved reports contained information relating to 1,159 people from all over the world.  Surprisingly, only 17 of the reports related to indirect harms – the results of substituting conventional care with homeopathy – although some of those indirect harms were severe.  Several people were admitted to hospital (including intensive care) due to replacing their conventional medicines with homeopathy, at least one was left with permanent effects, and one person died.

That leaves 1,142 people who suffered *direct* adverse effects as a result of using homeopathy.  This seems rather counter-intuitive, and I’m at a loss to explain many of them given that your average homeopathic remedy contains precisely no active ingredient.  The authors of the review suggest that perhaps allergic reactions or ingestion of toxic metals (like arsenic or mercury) might be partly to blame.  They also suggest that low dilutions of remedies might be a potential source of adverse effects, but point out that the vast majority of these reports were associated with remedies at 12C potency or below.  To be clear, 12C is the dilution factor at which the chance of a remedy containing even one molecule of the original parent substance is effectively zero.
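To put rough numbers on that last point, here’s a quick back-of-the-envelope sketch in Python.  It assumes, purely for illustration, that the mother tincture starts with about one mole of parent substance (real remedies will vary):

    # Each "C" step is a 1-in-100 dilution, so an nC remedy dilutes the
    # parent substance by a factor of 100**n.
    AVOGADRO = 6.022e23            # molecules in one mole
    starting_molecules = AVOGADRO  # assumption: ~1 mole of parent substance

    for potency in (6, 12, 30):
        dilution_factor = 100 ** potency
        expected = starting_molecules / dilution_factor
        print(f"{potency}C: ~{expected:.1e} molecules of parent substance left")

    # Prints roughly:
    # 6C: ~6.0e+11 molecules of parent substance left
    # 12C: ~6.0e-01 molecules of parent substance left
    # 30C: ~6.0e-37 molecules of parent substance left

By 12C the expected number of surviving molecules has already dropped below one, which is why remedies at that potency or beyond can’t plausibly deliver a pharmacological dose.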

But whatever the mechanism, it seems clear that the review provides evidence of direct harm being caused by homeopathy.  Some of these harms were reported simply as “mild”, with no other details offered.  Some were potentially very distressing, like dermatitis, hair loss, and migraine.  Some were very serious indeed, including anaphylaxis (life-threatening allergy), acute pancreatitis, cancers, and coma.  Once again the consequences of the effects included hospitalisation, admission to intensive care units, and death.  For a treatment modality generally touted as totally safe, that’s a pretty alarming set of side effects.

So what can we learn from it?  There’s a valid argument to be made that there’s little point conducting more randomised controlled trials of homeopathy, because all of the good quality ones end up showing the same thing: no benefit over placebo.  But where more trials are conducted, we should be demanding that all adverse effects are collected and reported in the same manner as trials of new medicines.  Case series and reports are not proof of causation, but there is a bulk of evidence here that is concerning, and which should be addressed.  The best way to do that is in good quality trials.

In the meantime, is there anything else we can do?  Yes there is – in the UK at least.  The UK medicines regulator, the MHRA, runs the Yellow Card Scheme.  This is a mechanism by which anyone can report any side effect they experience after taking a medication.  I would strongly urge anyone who has suffered an adverse event after using homeopathy (or who knows someone who has) to visit www.mhra.gov.uk/yellowcard.  It’s quick and simple, and will help make remedies safer for everyone.  Similar schemes will be coming into effect throughout the EU soon, but if you live elsewhere please check and see if there’s anything similar.  We need all the data we can get!

Does homeopathy have a place in therapy?

Given the current blog series I’m collaborating on regarding the potential harms of homeopathy, I thought it might be useful to stop for a moment and discuss its appropriate place in therapy. Do I believe that informed, consenting adults should be able to choose homeopathy as part of a treatment regime? Yes.

Are most people fully informed, and therefore able to give full consent? No, I don’t believe they are.

If people are making their treatment decisions based simply on assertions like “it’s safe” or “it’s gentle and natural” or (worst of all) “it works for me”, they are not fully informed. (To see why “it works for me” isn’t adequate, take a look at my post on anecdotes). This creates an ethical problem that should be insurmountable for any decent healthcare provider.

The evidence in favour of homeopathy simply does not reach the standard that we demand of conventional medicines. The evidence that it has the potential to cause harm (as we are showing with the Homeopathic Harms blog series) is very real.  Does a patient tend to feel better after seeing a homeopath?  Probably.  In these days of seven-minute GP consultations, the chance to sit down for an hour with someone who wants to listen, and dig deeper, and really *help* you is no doubt a lovely thing.  Should we mistake that for evidence that homeopathy is a beneficial discipline? No.  Should we allow double standards by accepting lower quality evidence for homeopathy (or any complementary medicine) than we do for conventional medicine? No way.

But knowing all of this – knowing that the most “potent” homeopathic remedies contain precisely no active ingredient, that there is no evidence of benefit beyond the placebo effect, and that at best users will derive no therapeutic effect while at worst they may experience serious side effects – should an adult be allowed to choose homeopathy for themselves? Sure.  Do many users of homeopathy meet these basic criteria for informed consent? I very much doubt it.

Plain language summary: calcium supplements and heart attacks

The research:

Calcium supplements with or without vitamin D and risk of cardiovascular events: reanalysis of the Women’s Health Initiative limited access dataset and meta-analysis

The summary:

This trial took information from 36,282 women who had been through the menopause, and looked at whether taking calcium supplements made them more likely to have a heart attack.  Half of the women were given calcium and vitamin D supplements, while half were given a placebo (sugar pill).  The trial found that taking calcium and vitamin D slightly increased the risk of heart problems, including heart attacks.  Some women took their own personal calcium supplements as well as those provided by the study, and these women had no increased risk of heart attack.

Re-analysis of some older trials found that calcium and vitamin D increased the risk of heart attacks and strokes.  The way that calcium supplements are used should be examined, to see if change is needed.

The caveats:

This paper appears to find that women who take the highest amount of calcium (their own tablets plus the ones provided by the study) have no increase in risk compared to women who don’t take any calcium at all.  If this were a true effect, we would expect women who take the most calcium to have the highest risk.  Other authors have published papers that find no evidence of risk with calcium and vitamin D supplements.

This paper is discussed in more detail in this blog post.

Plain language summary: Trastuzumab emtansine for breast cancer

The research:

Trastuzumab Emtansine for HER2-Positive Advanced Breast Cancer.

The summary:

This trial looked at a drug called T-DM1, which is designed to treat a type of breast cancer called HER2-positive breast cancer. T-DM1 was compared to a combination of two drugs that are already available.  It found that cancer progression was delayed by about three months in women who received T-DM1, from 6.4 months to 9.6 months.  These women also lived roughly six months longer than the other group. Serious side effects were common in both groups, but they were less common in women receiving T-DM1.

The caveats:

The women recruited for this trial were reasonably healthy when they started.  Women were only allowed to enrol if:

  • they had breast cancer that had failed to respond to treatment with trastuzumab (which is one of the components of T-DM1) plus another drug;
  • their breast cancer had spread (either locally or more widely); and
  • they were still able to perform light work, such as housework or an office job.

In real-world conditions, women who receive treatment with T-DM1 might not be this fit, so might not get the same benefits as the women in the trial.

The problem with abstracts

I’ve written at reasonable length here and here about why I think plain language summaries on scientific abstracts would be a good idea. I plan to start producing a few myself (and if anyone would like to volunteer to chip in, I’d be more than grateful!). Before I start though, I thought a brief post on why we should treat abstracts with a little caution might be a good idea.

Abstracts are pretty ubiquitous in the scientific literature. They’re designed to give a brief overview of a paper, summarising the methods and the main results so that the reader can tell whether the article is relevant to their needs without reading the whole thing. If the abstract looks promising, that’s a good sign that it’s worth investing the time to read the full paper.

But there are some caveats. Some of the same temptations that journalists face crop up – it’s very alluring to state your findings in the grandest terms, to get people interested in reading the whole paper. In fact, it’s not as uncommon as you might think to find that abstracts are actually misleading. A recent paper (and I do realise the irony of linking to an abstract here!) found that nearly a quarter of randomised controlled trials in the field of rheumatology have misleading claims in their abstracts. Another recent paper (and you can read the whole thing this time) found that when abstracts are misleading, or contain “spin”, this in turn leads to spin in media reports of the research.

I suspect (or rather, hope) that all of this is usually done with relatively pure intentions, but there’s no getting away from it – misleading information is not useful, and potentially harmful.

The other important thing to consider is that abstracts are short – they simply do not contain all of the detail of the full paper, as that would rather defeat the object. But this means that they miss out some information that can sometimes be crucial. For instance, the abstract may state that a trial is randomised, while the text reveals that randomisation was done using a method that’s not robust. A small detail like this might cast doubt over the findings of the entire trial, but you won’t find it in the abstract. That’s why it’s always important to fully “critically appraise” a paper (in other words, take it apart and check for holes) before taking its conclusions at face value.

So as I embark on my little experiment, I’ll be bearing all of this in mind; and you should too. I will do my best to honestly represent the studies I cover, and I’ll read the full thing before producing my summaries whenever possible. However, there’s only one of me, and like most people in the modern world I have rather limited time available to do this in. Be patient with me, and take everything with a pinch of salt.

Update: The first plain language summary, trastuzumab emtansine for breast cancer, is now online.  All future summaries will be published under the Plain language summaries category.  As ever, all feedback gratefully received!

Plain language summaries – an update

Thank you for all of the responses to my post last Friday, describing my colleague’s idea for plain language summaries to be included in journal abstracts.  The reaction was pretty positive, with some welcome constructive criticism too. 

That post was written very much on the spur of the moment (during my lunch break just after the idea was suggested, to be precise!), so after a little time to calm down and cogitate I’ve decided to run something of a pilot scheme, or a proof of concept.  I’ve created a new blog category over to the right there called “Plain language summaries”, and I’ll be adding as many abstracts there as I have time to get through (disclaimer: that might not be many). I’ll kick things off with a general discussion of the strengths and limitations of abstracts (including how they can be misleading in their own right), and then hopefully there will be a steady trickle of content. 

So once again watch this space, and if anyone has any ideas or comments on the feasibility of what I’m trying to do here (or wants to help!), do get in touch.  I know I already have one recruit in the form of the lovely Hayley (from A Healthy Dose of Skepticism), but this is definitely a case of the more the merrier!

A wild idea to improve science communication

As I discussed in my last post, science communication is really important.  That’s especially true when it comes to healthcare, because the way we communicate medical science affects the decisions people make about their own health.  As Ben Goldacre points out in his book Bad Science, there is almost no teaching in school science lessons about risk or other real-world science, yet something like half of the science stories in our national press are medical ones.

So what can we do about it?  One obvious answer is to teach these things in a useful and interesting way in schools, but that’ll take time.  In the meantime, a colleague of mine had a great idea today: get the biomedical journals to help us by including a plain language summary.  This isn’t a particularly new concept, and in fact the Cochrane Collaboration already does it on all of their systematic reviews (here’s a recent example), but it’s a simple thing that might help increase public understanding of basic science.

Let’s look at an example to see how it might work.  A good sample paper was published online in the New England Journal of Medicine (NEJM) on the 1st of October, entitled “Trastuzumab Emtansine for HER2-Positive Advanced Breast Cancer”.  The story was widely reported (by The Daily Telegraph, Daily Mail, Daily Mirror, and Channel 4 News among others), and also covered by the ever-excellent Behind the Headlines.   The abstract of the original research, as is usual, is publicly available both on the NEJM’s website, and at Pubmed. Abstracts are designed to give an overview of the important points of a journal article, and they’re great if you’re comfortable with the language used and are aware of their limitations.

But what if you’re not a doctor, scientist, pharmacist, or someone otherwise used to reading material like this?  A lot of the language and concepts used in abstracts are completely meaningless unless you have the necessary training and experience to interpret them.  Taking the story above as an example, the abstract contains the following wonderful sentence:

Among 991 randomly assigned patients, median progression-free survival as assessed by independent review was 9.6 months with T-DM1 versus 6.4 months with lapatinib plus capecitabine (hazard ratio for progression or death from any cause, 0.65; 95% confidence interval [CI], 0.55 to 0.77; P<0.001), and median overall survival at the second interim analysis crossed the stopping boundary for efficacy (30.9 months vs. 25.1 months; hazard ratio for death from any cause, 0.68; 95% CI, 0.55 to 0.85; P<0.001).

Yes, that’s all one sentence.  It contains 77 words (the recommended maximum sentence length to make text readable is 20-25 words), not to mention words and concepts that will be alien to most people.  What’s a confidence interval?  A hazard ratio?  A stopping boundary for efficacy?  For the intended audience that sentence is chock-full of useful information; for Joe Average it’s useless.
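As an aside, the word count is easy to check for yourself.  Here’s a trivial Python sketch (the sentence is truncated here for space; paste the full text in to reproduce the count, give or take how you treat the numbers and brackets):

    # Count whitespace-separated words and compare against the guideline.
    RECOMMENDED_MAX = 25  # upper end of the 20-25 word readability guideline

    sentence = "Among 991 randomly assigned patients, ..."  # paste the full sentence here

    words = len(sentence.split())
    verdict = "over" if words > RECOMMENDED_MAX else "within"
    print(f"{words} words – {verdict} the recommended maximum")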

So what if journals started adding a bit?  What if the usual structure was Background, Methods, Results, Conclusions, Plain language summary?  We wouldn’t need the full abstract to be “translated”, just 2-3 sentences that give the reader the gist of the meaning.  For the breast cancer story we’ve been concentrating on, I’d suggest something like the following:

This trial looked at a drug called T-DM1, which is designed to treat a type of breast cancer called HER-2 positive breast cancer. T-DM1 was compared to a combination of two drugs that are already available.  It found that cancer progression was delayed by about three months in women who received T-DM1.  These women also lived roughly six months longer.

That’s a first attempt and just an example; I’m sure people more skilled with words than I am could improve it vastly.  But hopefully it illustrates the point: it takes relatively little effort to make the key points of a clinical trial much more accessible to the general public.

So what to do about it?  Well, I plan to pick some key biomedical journals and just ask them nicely.  I’d like you to join me, because we all know that a crowd of people making noise is more influential than one person alone.  To make it easier I’ll make a list of contact details in an update to this post, as soon as I have time – hopefully this evening.  If you have any suggestions for publications we should write to, leave a comment or send me a tweet (@Skanky_fish).  If you have any ideas to improve this little scheme, please do likewise.  It’s a very simple idea, and hopefully one that could make a small difference in the public understanding of medical science.

Why is science communication important?

Science communication is, in general, very poorly accomplished in our society.  Newspapers like snappy headlines because they’re catchy, and they sell papers.  The public likes snappy headlines because they’re eye-catching, and easy to digest.  But what if that’s all you see?  What if you don’t have time to read the paper, but catch sight of the headline?  A recent example is the news story that “Frozen chips are a cause of cancer“, as reported by The Daily Telegraph, the Daily Mail, the Metro and the Daily Express.  If you just saw that headline you’d probably be frightened, although if you read the articles you might be comforted to some extent.  It turns out they’re reporting the results of a study that found that frozen chips can contain a chemical called acrylamide, which is known to cause cancer in mice.

Or, worse than just seeing the headline, what if you read the whole article in the Telegraph, which doesn’t point out that there are uncertainties around the cancer-causing properties of acrylamide?

That’s just one recent example, taken from the excellent Behind the Headlines section of NHS Choices (if ever you see a medical story in the press that you find disturbing, or if you just want to know more, it’s always worth checking Behind the Headlines for a balanced write-up of the story). But hopefully that example illustrates why good science communication is important.  If you only get part of the story, or you get a distorted version of it, there is a very real risk of harm.  In this example the harms are pretty minor – you might not get to eat as many chips as you’d like (and it’s very easy to argue that that’s a benefit, not a harm), and you may suffer some anxiety or stress over chips already eaten.

But what if the story’s more important?  What if you read the headline about a link between deafness and painkillers, and stopped taking your pain medication (when actually the study linked only regular use to hearing impairment, rather than deafness)?  Or if you read that altered sleep patterns are a warning sign of Alzheimer’s and started to self-diagnose and panic (when actually this study looked only at mice, and not humans)?  The possibilities for harm are very real.

And that’s why good science communication is so important.  One of the things I hope to do with this blog is take a leaf out of Behind the Headlines’ book, and try to present important scientific research in a reader-friendly way.  I’ll be starting with the recent story that genetically modified corn causes cancer in rats; there’s been a lot of coverage on that, but because it’s something I feel strongly about I thought I’d stick my oar in too.  It might take a couple of days (the paper is complex) but watch this space!