Monthly Archives: October 2012

Plain language summary: Trastuzumab emtansine for breast cancer

The research:

Trastuzumab Emtansine for HER2-Positive Advanced Breast Cancer.

The summary:

This trial looked at a drug called T-DM1, which is designed to treat a type of breast cancer called HER2-positive breast cancer. T-DM1 was compared to a combination of two drugs that are already available.  It found that cancer progression was delayed by about three months in women who received T-DM1, from 6.4 months to 9.6 months.  These women also lived roughly six months longer than the other group. Serious side effects were common in both groups, though they were less common in women receiving T-DM1.

The caveats:

The women recruited for this trial were reasonably healthy when they started.  Women were only allowed to enrol if:

  • they had breast cancer that had failed to respond to treatment with trastuzumab (which is one of the components of T-DM1) plus another drug
  • their breast cancer had spread (either locally or more widely)
  • they were still able to perform light work, such as housework or an office job.

In real-world conditions, women who receive treatment with T-DM1 might not be this fit, so might not get the same benefits as the women in the trial.

The problem with abstracts

I’ve written at reasonable length here and here about why I think plain language summaries on scientific abstracts would be a good idea. I plan to start producing a few myself (and if anyone would like to volunteer to chip in, I’d be more than grateful!). Before I start though, I thought a brief post on why we should treat abstracts with a little caution might be a good idea.

Abstracts are pretty ubiquitous in the scientific literature. They’re designed to give a brief overview of the paper, summarising the methods and the main results so that the reader can tell whether the article is relevant to their needs without reading the whole thing. If the abstract looks promising, that’s a good sign that the full paper is worth the time to read.

But there are some caveats. Some of the same temptations that journalists face crop up – it’s very alluring to state your findings in the grandest terms, to get people interested in reading the whole paper. In fact, misleading abstracts are more common than you might think. A recent paper (and I do realise the irony of linking to an abstract here!) found that nearly a quarter of randomised controlled trials in the field of rheumatology have misleading claims in their abstracts. Another recent paper (and you can read the whole thing this time) found that when abstracts are misleading, or contain “spin”, this in turn leads to spin in the media reports of the research.

I suspect (or rather, hope) that all of this is usually done with relatively pure intentions, but there’s no getting away from it – misleading information is not just unhelpful, it’s potentially harmful.

The other important thing to consider is that abstracts are short – they simply do not contain all of the detail of the full paper, as that would rather defeat the object. But this means that they omit information that can sometimes be crucial. For instance, the abstract may state that a trial is randomised, while the text reveals that randomisation was done using a method that’s not robust. A small detail like this might cast doubt over the findings of the entire trial, but you won’t find it in the abstract. That’s why it’s always important to fully “critically appraise” a paper (in other words, take it apart and check for holes) before taking its conclusions at face value.

So as I embark on my little experiment, I’ll be bearing all of this in mind; and you should too. I will do my best to honestly represent the studies I cover, and I’ll read the full thing before producing my summaries whenever possible. However, there’s only one of me, and like most people in the modern world I have rather limited time available to do this in. Be patient with me, and take everything with a pinch of salt.

Update: The first plain language summary, trastuzumab emtansine for breast cancer, is now online.  All future summaries will be published under the Plain language summaries category.  As ever, all feedback gratefully received!

The problem with anecdotes

We’re funny creatures, human beings – very easily swayed, and ruled by emotion a great deal more than some might believe. Most people like to think that they are quite rational, quite sensible; most people probably (secretly) think that they are more sensible than the average person. And yet we also believe some quite remarkable things.

Take homeopathy as an example; I have spoken to more than one very bright, intelligent person who has said words to the effect of “I know that homeopathic remedies don’t contain any active ingredient, and it’s probably a load of rubbish, but they definitely work for me”.

One of the most seductive forms of persuasion, for some reason, is anecdote. We see anecdotes all the time, often by another name – a testimonial, a case study, an interview – but they all amount to the same thing. Anecdotes seem to be a powerful way for people to communicate ideas – they make them relatable, and understandable. They also add a generous splash of emotion to the issue, and that’s not a particularly good thing.

Because the fact is this: anecdote is not reliable.  While we may think that we’re rational and swayed only by hard evidence, actually we human beings are all too easily tricked by all manner of things.  Among the most common is confirmation bias, the name given to the scenario where we take action in the hope of producing some result and, if that result is obtained, assume that it was our action which caused it.  There are lots and lots of common examples – taking Echinacea for the common cold (or homeopathy for that matter), or arnica for a bruise.  These things may or may not work, but because we expect them to have an effect, when the inevitable happens and our cold clears up we attribute the result to the remedy.

This is also a great example of the very similar concept of “regression to the mean”, which simply means that lots of things follow a predictable course, and tend to get better on their own.  There’s a subtlety here too: we usually reach for a remedy when symptoms are at their worst, and from that low point improvement is the most likely direction regardless of what we do.  Everyone knows that a cold will go away whether you treat it or not, but if you did happen to take that Echinacea tablet, isn’t it tempting to think that you helped yourself get better?
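To see just how persuasive this combination can be, here’s a toy simulation (in Python; the numbers are invented, purely for illustration). Symptoms vary randomly from day to day with no trend at all, and our imaginary patient takes a remedy on the bad days, then judges it by how they feel the next day:

```python
import random

random.seed(1)

# Invented numbers, purely illustrative: each day, cold symptoms score
# 1-10 completely at random, with no trend and no treatment effect at all.
days = [random.randint(1, 10) for _ in range(100_000)]

took_remedy = improved = 0
for today, tomorrow in zip(days, days[1:]):
    if today >= 8:                     # feeling rotten: take the remedy
        took_remedy += 1
        improved += tomorrow < today   # "it worked!" if tomorrow is better

print(f"'The remedy worked' on {improved / took_remedy:.0%} of occasions")
# Prints roughly 80%: because the remedy is only taken on the worst days,
# the next day is almost always better. That is pure regression to the
# mean, with no treatment effect whatsoever.
```

No medicine in sight, and yet four times out of five our patient would swear the remedy helped.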

And that’s why anecdote is not reliable; these and many, many other biases and problems come into play, so that in most cases we have no idea if the “treatment” worked, or sometimes even if the person was sick in the first place.  A fellow called Dr Moran, a retired surgeon from Australia, has a really rather lovely little website that gives us some tools to try and remain vigilant against this kind of thing.  He has an article called How to Evaluate a Cancer Cure Testimonial, but really it just takes a few minor tweaks to make it into “How to evaluate a medical anecdote”.

So here are my golden rules:

  1. Was the person definitely ill, as shown by reliable tests, when the treatment was started?
  2. Did they get better, as judged by the same tests?
  3. Was the advocated treatment the only one used?

I haven’t really covered that third point here, but it’s probably self-evident: if you did more than one thing to help yourself get better, how do you know which one worked?

And that’s it.  It’s perhaps not particularly important if you’re just thinking about taking a herbal medicine for the common cold, but in other cases it might be.  I’d urge anyone thinking about investing significant resources into a treatment (whether time, money, emotional input, or anything else) where the only evidence is an anecdote, testimonial, or similar to think through those three points.  Be honest with yourself about the answers.  You might just save yourself some cash, or some heartache.

Plain language summaries – an update

Thank you for all of the responses to my post last Friday, describing my colleague’s idea for plain language summaries to be included in journal abstracts.  The reaction was pretty positive, with some welcome constructive criticism too. 

That post was written very much on the spur of the moment (during my lunch break just after the idea was suggested, to be precise!), so after a little time to calm down and cogitate I’ve decided to run something of a pilot scheme, or a proof of concept.  I’ve created a new blog category over to the right there called “Plain language summaries”, and I’ll be adding as many abstracts there as I have time to get through (disclaimer: that might not be many). I’ll kick things off with a general discussion of the strengths and limitations of abstracts (including how they can be misleading in their own right), and then hopefully there will be a steady trickle of content. 

So once again watch this space, and if anyone has any ideas or comments on the feasibility of what I’m trying to do here (or wants to help!), do get in touch.  I know I already have one recruit in the form of the lovely Hayley (from A Healthy Dose of Skepticism), but this is definitely a case of the more the merrier!

A wild idea to improve science communication

As I discussed in my last post, science communication is really important.  That’s especially true when it comes to healthcare, because the way we communicate medical science affects the decisions people make about their own health.  As Ben Goldacre points out in his book Bad Science, there is almost no teaching in school science lessons about risk or other real-world science, yet something like half of the science stories in our national press are medical ones.

So what can we do about it?  One obvious answer is to teach these things in a useful and interesting way in schools, but that’ll take time.  In the meantime, a colleague of mine had a great idea today: get the biomedical journals to help us by including a plain language summary.  This isn’t a particularly new concept, and in fact the Cochrane Collaboration already does it on all of their systematic reviews (here’s a recent example), but it’s a simple thing that might help increase public understanding of basic science.

Let’s look at an example to see how it might work.  A good sample paper was published online in the New England Journal of Medicine (NEJM) on the 1st of October, entitled “Trastuzumab Emtansine for HER2-Positive Advanced Breast Cancer”.  The story was widely reported (by The Daily Telegraph, Daily Mail, Daily Mirror, and Channel 4 News among others), and also covered by the ever-excellent Behind the Headlines.  The abstract of the original research, as is usual, is publicly available both on the NEJM’s website, and at PubMed. Abstracts are designed to give an overview of the important points of a journal article, and they’re great if you’re comfortable with the language used and are aware of their limitations.

But what if you’re not a doctor, scientist, pharmacist, or someone otherwise used to reading stuff like this?  A lot of the language and concepts used in abstracts are completely meaningless unless you have the necessary training and experience to interpret them.  Taking the story above as an example, the abstract contains the following wonderful sentence:

Among 991 randomly assigned patients, median progression-free survival as assessed by independent review was 9.6 months with T-DM1 versus 6.4 months with lapatinib plus capecitabine (hazard ratio for progression or death from any cause, 0.65; 95% confidence interval [CI], 0.55 to 0.77; P<0.001), and median overall survival at the second interim analysis crossed the stopping boundary for efficacy (30.9 months vs. 25.1 months; hazard ratio for death from any cause, 0.68; 95% CI, 0.55 to 0.85; P<0.001).

Yes, that’s all one sentence.  It contains 77 words (the recommended maximum sentence length to make text readable is 20-25 words), not to mention words and concepts that will be alien to most people.  What’s a confidence interval?  A hazard ratio?  A stopping boundary for efficacy?  For the intended audience that sentence is chock-full of useful information; for Joe Average it’s useless.
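As it happens, the translation from that sentence into plain language is mostly simple arithmetic. Here’s a minimal sketch (in Python, purely illustrative) using the figures quoted in the abstract above:

```python
# Figures taken directly from the NEJM abstract quoted above (all in months).
pfs_tdm1, pfs_comparison = 9.6, 6.4    # median progression-free survival
os_tdm1, os_comparison = 30.9, 25.1    # median overall survival

# The plain language version simply reports the differences, rounded
# for readability.
print(f"Progression delayed by about {pfs_tdm1 - pfs_comparison:.1f} months")  # 3.2
print(f"Lived roughly {os_tdm1 - os_comparison:.1f} months longer")            # 5.8

# A hazard ratio of 0.65 means that, at any given moment, the rate of
# progression or death in the T-DM1 group was about 35% lower than in the
# comparison group; the 95% confidence interval (0.55 to 0.77) is the range
# of ratios most consistent with the data.
hazard_ratio = 0.65
print(f"Risk of progression reduced by roughly {1 - hazard_ratio:.0%}")
```

That’s how a summary arrives at “about three months” and “roughly six months”: the hard part isn’t the arithmetic, it’s choosing which numbers matter and saying so in plain English.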

So what if journals started adding a bit?  What if the usual structure was Background, Methods, Results, Conclusions, Plain language summary?  We wouldn’t need the full abstract to be “translated”, just 2-3 sentences that give the reader the gist of the meaning.  For the breast cancer story we’ve been concentrating on, I’d suggest something like the following:

This trial looked at a drug called T-DM1, which is designed to treat a type of breast cancer called HER2-positive breast cancer. T-DM1 was compared to a combination of two drugs that are already available.  It found that cancer progression was delayed by about three months in women who received T-DM1.  These women also lived roughly six months longer.

That’s a first attempt and just an example; I’m sure people more skilled with words than I am could improve it vastly.  But hopefully it illustrates the point; it takes relatively little effort to make the key points of a clinical trial much more accessible to the general public.

So what to do about it?  Well, I plan to pick some key biomedical journals and just ask them nicely.  I’d like you to join me, because we all know that a crowd of people making noise is more influential than one person alone.  To make it easier I’ll make a list of contact details in an update to this post, as soon as I have time – hopefully this evening.  If you have any suggestions for publications we should write to, leave a comment or send me a tweet (@Skanky_fish).  If you have any ideas to improve this little scheme, please do likewise.  It’s a very simple idea, and hopefully one that could make a small difference in the public understanding of medical science.

Why is science communication important?

Science communication is, in general, done very poorly in our society.  Newspapers like snappy headlines because they’re catchy, and they sell papers.  The public likes snappy headlines because they’re eye-catching, and easy to digest.  But what if that’s all you see?  What if you don’t have time to read the paper, but catch sight of the headline?  A recent example is the news story that “Frozen chips are a cause of cancer”, as reported by The Daily Telegraph, the Daily Mail, the Metro and the Daily Express.  If you just saw that headline you’d probably be frightened, although if you read the articles you might be comforted to some extent.  It turns out they’re reporting the results of a study that found that frozen chips can contain a chemical called acrylamide, which is known to cause cancer in mice.

Or, worse than just reading the headline, what if you read the whole article in the Telegraph, which doesn’t point out that there are uncertainties around the cancer-causing properties of acrylamide?

That’s just one recent example, taken from the excellent Behind the Headlines section of NHS Choices (if ever you see a medical story in the press that you find disturbing, or if you just want to know more, it’s always worth checking Behind the Headlines for a balanced write-up of the story). But hopefully that example illustrates why good science communication is important.  If you only get part of the story, or you get a distorted version of it, there is a very real risk of harm.  In this example the harms are pretty minor – you might not get to eat as many chips as you’d like (and it’s very easy to argue that that’s a benefit, not a harm), and you may suffer some anxiety or stress over chips already eaten.

But what if the story’s more important?  What if you read the headline about a link between deafness and painkillers, and stopped taking your pain medication (when actually it’s only regular use, and hearing impairment rather than deafness)?  Or if you read that altered sleep patterns are a warning sign of Alzheimer’s and started to self-diagnose and panic (when actually this study looked only at mice, and not humans)?  The possibilities for harm are very real.

And that’s why good science communication is so important.  One of the things I hope to do with this blog is take a leaf out of Behind the Headlines’ book, and try to present important scientific research in a reader-friendly way.  I’ll be starting with the recent story that genetically modified corn causes cancer in rats; there’s been a lot of coverage on that, but because it’s something I feel strongly about I thought I’d stick my oar in too.  It might take a couple of days (the paper is complex) but watch this space!