Should you believe the fitness hype?

I see this over and over again among otherwise very intelligent people: an odd belief that the latest “big” thing in exercise or weight loss will be a magic bullet that suddenly brings them the body they’ve always thought they should have. Zaggora hot pants (Burn more calories!), Skechers Shape Ups (reduce cellulite!), green coffee extract (100% natural!) – the list is basically endless. Leaving aside the notion that this will somehow make them happy (for I haven’t the knowledge or skills to even begin to tackle that), why do these bright people fall for it?  I can’t answer that either. It’s potentially very harmful though – this tweet from @nchawkes says it rather well:

People end up spending frightening amounts of time, money and energy on these promises, and even when there’s temporary success (often due to diving into a new regime with genuine enthusiasm, in my totally-un-evidence-based opinion) ultimately there’s stagnation at best, failure or regression at worst. These things are hugely destructive to body image and overall self-image.

So if I can’t explain the fascination with these things, the least I can do is provide a small extra weapon in the battle against profiteering and misinformation in the fitness world.  (Aside: it’s worth noting that much of the misinformation is spread amongst well-meaning friends, just trying to help one another; this type is just as difficult to address as any other dearly-held belief).

My first pearl of wisdom is hardly novel: anything that seems too good to be true, is. The cold hard truth is that you can’t permanently change your body without permanently changing your diet and lifestyle; they needn’t be massive, life-altering changes, but they must happen. You also can’t permanently change your body by throwing money at it instead of good quality food and exercise (unless we’re talking surgery; that’s pretty permanent).

My second piece of advice is: apply critical thinking. Is there something you’re naturally skeptical about, or distrustful of? Apply that same level of suspicion to diet and lifestyle advice. New device guarantees weight loss in one workout? Great. What’s the mechanism? Does it seem plausible? Is it more likely that it’s just helping you dehydrate slightly, so you lose water weight through sweat? Never ever forget that water’s heavy; 1kg (2.2lb) per litre to be precise. Doubt everything.
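
As a quick illustration of just how far sweat alone can move the scales, here’s a tiny back-of-envelope sketch. The sweat figure is an assumption I’ve picked purely for the example – real rates vary hugely between people and workouts:

    # How much scale weight can sweating alone account for?
    # The sweat volume below is an illustrative assumption, not data.
    WATER_KG_PER_LITRE = 1.0  # water weighs 1 kg (2.2 lb) per litre

    sweat_litres = 1.5  # assumed loss from one hard, hot workout
    kg_lost = sweat_litres * WATER_KG_PER_LITRE
    print(f"{sweat_litres} L of sweat = {kg_lost} kg ({kg_lost * 2.2:.1f} lb) 'lost'")
    # -> 1.5 kg (3.3 lb) "lost" in one session -- and straight back on
    #    again as soon as you rehydrate.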

Thirdly, and maybe most importantly (and predictably), demand evidence. Good quality evidence at that. Be ruthless. Be picky. Crucially, don’t accept anecdotes. These are everywhere in weight loss fads, to the point that I feel they’re worthy of a specially-adapted version of the anecdote rules:

  • Did the person gain the advertised benefit, and maintain it?
  • Was the advocated treatment the only one used?
  • If it’s really so good, why aren’t doctors and fitness professionals everywhere advocating it?

I’m hoping to look at some individual claims in more detail in future posts, but in the meantime this one will at least serve as a cue to get you thinking about the way you look at claims in the weight-loss industry.


Homeopathic Harms Vol 4: OK, there’s SOME evidence

Last time I discussed the problem of missing evidence of harm in homeopathy trials and consequently in systematic reviews.  This time, I’m going to discuss some evidence of harm that we DO have. Sadly, it’s not comforting.

In December 2012, a systematic review of the adverse effects of homeopathy was published in the International Journal of Clinical Practice (aside: for a quick explanation of systematic reviews and adverse effects, take a look at volume 2 in this blog series).  The authors of this review searched five databases of medical literature totalling nearly 50 million published articles (though likely with considerable overlap), and found just 38 articles that discussed case reports and case series of adverse events with homeopathy.

It’s worth noting at this point that if systematic reviews are the pinnacle of the evidence pyramid, case reports and case series are somewhere towards the middle or bottom, depending on who you ask.  They’re not ideal, because they’re not rigorous – they rely on someone not only noticing an adverse event and linking it to homeopathy, but taking the time to sit down and write about it and submit it to a journal.  Then of course they’ve got to find a journal willing to publish it.  If any of these steps don’t happen, there’s no published evidence for the rest of us to base our decisions on.  So if our systematic review found 38 published reports, the obvious question is “how many were never recognised, written up, or published?”  We’ll never know the answer to that.  Sadly, in the absence of high quality reports of harm from the published clinical trials, this is the highest level of evidence we have.

Back to the review.  The 38 retrieved reports contained information relating to 1,159 people from all over the world.  Surprisingly, only 17 of the reports related to indirect harms – the results of substituting conventional care with homeopathy – although some of those indirect harms were severe.  Several people were admitted to hospital (including intensive care) due to replacing their conventional medicines with homeopathy, at least one was left with permanent effects, and one person died.

That leaves 1,142 people who suffered *direct* adverse effects as a result of using homeopathy.  This seems rather counter-intuitive, and I’m at a loss to explain many of them given that your average homeopathic remedy contains precisely no active ingredient.  The authors of the review suggest that perhaps allergic reactions or ingestion of toxic metals (like arsenic or mercury) might be partly to blame.  They also suggest that low dilutions of remedies might be a potential source of adverse effects, but point out that the vast majority of these reports were associated with remedies at 12C potency or below.  To be clear, 12C is the dilution factor at which the chance of a remedy containing even one molecule of the original parent substance is effectively zero.
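
For the arithmetically inclined, here’s a rough sketch of where that 12C cut-off comes from, generously assuming you start with a full mole of the parent substance:

    # Back-of-envelope check on the 12C claim.  Each "C" step dilutes by a
    # factor of 100, so 12C means a total dilution of 100**12 = 1e24.
    AVOGADRO = 6.022e23  # molecules in one mole

    dilution_at_12c = 100 ** 12
    starting_moles = 1.0  # generous assumption, purely for illustration
    molecules_left = starting_moles * AVOGADRO / dilution_at_12c
    print(f"Expected molecules remaining at 12C: {molecules_left:.2f}")
    # -> about 0.6, i.e. better-than-even odds that not a single molecule
    #    of the parent substance remains; higher potencies only get emptier.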

But whatever the mechanism, it seems clear that the review provides evidence of direct harm being caused by homeopathy.  Some of these harms were reported simply as “mild”, with no other details offered.  Some were potentially very distressing, like dermatitis, hair loss, and migraine.  Some were very serious indeed, including anaphylaxis (life-threatening allergy), acute pancreatitis, cancers, and coma.  Once again the consequences of the effects included hospitalisation, admission to intensive care units, and death.  For a treatment modality generally touted as totally safe, that’s a pretty alarming set of side effects.

So what can we learn from it?  There’s a valid argument to be made that there’s little point conducting more randomised controlled trials of homeopathy, because all of the good quality ones end up showing the same thing: no benefit over placebo.  But where more trials are conducted, we should be demanding that all adverse effects are collected and reported in the same manner as trials of new medicines.  Case series and reports are not proof of causation, but there is a bulk of evidence here that is concerning, and which should be addressed.  The best way to do that is in good quality trials.

In the meantime, is there anything else we can do?  Yes there is – in the UK at least.  The medicines regulator in the UK, the MHRA, runs the Yellow Card Scheme.  This is a mechanism by which anyone can report any side effect they experience after taking a medication.  I would strongly urge anyone who has suffered an adverse event after using homeopathy (or who knows someone who has) to visit www.mhra.gov.uk/yellowcard.  It’s quick and simple, and will help make remedies safer for everyone.  Similar schemes will be coming into effect throughout the EU soon, but if you live elsewhere please check and see if there’s anything similar.  We need all the data we can get!

Homeopathic Harms Vol 3: Poor Advice

The third post in the blog series @SparkleWildfire and I are writing on the harms of homeopathy is now online!  Here’s a little taster:

Indirect harms due to homeopathy can, as we’re trying to cover in these posts, come in various different guises. In my opinion, there is none more dangerous than this: poor advice from homeopathic practitioners.

To set yourself up as a homeopath in the UK, you don’t need any medical background. You also don’t need to register with any regulatory bodies or undergo any standardized training. Medical homeopaths, i.e. doctors who practice it on the side, are of course regulated by the GMC, but your common or garden variety homeopaths could basically be anyone.

To read the rest, head on over to A Healthy Dose of Skepticism.


Does homeopathy have a place in therapy?

Given the current blog series I’m collaborating on regarding the potential harms of homeopathy, I thought it might be useful to stop for a moment and discuss its appropriate place in therapy. Do I believe that informed, consenting adults should be able to choose homeopathy as part of a treatment regime? Yes.

Are most people fully informed, and therefore able to give full consent? No, I don’t believe they are.

If people are making their treatment decisions based simply on assertions like “it’s safe” or “it’s gentle and natural” or (worst of all) “it works for me”, they are not fully informed. (To see why “it works for me” isn’t adequate, take a look at my post on anecdotes). This creates an ethical problem that should be insurmountable for any decent healthcare provider.

The evidence in favour of homeopathy simply does not reach the standard that we demand of conventional medicines. The evidence that it has the potential to cause harms (as we are showing with the Homeopathy Harms blog series) is very real.  Does a patient tend to feel better after seeing a homeopath?  Probably.  In these days of seven-minute GP consultations the chance to sit down for an hour with someone who wants to listen, and dig deeper, and really *help* you is probably a lovely thing.  Should we mistake that for thinking that homeopathy is a beneficial discipline? No.  Should we allow double standards by accepting lower quality evidence for homeopathy (or any complementary medicine) than we do for conventional medicine? No way.

But knowing all of this, knowing that the most “potent” homeopathic remedies have precisely no active ingredient, that there is no evidence of benefit beyond placebo effect, that at best they’ll derive no therapeutic effect and at worst they may experience serious side effects, should an adult be allowed to choose homeopathy for themselves? Sure.  Do many users of homeopathy meet these basic criteria for informed consent? I very much doubt it.

Homeopathic Harms Vol 2: where’s the evidence?

We often harp on about the evidence for homeopathy working or otherwise, and I’m not going to touch on that here, because it’s been covered beautifully by many more eloquent writers than me.  What you don’t often see, though, is comment on the evidence for homeopathy doing harm.  In the last post in this series the lovely @SparkleWildfire touched on medicalisation, an indirect harm that’s very real but tough to quantify; but what about direct harms?  I’m glad you asked…

In conventional medicine, randomised controlled trials are the best kind of study we can do of a drug to see if it works and if it’s safe.  What maybe isn’t mentioned quite so often is that there’s an even *better* form of evidence – the systematic review.  These are produced when someone sits down to do the very tough but remarkably important job of finding every single scrap of evidence they can on a given topic, and pooling it all together to try and get closer to the definitive answer.  The result is a document that represents the best evidence possible for how well a drug (or anything else, for that matter) works, and how safe it is.

One of the biggest and most respected sources of these systematic reviews is the Cochrane Collaboration, who cover all areas of medicine.  Happily, they also have a few reviews related to homeopathy, and that seems as good a place to start as any.  The most recently published is:

 Homeopathic Oscillococcinum® for preventing and treating influenza and influenza-like illness

The authors searched multiple databases of medical literature, covering a time period dating back to the mid-60s and all the way up until August 2012.  That’s a lot of literature.  Out of all the results they found six randomised, placebo-controlled trials of Oscillococcinum that were similar enough to be directly compared.  Since we’re not really interested in efficacy in this review, I’ll skip straight to the safety part: out of these six trials, including a total of 1,523 people, there was one reported adverse event.  One.  It happened to be a headache.  Let’s stop and think about that for a moment.

A good quality randomised controlled trial collects every single adverse event that happens to every single patient.  And the use of the term “adverse event” is very deliberate, because it includes absolutely everything unexpected and unwelcome that happens (and here’s the key part) whether or not it’s likely to be related to taking the drug.  That might sound counter-intuitive, but the reason is simple – we want to pick up every possible side effect of drugs, and sometimes side effects are…weird.  So it might sound odd to include as an adverse event that someone got hit by a bus, but what if the drug they were taking made them dizzy, or confused, or clumsy?  It’s not unreasonable to suggest that any one of those things could end up in getting you involved in a traffic accident.  So every single little thing is recorded, and once the trial is over you do some sums to work out the key question – are these things *more likely to happen in the people who took the drug*?  If 20 people broke a leg but they were equally spread out among the trial groups then nothing further needs to be said; if 19 of them were on the drug being studied then there might be something to worry about.  The flip side of that of course is that if 19 were in the placebo group, you might want to wonder if the drug is (perhaps unintentionally) promoting better balance and co-ordination, for example (or if everyone in the placebo group was a keen but inept snowboarder).
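
To make those “sums” concrete, here’s a minimal sketch of the sort of comparison you’d run at the end of a trial. The counts are invented purely for illustration, echoing the broken-legs example above – they’re not from any real trial:

    # Are adverse events more common on the drug than on placebo?
    # All counts below are hypothetical, purely for illustration.
    from scipy.stats import fisher_exact

    drug_events, drug_total = 19, 500       # e.g. 19 broken legs on the drug
    placebo_events, placebo_total = 1, 500  # vs. 1 on placebo

    table = [
        [drug_events, drug_total - drug_events],
        [placebo_events, placebo_total - placebo_events],
    ]
    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")
    # A very small p value flags the imbalance as unlikely to be chance alone.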

Is that one single adverse event out of over 1,500 people taking Oscillococcinum starting to look fishy yet?  What about if I drop in the snippet that some of the people involved (327, to be precise) took the remedy every day for four weeks, to see if it stopped them from getting flu in the first place?  How many times in four weeks would an average, healthy person experience something that you could call an adverse event – a headache, a tummy upset, indigestion, a strained ankle, a touch of insomnia?  I’ve had three of those things in the last 24 hours, and I wouldn’t say I’m a particularly remarkable individual.
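
To put a very rough number on it (the weekly rate here is pure guesswork on my part, for illustration only):

    # If a typical healthy adult notices even one minor "adverse event" a
    # week (a headache, a poor night's sleep...), what would we expect from
    # 327 people over four weeks?  The rate is an assumption, not data.
    people, weeks, rate_per_person_per_week = 327, 4, 1
    expected = people * weeks * rate_per_person_per_week
    print(f"Expected events: ~{expected}; reported across all six trials: 1")
    # -> over a thousand expected vs. one actually reported.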

So hopefully you can see from this that there’s simply a huge, yawning hole in the evidence about safety in homeopathy.  There are ways and means to address this (though they’re far from perfect), and I’ll address one of those in my next post in this series.

Homeopathic Harms Vol 1: Medicalisation (Cross-posted from A Healthy Dose of Skepticism)

As discussed in my previous, brief post, this is part of a series of blog posts written by me and my good friend HJo.  This piece is cross-posted from her blog A Healthy Dose of Skepticism; look out for more to follow soon!

In February 2013, my friend Nancy and I delivered a Newcastle Skeptics in the Pub talk entitled Homeopathy: Where’s The Harm? As a follow up to this, we’ve decided to write a series of blog posts about a number of points we covered in the talk. Here is the first:  

Doctor’s appointments: often you feel like you’re in and out before you know it, and they can’t get you out the door quick enough. They have a target number of minutes to spend with each patient, and sometimes you can feel like they don’t have as much time as you’d like to discuss all the things you want to with them.

There is, then, one aspect of homeopathic practice which can be superior to that of conventional medicine: the consultation. A homeopath might spend an hour or more assessing each individual, not just asking about particular symptoms but about their personality as well, how they think and feel about the world. I’ve never been to see a homeopath, but I’d imagine this is really valuable to a patient, particularly those with minor mental health complaints. I know myself that when I’ve been to see a good GP who I feel has really listened to me, I leave feeling a bit better already.

I suspect that the consultation itself may be part of what provides benefit to patients, rather than the sugar pills that are given out at the end of it. I’m not aware of any evidence that compared individualised homeopathic treatment to the OTC stuff though, which would be the only way to tease out and quantify any benefit from the consultation.

So what’s the problem here? If a consultation with someone who appears to listen to you and care makes you feel better, where’s the harm in that? The sort of subtle, indirect harms that we’ll be discussing in this series of posts are often theoretical and would be very, very difficult to assess via hard, clinical evidence, so you’ll have to bear with me while I discuss them with you and see if they make sense at the end of it. Consider the following story:

Imagine I’m quite an anxious person (in actual fact I am, so it doesn’t take that much imagining for those who know me). Imagine I’m particularly anxious at the moment because I maybe have a public speaking event (something like Skeptics In The Pub, say!) to deliver in a few weeks’ time. I might be finding it hard to sleep, I find I’m worrying about it quite often, and getting some physical symptoms – my heart is beating quite fast at times, say, and my stomach hurts, but it’s nothing too serious.

I go to visit a homeopath (admittedly, this would be an unlikely thing to do if I was actually talking about myself) who takes time to discuss with me my problems. I get on well with them, and feel like they are really listening to me. During the discussion, I find that vocalising my anxieties helps me to rationalise them and my fears are allayed somewhat. Just the act of talking about it makes me feel better- in other words, the homeopath is delivering a talking therapy service to me. By the end of the consultation, I’m already feeling more in control of my anxieties, yet I’m still given some tablets to take home, and I dutifully follow the instructions I’m given.

As I’ve discussed elsewhere, there is a stigma about mental health issues. This also, unfortunately, extends to talking therapies too. It’s quite likely that some people would be happier to declare “I’m seeing a homeopath” than “I’m seeing a counsellor” in front of their friends or acquaintances. The handing over of the sugar pills at the end of the consultation will no doubt suggest the talking bit is more “justified”, and they can convince themselves that they’re not mad, or the sort of weak person who would have to resort to a talking therapy. And thus, the stigma is reinforced. Talking therapies shouldn’t be something to be ashamed of. You don’t need some inert sugar pills to justify and hide the fact that, now and then, you just need to be able to talk to someone about your problems or feelings.

There are wider issues with this kind of thing too. The visit to the homeopath has made me feel better. I’ve been to see someone, left with some pills in my hand, and I’ve improved, reinforcing the fact that I feel better when given something to take. Let’s say that in the next few months, I feel a bit rubbish because I’ve had a bit of a cold and I’m left with a cough that’s been there for a couple of weeks. I go to see my Dr, who tells me that my chest is clear, and the cough should clear up of its own accord. However, I expect to get something out of the visit – I don’t want to leave the surgery with no pills in my hand, as I know that last time I left a consultation about my health I was given pills at the end of it and I felt better. It’s left to the Dr to explain to me that I don’t need antibiotics, and this can be a notoriously difficult thing to do. Some Drs might relent and give me a prescription for an antibiotic, contributing to the catastrophic situation we’re in now with antibiotic resistance. If the Dr doesn’t give me a prescription, I’m left with a bad taste in my mouth and a bit of mistrust in the conventional health care system. ‘Next time I’m feeling ill’, I think, ‘I’ll go back to that homeopath. They take me seriously because they gave me pills’.

And so the cycle goes on….

H Jo

Homeopathic Harms: A blog series

Recently my good friend and I gave a talk to the Newcastle chapter of Skeptics in the Pub on the harms associated with homeopathy.  It seemed to us that while the perceived benefits (or otherwise) are often covered in great depth, no one really looks at the harms.  The response to the talk was great, and we both felt like we covered some things that could do with immortalising in a form that’s a little more…permanent.  With that in mind, we’re teaming up to write a series of blog posts on the subject.  You can find HJo’s posts over at A Healthy Dose of Skepticism (and mine will of course be here), but we’ll cross-post the first one or two and share links to the rest for ease of reading.  Hopefully we can bring something new to the table, or at least make some people stop and think twice about the implications of homeopathic remedies.

EDIT: The first two posts in the series are available for your entertainment!

Volume 1: Medicalisation

Volume 2: Where’s the evidence?

Plain language summary: calcium supplements and heart attacks

The research:

Calcium supplements with or without vitamin D and risk of cardiovascular events: reanalysis of the Women’s Health Initiative limited access dataset and meta-analysis

The summary:

This trial took information from 36,282 women who had been through the menopause, and looked at them to see whether taking calcium supplements made them more likely to have a heart attack.  Half of the women were given calcium and vitamin D supplements, while half were given a placebo (sugar-pill).  The trial found that taking calcium and vitamin D increased the risk of heart problems slightly, including heart attacks.  Some women took their own personal calcium supplements as well as those provided by the study, and these women had no increased risk of heart attack.

Re-analysis of some older trials found that calcium and vitamin D increased the risk of heart attacks and strokes.  The way that calcium supplements are used should be examined, to see if change is needed.

The caveats:

This paper appears to find that women who take the highest amount of calcium (their own tablets plus the ones provided by the study) have no increase in risk compared to women who don’t take any calcium at all.  If this were a true effect, we would expect women who take the most calcium to have the highest risk.  Other authors have published papers that find no evidence of risk with calcium and vitamin D supplements.

This paper is discussed in more detail in this blog post.

Calcium and heart attacks – an example of bad science

Preamble: the following post contains discussion of terms that aren’t commonly used in everyday language.  I’ve attempted to explain them in the glossary, but some are probably still as clear as mud.  Please ask questions if something’s totally confusing, or give me suggestions on how to make the concepts clearer.  And with that, on with the story…

I talked briefly last week about how abstracts of scientific papers can be misleading, so we need to read the whole paper in order to critically appraise it. Sadly, sometimes when you go through that critical appraisal process you find that the claims made in the abstract just don’t hold up. Critical appraisal is a broad topic with lots of components, so in this post I’m just going to focus on one aspect: do the numbers say what the authors claim that they say?

I’m going to take as my example a paper that I’ve read recently that actually really irritated me. Conveniently it’s open access, so anyone who fancies can go and read: “Calcium supplements with or without vitamin D and risk of cardiovascular events: reanalysis of the Women’s Health Initiative limited access dataset and meta-analysis”. As the title suggests, this paper has taken some large datasets and examined whether people who take calcium supplements are at higher risk of heart attacks and strokes. The conclusion of the abstract tells us:

Calcium supplements with or without vitamin D modestly increase the risk of cardiovascular events, especially myocardial infarction, a finding obscured in the WHI CaD Study by the widespread use of personal calcium supplements. A reassessment of the role of calcium supplements in osteoporosis management is warranted.

So how did they reach that conclusion? Let’s take a look. The data for this analysis comes from the calcium and vitamin D trial of the Women’s Health Initiative (also known as the WHI), which was designed to see if giving those supplements to 36,282 women who had been through the menopause would reduce their risk of hip fracture (they do, but the reduction’s not huge, and we can’t be sure that it’s not just due to chance). They later re-analysed the data, and found that after seven years the supplements had no effect on the risk of heart attacks or strokes. So far so good.

Bolland et al, the authors of the paper we’re looking at, had some concerns about this analysis – it turns out that just over half of the participants were taking their own calcium supplements. “Hang on!”, you might say, “how can we tell what effect calcium has, if some of them are taking their own calcium AND the stuff the study doctors gave them?!”. And you’d be right – this is potentially a big confounder.

So Bolland and chums took the WHI data, separated it according to whether the women were taking their own calcium or not, and then separated it again by whether they were then given study calcium or a placebo sugar pill. Confused yet? Me too, and we haven’t even really started. To simplify it slightly, here are the four groups of women we’ve ended up with:

A. Women who only took WHI calcium

B. Women who took nothing at all except placebo

C. Women who took their own calcium AND WHI calcium

D. Women who only took their own calcium (plus a placebo)

They then looked at lots of different outcomes or endpoints in these women. It’s normal practice in a trial to have one “primary outcome”, and maybe a few secondary ones. In this case, we have nine:

  1. Clinical heart attacks (that is, heart attacks where the person had symptoms, sought medical attention, and received treatment)
  2. Total heart attacks (includes all of the heart attacks in the group above, but also adds in ones that were only detected later by changes seen on ECG tests)
  3. Revascularisation (people who had a coronary artery bypass graft, or other procedure to promote healthy blood flow to the heart muscle)
  4. Stroke
  5. Combination of total heart attacks plus all deaths from coronary heart disease
  6. Combination of clinical heart attacks plus revascularisations
  7. Combination of clinical heart attacks plus strokes
  8. Combination of total heart attacks, plus all deaths from coronary heart disease, plus revascularisations
  9. Death from any cause

Seem a bit over the top? That’s because it is. There are very good reasons we normally only choose one primary endpoint for a trial, and maybe 3 or 4 secondary ones. The first is so that the authors can do some sums ahead of time, and work out how many people they need to enrol to answer the question properly. This is referred to as “statistical power” and it’s an important topic, but it needs its own blog post to do it justice. I’ll get to it one of these days. In any case Bolland et al had no control over how many people were enrolled, as they were using someone else’s data.

The second reason applies though, and it’s this. When a scientist does the sums at the end of a trial to figure out the results, usually they will include what’s known as a p value. The p stands for probability, and the p value roughly represents how likely it is that results at least as striking as the ones observed would turn up by chance alone, if the drug actually did nothing. Any time the p value is less than or equal to 0.05 (the same as 5%), we say the results are statistically significant; that is, they’re unlikely to be a fluke.

That was a rather quick and simplistic explanation, but the one thing you need to take away from it is this: if you ever see p = 0.05 written down, there is a 5% chance that a result like the one you’re reading could have appeared even if the drug did nothing at all. Put another way, that is a one in twenty risk of a false positive.

And we know that so far, Bolland et al have nine endpoints. Except…they don’t. As outlined above, they split the women into four groups, which I’ve called A, B, C & D for simplicity. Then they compared groups A and B for each of the nine endpoints, and they compared groups C and D for each of the nine endpoints. So actually, there are eighteen comparisons here.
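
Why does that matter? A quick back-of-envelope calculation shows what eighteen separate tests do to your risk of a fluke result (treating the tests as independent, which they aren’t quite, but it gives the flavour):

    # Chance of at least one "significant" result arising by chance alone
    # when running several tests at the p = 0.05 threshold.  Treats the
    # tests as independent, which is a simplification.
    alpha = 0.05
    for n_tests in (1, 9, 18):
        p_any_fluke = 1 - (1 - alpha) ** n_tests
        print(f"{n_tests:2d} tests -> {p_any_fluke:.0%} chance of a false positive")
    # ->  1 test  ->  5%
    #     9 tests -> 37%
    #    18 tests -> 60%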

For each comparison, a hazard ratio is reported. Simply speaking, in this case, the hazard ratio represents how likely a person in group A is to have a heart attack (for example) compared to someone in group B, after a certain amount of time. For example, the hazard ratio for heart attacks in group A vs. group B was 1.22. We know that the women took supplements for seven years, so the hazard ratio tells us that for these post-menopausal women, seven years of calcium and vitamin D supplements makes you 1.22 times as likely (that is, 22% more likely) to have a heart attack. Or does it? The next section tells you…maybe not.

A confidence interval is also reported for each result, which is a useful partner to the p value.  For the example hazard ratio above, the 95% confidence interval was 1.00 to 1.50, and that’s very interesting, because a hazard ratio of one means there’s no difference.  Because the 95% confidence interval includes one as one of the possible values, that tells us that maybe this effect isn’t as big as we thought – indeed, there may be no effect at all.
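
As an aside, you can roughly back-calculate a p value from a hazard ratio and its 95% confidence interval, working on the log scale (the standard Altman–Bland method). A quick sketch using the figures above:

    # Rough recovery of a two-sided p value from a hazard ratio and its
    # 95% CI (Altman & Bland's method, working on the log scale).
    from math import erf, log, sqrt

    hr, ci_low, ci_high = 1.22, 1.00, 1.50  # figures quoted above

    se = (log(ci_high) - log(ci_low)) / (2 * 1.96)   # SE of log(HR)
    z = log(hr) / se                                 # test statistic
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal two-sided p
    print(f"z = {z:.2f}, p = {p:.3f}")
    # -> p of roughly 0.05: right on the conventional borderline, exactly
    #    what a CI whose lower end just touches 1.00 implies.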

Right, enough pre-amble; what did they find?  This post is quite long enough, so I’ll save that part for next time.

(Lack of) Clinical trial reporting

Dear readers, I have neglected you.  Truth be told, I have half a dozen posts sitting in draft format, and no time to finish them off and present them to you.  So in the mean time, here’s a little nugget that will hopefully enrage you as much as it did me.

Missing clinical trial data is a huge problem in medicine.  The fact is that anyone can conduct as many trials of a drug as they like, but is under no obligation to tell anyone that they have done so, or to publish the results.  Various attempts have been made to correct this ridiculous situation, but it still goes on.  This means that whenever a drug is prescribed, despite the very best efforts of everyone from the doctor prescribing it to the medicines management teams who put it on the formulary, there is a very real risk that it either doesn’t work or is not safe.

This is a ludicrous, outrageous state of affairs, and you can read all about it in Ben Goldacre’s excellent book Bad Pharma.

Now, the EU appears to be making another attempt at correcting the situation, by proposing legislation requiring that the results of all clinical trials be reported directly to them.  “Great!” I hear you cry, “that means we’ll get all the data and can make good decisions!”  But actually, as the good Dr Goldacre pointed out on twitter yesterday evening, it means nothing of the sort.  Here’s the proposed wording for the new EU Clinical Trials Directive, which is available online for anyone to view: (skip to article 34, item 3, page 49 in the pdf)

Within one year from the end of a clinical trial, the sponsor shall submit to the EU database a summary of the results of the clinical trial.

However, where, for scientific reasons, it is not possible to submit a summary of the results within one year, the summary of results shall be submitted as soon as it is available. In this case, the protocol shall specify when the results are going to be submitted, together with an explanation.

I’m sure it’s very clear to everyone that the wording of the directive is so woolly that the average five-year-old could probably worm their way out of complying with it.

So there you have it.  This is the proposed new law governing regulatory oversight in the entire EU.  The first vote on it appears not to be ‘til April 2013, so maybe there’s time to kick up a fuss about this yet.  I’m currently investigating who would be best to get in touch with and hassle – when I find out I’ll share, and we can maybe get something done about this.