ECNP e-news
Message from the President - May 2014
Tuesday 27 May 2014

The reason I got interested in drug treatment in psychiatry is that in some people drugs make such a major difference. This clinical impression can be very powerful. Psychotic patients preoccupied by delusions and assailed by hostile voices can recover within a few weeks of starting to take a drug that blocks dopamine receptor function. When they then stop their antipsychotics, albeit slowly and with great optimism, a stereotyped set of crippling symptoms will often return within the next two years. There seems to me as little reason to doubt cause and effect as there would be to deny that psychedelic drugs can produce visual hallucinations and euphoria which wear off as the drug clears.

The treatment effects (and the psychedelic effects) were so obvious at the start of the psychotropic era that doing a double-blind randomised controlled trial (RCT) would have seemed a pointless addition to the evidence base. Things are rather different now. In fact, evidence-based medicine deliberately debunks clinical observation as opinion or, worse still, authority; it sits at the lowest tier of the evidence hierarchy. Instead we can only be really sure a treatment works if its effects survive meta-analysis, lumping together as many trials as possible to increase the number of patients studied.

I understand the reasons for wanting randomisation, blinding and large numbers. And this is great if the patients entering studies are fully representative, are treated for appropriate periods of time, and the endpoints are clinically meaningful. Our understanding that statins reduce the risk of death from cardiovascular disease represents a complete vindication of the approach; we could never have reached that position by simply observing our own patients.

However, for psychiatric indications, I believe the assumed value of RCTs has become greatly over-inflated by evidence-based medicine. Most RCTs in psychiatry are paid for by pharma companies. These companies must convince regulators that new drugs are better than placebo. So can they recruit representative patients? The ethical challenge which progressively reduces feasibility is the need for doctors to feel that trials have equipoise: in other words, that the treatments being offered are pretty much equally likely to work. That clearly sets up a conflict in studies where the comparator is placebo. The most ethical study becomes the one in which everyone responds or no one responds. The result appears to be very heterogeneous rates of recruitment and very heterogeneous results across sites in multi-centre trials. Moreover, the list of inclusion/exclusion criteria is often so long as to render the resulting sample highly atypical, and never representative of the most ill patients we actually see in practice.

Are the patients who are recruited treated for long enough? Well, time is money: many acute treatment studies in psychiatry are planned as snapshots of 6-8 weeks. Worse still, the artificial nature of clinical trials and the difficulties of recruitment mean that drop-out rates are high. This has disastrous consequences for the power to detect effects. Finally, our outcomes have been reduced to largely arbitrary counts of symptoms. These are measures almost never used by clinicians because they are tedious to obtain by interview and so offer little intuitive sense of what has happened to the patients.
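The cost of drop-out for statistical power can be made concrete with a little arithmetic. The sketch below is purely illustrative (the effect size, sample sizes and drop-out rate are invented, not drawn from any particular trial) and uses a standard normal approximation for two-sample power:

```python
# Illustrative only: how drop-out erodes power in a two-arm trial.
# All numbers are hypothetical, not taken from any real study.
import math

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d: float, n_per_arm: int) -> float:
    """Approximate power of a two-sided, alpha = 0.05 two-sample z-test
    for standardised effect size d with n_per_arm completers per arm."""
    z_alpha = 1.96                        # critical value, two-sided 0.05
    ncp = d * math.sqrt(n_per_arm / 2.0)  # non-centrality parameter
    return 1.0 - normal_cdf(z_alpha - ncp)

# A modest, antidepressant-like effect size d = 0.3 (hypothetical):
planned = power_two_sample(0.3, 175)        # 175 completers per arm
after_dropout = power_two_sample(0.3, 105)  # the same trial after 40% drop-out

print(f"power with full sample:  {planned:.2f}")
print(f"power after 40% dropout: {after_dropout:.2f}")
```

Roughly, a trial sized for 80% power to detect a modest effect can fall below 60% power once 40% of participants are lost, and that is before considering the biases introduced by who drops out and why.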

So we become conditioned to small effect sizes in RCTs. Indeed, those of us who have advised on or attempted to recruit into an industry-designed RCT are amazed when trials work at all, ever, to demonstrate that a drug has the predicted effect. Everything favours a null result and, not surprisingly, null results are therefore common.

Until evidence-based medicine became widely popularised, the failures of RCTs were regarded much as one regards private grief: of no great concern to a wider public. However, the unthinking elevation of the RCT to the position of gold standard has come back to bite us very badly, because it is assumed that the effect sizes in trials can safely be extrapolated to real life. With the inclusion of every failed trial in a meta-analysis, the trivially small effect sizes that often result become a reason to doubt whether drugs work at all. The question too seldom asked is: are we looking at the failure of the trials or the failure of the drugs?
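The dilution that pooling produces is easy to see in a toy fixed-effect (inverse-variance) meta-analysis. All effect sizes and standard errors below are invented for illustration:

```python
# Hypothetical sketch: inverse-variance (fixed-effect) pooling, showing
# how adding failed trials dilutes a pooled effect size. Invented numbers.
def pooled_effect(effects, ses):
    """Fixed-effect meta-analytic estimate: each study is weighted 1/SE^2."""
    weights = [1.0 / se**2 for se in ses]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5
    return est, se

# One clearly positive trial on its own:
print(pooled_effect([0.60], [0.15]))           # effect stays at ~0.60
# ...pooled with three null trials of similar precision:
est, se = pooled_effect([0.60, 0.05, 0.00, 0.10], [0.15] * 4)
print(f"pooled d = {est:.2f} (SE {se:.2f})")   # shrinks toward ~0.19
```

If the null trials reflect failures of design and recruitment rather than of the drug, the pooled estimate quietly averages the drug's effect with the trials' defects.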

Pharmacoepidemiology offers a long-neglected way of using naturalistic data to establish drug efficacy. For antipsychotics and mood stabilisers, this has been demonstrated very recently using Swedish databases (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(14)60379-2/fulltext). The outcome was documented violence. This is a great outcome because it is not uncommon in psychiatric populations, it is a proxy for severity of illness and it has obvious clinical and societal relevance. It can also be repeated (unlike death, for example). Patients act as their own controls before, during and after treatment, and observation periods can be long. The antipsychotics demonstrated 50% reductions in risk and the mood stabilisers 20% reductions in risk for patients during treatment. The effects of mood stabilisers were confined to patients with bipolar diagnoses. So the study demonstrated large effects of treatment and has the potential to identify specificity: mood stabilisers are often used in non-bipolar patients judged to be at risk of violence. Similarly large effects on the risk of violence have been published in an identical study of stimulants for ADHD (http://www.nejm.org/doi/full/10.1056/NEJMoa1203241).
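For readers unfamiliar with the within-individual design, the core comparison is simply each patient's event rate during medicated periods versus their own rate when unmedicated. The toy calculation below uses invented event counts and person-time; the published analyses are, of course, far more sophisticated, with stratified models and confidence intervals:

```python
# Hedged sketch of the within-individual comparison behind such studies.
# Event counts and person-time below are invented for illustration only.
def rate_ratio(events_on, years_on, events_off, years_off):
    """Incidence-rate ratio: (events per year on drug) / (events per year off)."""
    return (events_on / years_on) / (events_off / years_off)

# Hypothetical aggregate: 40 violent events over 400 patient-years on
# treatment versus 100 events over 500 patient-years off treatment.
rr = rate_ratio(40, 400.0, 100, 500.0)
print(f"rate ratio = {rr:.2f}")  # 0.50, i.e. a 50% reduction in risk
```

Because each patient serves as his or her own control, stable between-person confounders (diagnosis, personality, social circumstances) largely cancel out of the ratio.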

Commentary on this work largely missed the point because it concentrated on the violence. Its true relevance for anyone interested in applying neuroscience to psychiatry lies in the enormous reassurance it offers us that the medicines we develop and use are really very effective. Sometimes, when things are clinically obvious, they are right. At the very least, evidence-based medicine applied to psychiatry needs to adopt a more nuanced approach to the evidence hierarchy.


Best regards,

Guy Goodwin, ECNP President



Comments:

As a carer and also a psychopharmacologist, I was very impressed with this message. The points about recruitment of severely ill patients into studies (with their requirements for run-ins, elaborate screening etc.) are spot on: when faced with, say, an acute psychotic episode, none of this is appropriate or even ethical. So the study may be totally unrepresentative and possibly clinically unhelpful. The Fazel et al results are very interesting and future studies might perhaps look at all contacts with police, including complaints from neighbours, calls for help from patients experiencing distressing delusions etc., which in my (limited) experience is a useful marker of (un)wellness.

by: Sue Wilson (27/05/2014 17:32)