Sunday, 12 October 2014

The Shape of Training review: a bizarre analysis of ‘evidence’

I have taken the time to review the ‘evidence’ that the Shape of Training review has relied upon in order to make some rather sweeping recommendations about the future of UK medical training.  One would hope that such major reforms would have been based on solid evidence, analysed in a robust and systematic fashion.  All my comments relate to the Annexes and Appendices document that is linked here.

In the introductory section MMC is described “as a programme of radical change to drive up the quality of care for patients through reform and improvement in postgraduate medical education and training”.  This is plain wrong: MMC was an underhand attempt to introduce the sub-consultant grade while avoiding proper contract negotiations with the BMA.  The review’s distortion of the reality of MMC is either down to ignorance, or something rather more sinister.  The cherry-picking of themes from the Tooke and Collins reviews is also unfair and not representative of those reviews’ recommendations.  At this stage it is worth repeating the Shape of Training review’s proposed purpose:

“The purpose of the Review is to ensure that doctors receive high-quality education and training that support high-quality patient care and improved outcomes.”

One would therefore imagine that their evidence review would look for evidence that the changes they propose have proven benefits in terms of outcomes.  Let us see if they can support their claimed purpose with evidence.  Interestingly, a false dichotomy is proposed very early on:

“Theme 2 - Workforce needs: Specialists or generalists”

This is a false dichotomy and demonstrates a massively over-simplistic attitude; one size certainly does not fit all.  By proposing this false choice from the start, it is obvious that the review intends to try to push us down one avenue, but why?  The lack of any decent definition of a ‘generalist’ is strange; surely this needs defining from the start?  There is also a lot of talk of ‘credentialing’, but a hard definition of what this word means is lacking, almost as if the review intends to keep it as a flexible tool that it can use however it wishes at a later date.

Now to the evidence section.  The start is fine: it is made clear that much evidence is anecdotal and that many previous reviews of training have been non-systematic.  Sadly, things then start to go downhill when the discussion of ‘generalism’ and ‘integration’ turns out to be largely based on the work of ‘think tanks’ and some other potentially biased sources.  There is also a complete failure to define what ‘generalism’ and ‘integration’ mean, laying foundations of sand from the very beginning.
Sadly, despite making the reader aware of the lack of decent evidence available, the review then goes about reviewing the literature and the non-peer-reviewed work of various think tanks using some rather vague search terms and a thoroughly non-systematic methodology, one that is wide open to bias.  Amazingly, the review of evidence cites the GMC’s Good Medical Practice, hardly a robust and objective source of evidence.  So, in conclusion, the ‘evidence’ on generalism and integration has been cherry-picked from a wide variety of frequently non-peer-reviewed sources, and from sources that are open to significant political bias, such as think tanks.

The first major question asked was:

“What evidence is there for the effectiveness of UK postgraduate medical education? What factors impact on the quality of postgraduate medical education in the UK?”

The search strategy was systematic, but a systematic search is entirely pointless if the evidence is not then analysed in a systematic fashion, which it isn’t; it is analysed in a fashion that resembles cherry-picking.  Strangely, the review then proceeds to describe some cherry-picked qualitative studies relating to the impact of the EWTD and MMC.  Numerous other questions are posed and not answered, often because the evidence just isn’t there.  It is therefore particularly strange to see this lack of evidence summed up in such a skewed and positive fashion:

“The research and discursive literature on post-MMC postgraduate medical education presents a mixed, albeit mainly positive, picture. Concerns have been raised in the major reviews, surveys and smaller research studies about the impact of system demands and working regulations on the training opportunities available to trainees. These concerns come primarily from trainers who compare their personal experience of a previous system of training with medical education as it is now. There is little objective evidence to suggest that training opportunities and experiences are diminishing.”

I find this summary astonishing; there is a clear element of propaganda here.  The attempt to spin MMC positively is being repeated, and there is a hint of denial when it comes to acknowledging the obvious: MMC and the EWTD have hit experience levels and have impacted on training and patients.  It is contradictory to claim on one hand that ‘there is little objective evidence that training opportunities and experiences are diminishing’ while on the other hand deeming the literature on post-MMC medical education ‘mainly positive’, despite the lack of objective evidence to back that assertion up.

The reliance on GMC documents in the next section is interesting; there are repeated references to the GMC’s work in relation to the needs of patients, training and the service.  It is interesting that the author of this ‘review’ has worked closely with the GMC on several projects and, from what I can see, has not published on medical education at all.  She is also rather obviously not a doctor, a rather large problem when it comes to understanding the evidence concerning the training of doctors.  The second section concludes by again saying the evidence is not great:

“Whilst much has been written about the state of medical education in its current iteration, the evidence regarding whether it meets service, workforce, patient and individual doctors’ needs could be stronger. Much of the work that has been done has been opinion-based surveys.  There is a danger though, in using opinion as a proxy for hard evidence, particularly when current views may not have past counterparts against which comparisons can be made.”

The third section of the review aimed to compare the UK’s training systems to those of other countries; unsurprisingly, this section concluded:

“An evidence review of this size could not realistically absorb all of the published literature on PGME….This evidence review did not find any themes in the debates or developments that are underway elsewhere that are not yet taken account of in the Review.”

Bizarrely, the review’s third section asked: “What do key opinion formers and stakeholders consider to be the future of medicine - pressures, opportunities and developments?”  This is a very strange question to ask, and a strange group of people from whom to gather ‘evidence’ (opinion, really).  If the Shape of Training review were just about improving training for the best quality of patient care, why would the views of stakeholders be of any relevance as evidence?  They are not; this should be about evidence, not about what stakeholders want as a result of their various vested interests.  The opinion in this section was largely gathered from the King’s Fund, the Centre for Workforce ‘Intelligence’ (my inverted commas) and the RCGP.  The BMA gets a mention, but it is very brief indeed.  Unsurprisingly, the conclusions of this section read much like a piece of think tank dogma: “Regarding service demand, the shift in locus from hospital to community and the move to integrated care will impact on medical career options”.  In this way, stilted opinion has been transformed into evidence.

The final question is: “4: Does current UK postgraduate training give doctors the knowledge, skills and experience to meet future need for patient involvement in their care and treatment?”  Again, the search strategy seems open to bias and cherry-picking.  The question appears deliberately vague and non-specific, almost as if it had been designed to generate no solid evidence-based answer:

“No papers looked specifically at whether current postgraduate medical education prepares doctors for working with patients and the public in such a way as current trends dictate.” 

A strange comment, especially given that the term ‘way as current trends dictate’ has not been defined by the author.  How could the author even hope to define the ‘way current trends dictate’?  This is a massively subjective judgement that is open to huge amounts of bias.  There is much discussion of generalism, but very little mention of how vague and subjective ‘generalism’ is as a term.  Again, the conclusions are rather wishy-washy and the lack of evidence is mentioned: “There is a lack of research in this area, in terms of the impact of changes on the medical workforce, medical education and whole person approaches on patient outcomes and service quality.”

The integration section then reads much like a think tank review: “calls for increased integration at all levels and in all areas of health and social care have come from influential bodies”.  Rather than merely summarising who has called for the rather nebulous concept of ‘integration’, the author would have been better advised to actually define ‘integration’ and see whether there is any good evidence that integration, as defined, is of any benefit to anyone.  As the foundations have not been adequately prepared, this integration section reads like a biased piece of political propaganda, not like ‘evidence’: “An acknowledgement of integration as the direction of travel for UK health and social care is vital to the decisions made in the Review.”  Rather than assessing whether there is solid evidence of a benefit of integration to medical training or patient care, the review just accepts that this is the way politicians want to go and doesn’t question the dogma.

In terms of consultation, the review “asked 19 open-ended questions and allowed for free-form responses” and received fewer than 400 responses, from organisations and individuals combined.  Of note, fewer than a hundred consultants and doctors in training responded in total, hardly an impressive return.  The consultation questions were designed and analysed in a qualitative fashion, almost as if any objective quantitative analysis of opinion had to be avoided.  Strangely, the consultation summary frequently attempts to summarise opinion but consistently fails to provide any objective quantitative data to back up these assertions.  Some of the questions were rather leading in how they were asked, almost as if they were looking for a certain response.  The lack of definition of key terms such as ‘generalist’ was mentioned but not adequately addressed, which is strange given how much the review goes on to rely on this and other such poorly defined, vague terms.

Interestingly, this element of the consultation response, “most individuals and organisations argued that generalists would require a longer training period or reconstruction of training to capture the breadth of experiences needed to provide competent general care”, has been totally ignored by the review’s recommendation to shorten the length of training.  The opinion that training could not be shortened was also ignored: “however, many respondents warned general specialty training is not necessarily the shorter training option, particularly in craft or small specialties”.  Broadly, the consultation was highly flawed: not only was the response tiny, but the vague qualitative responses could be interpreted in any way the review saw fit, according to its preconceived ideas.  The consultation summary is much like a religious text: it is so vague and non-specific that it can be interpreted exactly as one’s preconceived ideas dictate.

Thus, overall, the evidence review has totally failed to provide any solid evidence base upon which to justify the massive overhaul of the structure of medical training that the review recommends.  The confused and almost haphazard way in which vague questions were posed and left unanswered does not lead to any evidence-based conclusions of note.  The review’s keenness to document the opinion of stakeholders as ‘evidence’, and its decision to include the work of potentially biased sources such as think tanks as ‘evidence’, are also highly flawed.  This methodologically dubious merging of objective published evidence with politically motivated opinion is suspicious in itself; it has allowed evidence-free political dogma to be accepted by the review as evidence-based fact.  In conclusion, the review does not give the impression that evidence has been objectively analysed to meet the long-term needs of medical training and patients.  In this context, the review’s recommendations appear misguided, politically motivated and evidence-free.

1 comment:

David Colquhoun said...

The impression that this post leaves is that educational research is in a very poor state. Too much of it seems to be in the hands of sociologists who think that "action research" (i.e. don't worry about control groups) can tell you something useful.