Tag Archives: history of medicine

So, Just What is the Point of the History of Medicine?

I remember reading Roger Cooter’s Writing History in the Age of Biomedicine around the start of my PhD. And I thought it was a strange book — not, I should stress, because there is anything untoward about its writing-style (in fact, the opposite: it reads brilliantly). Neither is there anything objectionable about its structure: though eight of the book’s ten chapters have been published before, Cooter has provided a neat little preamble to each, allowing him, now at the end of his career, to expose the intellectual/epistemic conditions that previously informed each essay (as he explains here). I thought this was very nicely done. Indeed, I thought that Writing History in the Age of Biomedicine expertly consolidates a number of issues concerning the purpose of the history of medicine in wider academia, and Cooter does well to imbue his argument with vigour and force. But I found Writing History in the Age of Biomedicine strange because of how I responded to it, for precisely what I admired about the book I also puzzled over. And as my reading has deepened over the past couple of years, and my range of influences grown, I’ve become more critical of Cooter — specifically, of what he identifies as the purpose of the history of medicine.

I have to admit something at the outset, though. Even on a third reading, it is hard not to be seduced by Writing History in the Age of Biomedicine. Cooter forcefully impresses upon the reader the importance of and need for academic history. Yet his position goes beyond a simple call for more and more research (or research for its own sake). Rather, he advocates a particular focus for academic history, for the discipline, according to Cooter, faces unparalleled threats from neoliberalism and the rise of neuroscience. The former, Cooter claims, not only imposes greater levels of scrutiny and exposure to ‘audit culture’ within higher education, but also insists on ‘never-ending growth and “economic progress”’. In so doing, it denigrates the study of the past (for why examine history if it has nothing to offer the ‘present-centric economist thinking about the future’?)[1]

Cooter’s criticism of neoliberalism is coupled with warnings of the threat posed by the turn to neuroscience in various disciplines, a turn he regards as both an extension of, and coincident with, neoliberal dogma. This, in part, helps to explain Cooter’s objection to the growing role of neuro-disciplines in the present century. But his aversion to neuroscience is also animated by what he identifies as the threat it poses to academic history. Cooter chides the arrogance of the ‘neuro enthusiasts’, their absolutism and their tendency to absorb academic history into their own neuroscientific paradigms. History, in such accounts, is only seen as useful when it furthers neuroscientific insight; it has no role in explaining how the neurosciences came to dominate in the 21st century, nor in otherwise challenging their existing orthodoxy.[2] This leads to the third of Cooter’s concerns — that many academics blindly acquiesce in the rise of neuro (e.g., through the study of affect or the emotions).[3] This, he argues, amounts to a presentification of the past, an overloading of it with our present-day penchant for neuroscientific explanation.

Cooter contends that a two-fold strategy is needed to extricate history from the ghetto it now finds itself in. Firstly, the humanities have to be sundered from the hard sciences if they are to offer the necessary critical interrogation of the latter. Wishy-washy interdisciplinarity (e.g., the medical humanities) will not do.[4] Instead, medical historians must vigorously agitate against the ‘reductive’ forces of the natural sciences and the narrative of unbroken progress that they are wedded to.[5] With recourse to the study of the past, medical historians must take up the mantle of social critique — ‘critical history’ — to challenge the current dominance of biomedicine in the 21st century.

Yet Cooter argues that historians, at the same time as they place a check on the neurosciences, must also reflect on the values that inform their own discipline. In other words, they must engage in rigorous self-policing — the turning, that is, of the critical gaze (normally reserved for the object of study) back onto the historian and her methodologies, concepts, frameworks, etc.[6] The historian’s ignorance of her own position, Cooter warns, leaves the discipline vulnerable to being side-lined or de-funded altogether in the face of wider influences — without a firm location within history, the historian is in danger of misunderstanding her role and the relevance of her discipline.[7] And without a critical understanding of historical epistemology (‘the constructedness of [historical] thought’), historians are in danger of losing themselves in engaging with the more powerful neuro-disciplines, of becoming swallowed up by the ‘new biological regime of truth’.[8] Hence why the ten chapters that compose Writing History in the Age of Biomedicine are prefaced with commentaries on the context in which they were produced: they are part of Cooter’s attempt to demonstrate how self-critique should look, how we should reflect upon (and thereby check) the various forces operating on our writings. Indeed, in language that is notably more open-ended, Cooter suggests that a greater engagement with one’s own position in history, and the impact of this on history-writing, may bring a greater degree of ‘honesty and credibility’ to a form of research that still (within some quarters) seeks to perpetuate the ruse that history can be written ‘objectively’ and value-free.[9]

This call for self-reflection is to be lauded; in my opinion, it remains one of the most memorable points made in Writing History in the Age of Biomedicine. Historians do need to better reflect on the forces operating upon their research. They do need to jettison the notion that they can access the past unmediated by present-day concerns, values, technologies, etc (as I’ve already argued here). And on the subject of neoliberalism’s threat to academic history, I am also in deep agreement with Cooter. Whilst some of his rhetoric may be overblown — explained, perhaps, by his wish to provoke and startle historians out of their established ways of writing — his fundamental point about neoliberalism’s threat is sound.

But where I depart from Writing History in the Age of Biomedicine is in both the solution Cooter proposes and the purpose he assigns to the history of medicine. There are a number of inconsistencies in Cooter’s position. For example, it is unclear, as one reviewer has noted, how history can be alive to contemporary threats whilst eschewing the use of present-day systems-of-value to study the past.[10] Equally, I am at a loss to understand how, according to Cooter, reducing human subjectivity to neuroscience is bad, but reducing everything to historical explanation, as Cooter also advocates, is better (are not both equally reductive?).[11]

More worryingly, Writing History in the Age of Biomedicine is cut through with three unresolved contradictions. The first concerns historians, and their aversion to ‘theory’. Cooter insists on the need for historians to self-reflect on their position and the cultures that they are embedded in. He warns that the future of history-writing is in ‘the hands of historians themselves’, that ‘prayers for survival simply will not suffice’ and that ‘the time for procrastination and pious hope is past’.[12] Elsewhere, however, Cooter has written of the sluggishness with which social historians of medicine took to developments in Foucauldian scholarship (and, even then, implied that this was more a case of cherry-picking than critical engagement).[13] His comments on historians in general are more caustic, lampooning them for not ‘getting an ethical grip on themselves’ and for casually taking more and more ‘turns’ in history without thinking more deeply about what this actually entails.[14] To be clear, I agree with Cooter — historians do neglect critical theory, and are frequently late to the party in engaging with new theoretical developments. But it remains a mystery how or why historical scholars will turn to self-critique, having eschewed all engagement with critical theory thus far.

In a similar vein, further questions are posed by Cooter’s animosity towards academics and their inability to mobilise against threats to their profession (at least within the British context). For instance, he complains about the willingness of historians to engage in competition over income-generation, and chides British academics for not being better unionised (in contrast to continental Europe). Cooter does concede that the ‘neoliberal forest […] has been difficult to penetrate’ by even those who wanted to, their efforts limited by a lack of time and opportunity.[15] But combined with his repeated complaints against the acquiescence of academics to the neuro-turn, the result is another quandary — if academics have thus far failed to resist the effects of neoliberalism (or, for that matter, to see it as much of a problem), then how and why will they want to do so now?

Fundamentally, I think, there is something unsatisfying about Cooter’s call for self-critique. It is animated by a belief that historians have to reflect on the systems-of-value that they bring to the study of the past, that, unless they are careful, historians might confirm rather than challenge existing power-relations. Yet Cooter also acknowledges that objectivity is not possible in historical research, and that ‘objectivity’ is itself a political category. What he proposes, however, is a system in which historians should work tirelessly to expunge all present-day values from their research, as if, even though research is never objective, we should have a go anyway; as if the past were some virginal territory which historians must not contaminate. Cooter’s logic is puzzling — if objectivity is not possible, and is itself a social construct, then why bother with it at all? Why persist with existential hand-wringing over presentism when we will never, ever be able to read the past without present-day values? Why expend energy spinning around in a never-ending cycle of self-critique?

But I think my beef with Writing History in the Age of Biomedicine stems from the role that Cooter assigns to historians in policing new instantiations of biomedical power. In my opinion, it comes across as reactionary — that is, it reads like an attempt to lock others out of debate, to colonise an object of research so as to bolster the ontological foundations of academic history.

And we’ve heard it before, for there is now a typical narrative-structure employed in many historical studies (indeed, I have found it handy to utilise myself). It proceeds by arguing that there is a particular object that is regarded by non-historians as timeless or culturally-universal, and/or as entirely new to human thought, without even a half-related precedent. The historian then intervenes to demonstrate that said object is not transhistorical, universal and/or novel, but is instead shaped by socio-historical forces. This is history in a reactive mode, directed towards a perceived challenge — useful for justifying academic research and the historian’s place in wider debate, but limited by its perception of other disciplinary paradigms as threatening. Thus, though Cooter suggests that the response of academic history to the neuro-disciplines should be one of attempting to critique, and thereby disrupt, the latter’s centrality in academia, this sounds like history in the reactive mode again, now directed towards a new threat that ostensibly requires taming. My point is that it’s a very narrow way of conceiving the historian’s role. And although I think Cooter is on to something with his talk of self-critique, Writing History in the Age of Biomedicine feels like a missed opportunity to reflect on the narratives and arguments that we use in history to bolster our discipline. Cooter falls into auto-pilot and only furthers the idea that all contemporary developments are opportunities for historians to historicise. And it is this lack of imagination — more than anything else, I think — that will sideline academic history yet further.

References
[1] Roger Cooter with Claudia Stein, Writing History in the Age of Biomedicine (New Haven and London: Yale University Press, 2013), pp. 33 and 4.
[2] Ibid., pp. 9-10.
[3] Roger Cooter, ‘Neural Veils and the Will to Historical Critique: Why Historians of Science Need to Take the Neuro-Turn Seriously’, Isis, vol. 105, no. 1 (2014), p. 147; Cooter, Writing History in the Age of Biomedicine, p. 206.
[4] Cooter claims that interdisciplinarity often places humanities scholars under the thumb of scientists and is usually advanced by penny-pinching bureaucrats in HE. See Cooter, Writing History in the Age of Biomedicine, pp. 37-39. Relate this to his criticisms of neoliberalism and ‘audit culture’ above.
[5] Ibid., pp. 10-11.
[6] By way of background reading, consider Cooter’s comments on the loss of political relevancy amongst social historians of medicine in Roger Cooter, ‘After Death/After-‘Life’: The Social History of Medicine in Post-Postmodernity’, Social History of Medicine, vol. 20, no. 3 (2007), pp. 441-464; and Roger Cooter, ‘Re-Presenting the Future of Medicine’s Past: Towards a Politics of Survival’, Medical History, vol. 55, no. 3 (2011), pp. 289-294. On the explicit influences on Cooter’s thought, see the respective arguments by Scott and Butler on the need for, and inventiveness of, self-critique in Joan W. Scott, ‘History-Writing as Critique’ in Keith Jenkins, Sue Morgan and Alun Munslow (eds), Manifestos for History (London and New York: Routledge, 2007), pp. 19-38; and Judith Butler, ‘Critique, Dissent, Disciplinarity’, Critical Inquiry, vol. 35, no. 4 (2009), pp. 773-795.
[7] Cooter, ‘After Death/After-‘Life’’, p. 442.
[8] Cooter, Writing History in the Age of Biomedicine, pp. 12-13 and 16.
[9] Ibid., p. 7; Cooter, ‘Neural Veils and the Will to Historical Critique’, p. 154.
[10] See Jouni-Matti Kuukkanen, ‘A Craving for Critical History’, History and Theory, vol. 53, no. 3 (2014), p. 432. We might also ask whether self-critique is easier to achieve in retrospect, when looking back on your work from a distance. Self-critiquing in situ, and then making that explicit in such a way as to satisfy a peer-review process, might be more of a challenge.
[11] As argued in Jonathan Toms, ‘So What? A Reply to Roger Cooter’s ‘After Death/After-“Life”: The Social History of Medicine in Post-Postmodernity’, Social History of Medicine, vol. 22, no. 3 (2009), p. 615.
[12] Cooter, ‘Re-Presenting the Future of Medicine’s Past’, p. 294; Cooter, Writing History in the Age of Biomedicine, p. 40.
[13] Cooter, ‘After Death/After-‘Life’’, pp. 449-450.
[14] Cooter, Writing History in the Age of Biomedicine, pp. 207-208. Also see the comments on historians’ ‘resistance’ to their own self-interrogation in ibid., pp. 11-12.
[15] Cooter, ‘Re-Presenting the Future of Medicine’s Past’, p. 290.

The Use of Medical Testimony in Personal Injury Cases

Coal-miner Thomas Brennan appeared before the Court of Session in Edinburgh in 1955 to seek £3,000 reparation from his employer, the National Coal Board (NCB), which, he claimed, had failed to adequately protect his safety. Brennan referred to an incident that had occurred in a coal-mine five years previously. On 10th February 1950, Brennan had been proceeding to his place of work via an underground roadway owned and operated by the NCB. Yet the roadway was slippery and steep, according to Brennan, and it was because of this, he claimed, that he fell with such force that he sustained a hernia. He further averred that he had developed traumatic neurasthenia following the accident, characterised, according to Brennan’s GP, by nervousness, insomnia, hand tremors and dizziness. The NCB disputed Brennan’s account, arguing that there were inconsistencies in the claimant’s story and that his hernia had, in fact, pre-dated his fall by around a decade.

Cases like this are central to my PhD. I focus on the medico-legal sequelae of traumatic accidents in twentieth-century Britain, pivotal to which are concepts like traumatic neurasthenia, neurosis or hysteria — labels which, though marked by considerable semantic slippage, were normally used in this period by the numerous medical professionals who treated, examined and assessed accident-victims to refer to the sequelae of industrial or road traffic accidents. Such accidents typically produced physical injuries of a mild or moderate nature, it was argued, yet also vague and long-lasting symptoms like headaches, dizziness, mood changes, restlessness, sleeplessness, gastric disturbance, social withdrawal or lack of appetite, libido or concentration. Often, these symptoms were causally attributed by psychiatrists, neurologists, orthopaedic surgeons and general practitioners to the systems of compensation and insurance made prevalent by private motorcar ownership and heavy industry. The thinking ran that post-accident symptoms, whilst often understandable, were unconsciously exaggerated or prolonged by the sufferer through the effort required to make and sustain a claim for compensation. As one neurologist commented in the 1940s: ‘The cumbersome machinery [of compensation] itself involves endless delays during which the workman’s symptoms, originally a “traumatic neurosis,” become transformed into a “condition neurosis” in the sustained effort required in a fight for compensation.’[1]

One theme that I am particularly interested in is the use of expert medical testimony in personal injury cases, especially where claimants alleged long-term traumatic sequelae. Brennan’s trial had no shortage of medical testimony, including from his GP, two psychiatrists and the NCB’s own doctor. Much of it related to whether or not Brennan had a hernia prior to his fall. But doctors were also asked to account for the claimant’s psychological sequelae. His GP, Dr. Robert Aitken, explained:

During the time [Brennan] was coming to me while he was still at work he was developing a condition — a hysterical condition. It was a form of traumatic hysteria. He said he was dizzy but we could find nothing wrong with his brain. He said he felt the skin on his legs and thigh was dead and he made all sorts of complaints for which we could find no organic cause. This condition is described as traumatic neurasthenia. I found no physical cause for this condition. […] I think that the man’s troubles are, as we say, upstairs. I am satisfied that the man’s condition prevented him from doing his work. There is no doubt about that.[2]


The involvement of medical experts in civil litigation has attracted little attention from historians and legal scholars, most of whom are more interested in criminal than civil law (or in PTSD and shell-shock rather than whiplash and traumatic neurosis). Those few studies to examine personal injury litigation have related the involvement of expert medical witnesses to the desire, on the part of insurers, to identify malingerers, or else to the need for courts to deduce any motives on the part of the claimant.[3]

These arguments have some merit, but I think they could be extended, following Jane F. Thrailkill’s suggestion, to include further reference to the unconscious: for, from the nineteenth century onwards, physicians argued that they had privileged insight into the claimant’s unconscious, and could use this not only to illuminate motive but also to explain how the claimant’s post-accident sequelae had developed.[4] This assisted courts in several ways, not least in assessing the severity of the claimant’s disability. But medical testimony was also useful, I want to suggest, because of the perceived imperfections of the claimant’s memory.

I think it’s helpful at this stage to introduce a conceptual framework for understanding the relationship between courts and memory. I want to suggest that, at least in personal injury cases, the modus operandi of the court was to act as a memory-retrieving machine: through the reconstruction of the accident and its sequelae, civil courts activated, and acted as a conduit for, multiple forms of recollection — from claimants and their relatives, from eyewitnesses of the original accident and from expert medical witnesses who had examined the claimant. In effect, the court’s job was to contract different rhythms and durations of temporality into the one, single, homogeneous time of the court. Yet this machinic process was subject, like the operation of any machine, to breakdown, interruption or atrophy depending on how its various components interacted. Judge or jury could be unconvinced by medical testimony if it contradicted their established ways of thinking about temporality or causality. As psychiatrist David Henderson, writing in 1956, explained:

The difficulty the psychiatrist is faced with in cases of compensation is the long interval which has elapsed between the accident and the psychiatrist’s examination. Months or years may have elapsed, and during that time the claim, instead of getting less, has usually become greatly increased, and the claimant’s condition aggravated and set […] Often the alleged disability is entirely out of proportion to the precipitating cause, but it may be difficult to prove that the accident has not been the main factor, especially when the person has been in employment until the time of the accident. For instance, a man 28 years old, who had suffered no serious physical injury but experienced a degree of shock, claimed four years later, when I examined him, that he suffered from “turns” and had had a serious loss of memory. In fact, his memory disturbance was a massive amnesia only compatible with a diagnosis of hysteria: the accident had been the precipitating factor, but it was not easy to convince a judge or jury of the true position.[5]

In other words, the court-as-memory-retrieving-machine was circumscribed in its movements and potential, governed by an over-arching set of rules and codifications — what memories judge and jury were willing to accept and also, we could add, what precedent and certain legal concepts permitted.

Indeed, many of these rules and codifications are still around today, in civil and criminal courts alike. Consider one further aspect of the court’s memory-retrieving machine — it pivots on a linear model of recollection. By this, I mean that courts insist upon an unmediated, near-perfect ability to recall past experiences and details. That memory is usually a dynamic process, and that recollection is impossible to insulate from other experiences and emotions, is not countenanced by the court. As has recently been argued with respect to sexual abuse cases (e.g., R. v Ghomeshi), courts require an unbroken, linear model of recollection, where the witness (or complainant) has to be able to recall past events in such a way as to be unmediated by later experiences. Or as neurologist James Kirkwood Slater complained in 1948:

The law is well aware that students of applied psychology have all manner of recommendations for revolutionising the so-called commonsense method of obtaining evidence which for so long has stood the test of time. […] For instance they tell us that scores of memory variations can be discriminated. Let your friends, they say, describe how they have before their minds yesterday’s dinner table and the conversation around it, and there will not be two whose memory shows the same scheme and method. They urge that we should not ask a short-sighted man for the slight visual details of a far distant scene, yet it cannot be safer to ask a man of the acoustical memory type for strictly optical recollections…[6]


It is by bearing this in mind that we can properly grasp the function of the expert medical witness in personal injury cases: claimants, doctors argued, often had an unconscious or imperfect recollection of the events that had followed their accident. The claimant’s memory of their accident was too heavily coloured by the events that followed it (i.e., the various medical assessments and treatments the claimant had undergone). Indeed, in the cases that I have sampled, claimants were rarely cross-examined about their post-accident sequelae, with attention instead focussing on where they were at the time of their accident, what attempts they had made to check their own safety, etc.

Thus, when he testified in his case, Brennan was asked only briefly about his neurasthenic condition. Legal counsel were more interested in probing the account offered by medical experts. As Dr. Aitken observed:

[Brennan] is quite unaware of the whole business. He believes that something has happened as a result of the accident in his pelvic region — his groin region — and he believes this is the cause of all the trouble and he, accordingly, gets in a very unstable state. He is not capable of a sustained effort either in thinking or action. He isn’t capable of sitting down to thrash out a problem. […] If you asked him about his accident his hands would shake […] At times now when you are speaking to him you feel [he] isn’t grasping properly what you are saying to him.[7]

Hence the involvement of medical experts: for the memory-retrieving machine to function, doctors were needed to bridge the divide between the claimant and the Court.


References
[1] James K. Slater, ‘Trauma and the Nervous System: With Particular Reference to Compensation and the Difficulties of Interpreting the Facts’, Edinburgh Medical Journal, vol. 53, no. 11 (1946), p. 640.
[2] National Archives of Scotland, CS258/1958/1704, ‘Notes of Evidence in Jury Trial: Thomas Brennan V. The National Coal Board’, 1958, p. 97.
[3] E.g., Danuta Mendelson, ‘English Medical Experts and the Claims for Shock Occasioned by Railway Collisions in the 1860s: Issues of Law, Ethics, and Medicine’, International Journal of Law and Psychiatry, vol. 25, no. 4 (2002), pp. 303-329; Karen M. Odden, ‘‘Able and Intelligent Medical Men Meeting Together’: The Victorian Railway Crash, Medical Jurisprudence, and the Rise of Medical Authority’, Journal of Victorian Culture, vol. 8, no. 1 (2003), pp. 33-54.
[4] See Jane F. Thrailkill, ‘Railway Spine, Nervous Excess and the Forensic Self’ in Laura Salisbury and Andrew Shail (eds), Neurology and Modernity: A Cultural History of Nervous Systems, 1800-1950 (Basingstoke, Hampshire and New York: Palgrave Macmillan, 2010), pp. 96-112.
[5] David Henderson, ‘Psychiatric Evidence in Court’, British Medical Journal, vol. 2, no. 4983 (1956), p. 4.
[6] James K. Slater, ‘The Medical Man in the Witness Box’, Edinburgh Medical Journal, vol. 55, no. 10 (1948), p. 590.
[7] ‘Notes of Evidence in Jury Trial: Thomas Brennan V. The National Coal Board’, pp. 98-99.