Tag Archives: history of psychiatry

Research That Sticks: ‘Accident Neurosis’ and Its Sequelae

I’ve spent the past few days flicking through Dagmar Herzog’s excellent Cold War Freud, and one section particularly sticks out for me. It deals with the period between the 1960s and 1980s, and with the relationship between psychiatric studies of Holocaust survivors and later research on Vietnam veterans (the latter would lead to the codification of post-traumatic stress disorder – PTSD – in DSM-III in 1980).

Herzog’s argument is that the focus on Vietnam and PTSD by historians has overlooked the role that the research on Holocaust survivors played in shaping later medical theories of trauma, especially PTSD. Moreover, Herzog draws attention to the effect that the Holocaust research had on psychiatrists in the 1970s and 1980s, who were then treating refugees, many of whom, the psychiatrists claimed, were suffering from PTSD-like symptoms as a consequence of their exposure to rape, genocide and torture. According to Herzog, one group of psychiatrists, led by J. David Kinzie, in an effort to deduce whether a Western concept like PTSD could be applied to non-Western populations, began experimenting with their application of the diagnosis, and, in so doing, revisited and were influenced by earlier work on ‘concentration camp syndrome’ and related disorders.[1]

Herzog’s point about the retroactive reading of the Holocaust literature is somewhat under-determined (and buried in her end-notes). But what particularly interests me is both the effect of the Holocaust research on what came afterwards, and the re-reading that Kinzie’s team undertook in the late 1980s to understand the traumatic experiences that they were now documenting in refugees – a re-reading inspired by the codification of PTSD in 1980. For it remains my hypothesis, necessarily tentative, that the history of trauma could be profitably rethought as doubled or folded – that is, that psychiatric research on trauma shaped the research that followed, just as this later research folded back to shape or inflect what came before. For this reason, I am interested in how certain medical studies endure, in how they stick – how they have an effect on later medical knowledge, and/or how they are re-animated, as in Kinzie’s case, by physicians looking back to comprehend what’s in front of them.

How this maps onto my own research on the history of trauma can be illustrated with reference to the work of the neurologist Henry Miller (1913–1976; pictured right). He remained one of the most influential and heavily-cited contributors to debates over traumatic sequelae in postwar Britain, shaping much of the research that was published from the mid-1960s onwards. This legacy, I want to show, stems both from the effect of Miller on later studies (colouring, conditioning, shaping them) and from the effect of these later studies on Miller’s earlier work, as it was read anew, re-animated and re-signified by the research that followed the mid-1960s. In brief, I think of Miller’s lectures as folded or sticky events, as simultaneously cause and consequence of a new way of studying traumatic sequelae.

Miller’s influence stemmed from two lectures he gave to the Royal College of Physicians of London in 1961, which were published in the British Medical Journal two months later.[2] Their focus was the disorder ‘accident neurosis’ – a label that Miller applied to a collection of symptoms that followed a traumatic accident but bore an inconstant relationship to the severity of that accident. The typical case of accident neurosis was a working-class, 40-something man, who was involved in a minor accident (either on the road or at work) that resulted in modest, trivial physical injuries, typically to the head; the patient nevertheless presented with long-lasting psychiatric or concussional symptoms – headaches, dizziness, depression, anxiety, intolerance to noise, inability to focus, loss of appetite, etc. Because there was no consistent relationship between the severity of an injury and the consequent psychiatric symptoms – indeed, the more severe accidents rarely resulted in long-lasting emotional problems – Miller concluded that the neurotic sequelae of accidents stemmed from the systems of compensation and insurance that had grown alongside the Welfare State: that is, patients, whether deliberately or unconsciously, exaggerated or magnified their otherwise trivial symptoms so as to secure a larger compensation settlement.

What was novel about Miller was that, in his ‘accident neurosis’ lectures, he reported on follow-up studies of 50 former patients of his whom he had re-examined after they had left the medico-legal process. Furthermore, Miller’s lectures made deliberate use of statistics to avoid what he identified as the limitations of earlier research, his complaint being that there had been few substantive studies of ‘accident neurosis’, with most taking the form of ‘occasional contributions, often more conspicuous as expressions of opinion than for their factual content.’[3] These statistics substantiated Miller’s hypothesis that neurotic sequelae bore no relation to the severity of the trauma: for example, of the 50 patients whom Miller had followed up post-settlement, only two had psychiatric problems disabling them from work (three had psychiatric disorders but still worked). Indeed, 41 of the 45 patients who had been employed prior to their accident had returned to work by the time of Miller’s follow-up, and 45 of the 50 were symptom-free (they could only ‘muster […] a few trivial residual symptoms’, Miller noted, like nervousness in traffic).[4]

Granted, Miller placed greater stress on deliberate malingering than his contemporaries did. Otherwise, however, his ‘accident neurosis’ research sat comfortably within the medical consensus over neurotics and their claims for compensation. Indeed, immediately following their publication, Miller’s lectures were praised in an editorial in The Times, and welcomed, albeit more cautiously, by an editorial in The Lancet.

Yet, in the decade after 1965, the consensus began to shift. Neurologists and psychiatrists began to hold that concussive sequelae were causally related to brain damage, even if the patient’s psychology might elaborate the symptoms. Miller’s lectures were therefore seen as passé. That they nevertheless held considerable sway within the legal profession (it was claimed that they were regularly cited in court) was much to the chagrin of doctors. This prompted many of the medical studies published in the 1970s, which attempted to expose the logical inconsistencies in Miller’s lectures (Miller, for example, had argued that sports injuries were less likely to cause neurotic sequelae than traffic accidents, but this was regarded as an unfair comparison: the latter typically occurred at higher velocity). But Miller’s undimmed popularity with the legal profession also encouraged a change of strategy, for critics complained that it was Miller’s statistics that were the most frequently referenced aspect of his accident neurosis study. Consequently, many physicians began to argue that Miller’s lectures could only be countered by undertaking fresh follow-up studies, by obtaining new statistical data on head injuries.

What was happening, in effect, was that Miller’s research was beginning to alter how physicians studied ‘accident neurosis’. To study or publish on the disorder, you now required a decent set of statistics on patients whom you had followed up. Yet the lectures were to have one further consequence, for the greater use of statistical studies encouraged comparisons, re-readings and critique of Miller’s earlier findings. Paradoxically, Miller had instigated a change in how the post-concussional syndrome was studied, but that change then led, as a consequence, to repeated re-investigation of Miller’s methodology and conclusions. Research published post-Miller thus had not only to produce statistical data from following up with patients, but also to reflect carefully on how these data had been produced, to prevent them from perpetuating what were regarded as Miller’s earlier biases: Miller had reconfigured the study of traumatic sequelae, but, in so doing, had invited further reading of his work.

This is best demonstrated with reference to the neurologist Reginald Kelly, whose work was published in the 1970s and who was, of all the contributors to the medical debate, the most critical of Miller. One of Kelly’s first publications on ‘accident neurosis’, published in 1972, was explicit in wishing to replicate, and thereby test, Miller’s statistical study, but with a broader range of follow-up patients (i.e., not just those referred by insurance companies, but referrals from GPs and elsewhere). Kelly derived the bulk of his sample (112 patients) from referrals from other physicians, and a smaller group (40) from insurers. The 112 referrals, he noted, had an average recovery time of three months, versus 14 months in Miller’s sample and 19 months in Kelly’s insurance sample. Moreover, although he admitted that neurologists did not see all cases of head injury, Kelly argued that his sample showed neurotic symptoms, as defined by Miller, to be more common in the referred patients than in the medico-legal claimants (75% of the former, against 65% of the latter). Kelly also claimed that his statistics demonstrated that most neurotic symptoms abated with proper treatment and before settlement (78 out of the 84 patients, against only 6 out of the 26 medico-legal claimants).[5]

Furthermore, whilst Kelly had no complaint with the use of statistics per se, he contended that Miller’s figures presented a skewed picture of neurosis, writing that if the stats ‘contradict what has been clinically obvious’, then the ‘source of the figures and the prejudices of the statistician’ should be questioned. He claimed, for example, that Miller’s sample was composed of patients referred to him by insurance companies, and therefore represented those whom the insurers thought they could challenge (the ‘genuine’ sufferers would have had their cases settled long ago, as would those who had already recovered pre-settlement). Moreover, Kelly pointed out that an insurance case could drag on for many months, with the most severely injured the least likely to be pestered by insurance officials in any event: in other words, Miller had been studying the most hardened neuroses.

The research that appeared in the 1980s and 1990s continued along the route established by Kelly, with recurrent comparisons and re-readings made of Miller’s methodology, both critical and complimentary. For the present purposes, the above discussion captures sufficiently well what I identify as a trend within the medical study of trauma, in which individual pieces of research do not fit comfortably within chronological or linear accounts of historical development. Rather, as Herzog’s discussion of the Holocaust studies hints at, research on trauma is sticky, folded. It effects change, but how that research is later understood will also be affected by that change. My point, in other words, is that the history of trauma encourages an approach premised less on linear models of time and causality than on one that acknowledges the contingent, doubled nature of temporality.

 


References

[1] As Herzog explains: ‘Over time, as Kinzie’s team worked to refine their psychotherapeutic approach to traumatized refugees, they increasingly familiarized themselves with and built on the writings of individuals who had worked with Holocaust survivors, including Leo Eitinger, William Niederland, and Hilel Klein –  as well as Robert Jay Lifton. Through detailed reports on individual cases and elaborations of their own treatment approaches […]  they advanced the view that “posttraumatic stress disorder” specifically as it had been formulated in DSM-III was indeed the best descriptor and that medical professionals everywhere needed to learn to recognize its signs.’ Dagmar Herzog, Cold War Freud: Psychoanalysis in an Age of Catastrophe (Cambridge: Cambridge University Press, 2016), p. 261, en 91.

[2] See Henry Miller, ‘Accident Neurosis: Lecture I’, British Medical Journal, vol. 1, no. 5230 (1961), pp. 919-925; Henry Miller, ‘Accident Neurosis: Lecture II’, British Medical Journal, vol. 1, no. 5231 (1961), pp. 992-998.

[3] Miller, ‘Accident Neurosis: Lecture I’, p. 920.

[4] Ibid., p. 925.

[5] R. Kelly, ‘The Post-Traumatic Syndrome’, Pahlavi Medical Journal, vol. 3 (1972), pp. 532-533.


So, Just What is the Point of the History of Medicine?

I remember reading Roger Cooter’s Writing History in the Age of Biomedicine around the start of my PhD. And I thought it was a strange book — not, I should stress, because there is anything untoward about its writing-style (in fact, the opposite: it reads brilliantly). Neither is there anything objectionable about its structure: though eight of the book’s ten chapters have been published before, Cooter has provided a neat little preamble to each, allowing him, now at the end of his career, to expose the intellectual/epistemic conditions that previously informed each essay (as he explains here). I thought this was very nicely done. Indeed, I thought that Writing History in the Age of Biomedicine expertly consolidates a number of issues on the purpose of the history of medicine in wider academia, and Cooter does well to imbue his argument with vigour and force. But I found Writing History in the Age of Biomedicine strange because of how I responded to it, for precisely what I admired about the book I also puzzled over. And as my reading has deepened over the past couple of years, and my range of influences grown, I’ve become more critical of Cooter — specifically, of what he identifies as the purpose of the history of medicine.

I have to admit something at the outset, though. Even on a third reading, it is hard not to be seduced by Writing History in the Age of Biomedicine. Cooter forcefully impresses upon the reader the importance of and need for academic history. Yet his position goes beyond a simple call for more and more research (or research for its own sake). Rather, he advocates a particular focus for academic history, for the discipline, according to Cooter, faces unparalleled threats from neoliberalism and the rise of neuroscience. The former imposes not only greater levels of scrutiny and exposure to ‘audit culture’ within higher education, Cooter claims, but also insists on ‘never-ending growth and “economic progress”’. In so doing, it denigrates the study of the past (for why examine history if it has nothing to offer the ‘present-centric economist thinking about the future’?).[1]

Cooter’s criticism of neoliberalism is coupled with warnings of the threat posed by the turn to neuroscience in various disciplines, which Cooter regards as both an extension of, and coinciding with, neoliberal dogma. This, in part, helps to explain Cooter’s objection to the growing role of neuro-disciplines in the present century. But his aversion to neuroscience is also animated by what he identifies as the threat it poses to academic history. Cooter chides the arrogance of the ‘neuro enthusiasts’, their absolutism and tendency to absorb academic history into their own neuroscientific paradigms. History, in such accounts, is seen as useful only when it furthers neuroscientific insight; it has no role in explaining how the neurosciences came to dominate in the 21st century, nor in otherwise challenging their existing orthodoxy.[2] This pivots to the third of Cooter’s concerns — that many academics blindly acquiesce in the rise of neuro (e.g., through the study of affect or the emotions).[3] This, he argues, amounts to a presentification of the past, an overloading of it with our present-day penchant for neuroscientific explanation.

Cooter contends that a two-fold strategy is needed to extricate history from the ghetto in which it now finds itself. Firstly, the humanities have to be sundered from the hard sciences if they are to offer the necessary critical interrogation of the latter. Wishy-washy interdisciplinarity (e.g., the medical humanities) will not do.[4] Instead, medical historians must vigorously agitate against the ‘reductive’ forces of the natural sciences and the narrative of unbroken progress to which they are wedded.[5] With recourse to the study of the past, medical historians must take up the mantle of social critique — ‘critical history’ — to challenge the current dominance of biomedicine in the 21st century.

Yet Cooter also argues that historians, at the same time as they place a check on the neurosciences, must reflect on the values that inform their own discipline. In other words, they must engage in rigorous self-policing — the turning, that is, of the critical gaze (normally reserved for the object of study) back onto the historian and her methodologies, concepts, frameworks, etc.[6] The historian’s ignorance of her own position, Cooter warns, leaves the discipline vulnerable to being side-lined or de-funded altogether in the face of wider influences — without a firm location within history, the historian is in danger of misunderstanding her role and the relevance of her discipline.[7] And without a critical understanding of historical epistemology (‘the constructedness of [historical] thought’), historians are in danger of losing themselves when engaging with the more powerful neuro-disciplines, of becoming swallowed up by the ‘new biological regime of truth’.[8] This is why the ten chapters that compose Writing History in the Age of Biomedicine are prefaced with commentaries on the context in which they were produced: they are part of Cooter’s attempt to demonstrate what self-critique should look like, how we should reflect upon (and thereby check) the various forces operating on our writings. Indeed, in language that is notably more open-ended, Cooter suggests that a greater engagement with one’s own position in history, and its impact on history-writing, may bring a greater degree of ‘honesty and credibility’ to a form of research that still (in some quarters) seeks to perpetuate the ruse that history can be written ‘objectively’ and value-free.[9]

This call for self-reflection is to be lauded; in my opinion, it remains one of the most memorable points made in Writing History in the Age of Biomedicine. Historians do need to reflect better on the forces operating upon their research. They do need to jettison the notion that they can access the past unmediated by present-day concerns, values, technologies, etc. (as I’ve already argued here). And on the subject of neoliberalism’s threat to academic history, I am also in deep agreement with Cooter. Whilst some of his rhetoric may be overblown — explained, perhaps, by his wish to provoke and startle historians out of their established ways of writing — his fundamental point about neoliberalism’s threat is sound.

But where I depart from Writing History in the Age of Biomedicine is in both the solution Cooter proposes and the purpose he assigns to the history of medicine. There are a number of inconsistencies in Cooter’s position. For example, it is unclear, as one reviewer has noted, how history can be both alive to contemporary threats whilst eschewing the use of present-day systems-of-value to study the past.[10] Equally, I am at a loss to understand how, according to Cooter, reducing human subjectivity to neuroscience is bad, but reducing everything to historical explanation, as advocated also by Cooter, is better (are not both equally reductive?).[11]

More worryingly, Writing History in the Age of Biomedicine is cut through with three unresolved contradictions. The first concerns historians and their aversion to ‘theory’. Cooter insists on the need for historians to self-reflect on their position and the cultures in which they are embedded. He warns that the future of history-writing is in ‘the hands of historians themselves’, that ‘prayers for survival simply will not suffice’ and that ‘the time for procrastination and pious hope is past’.[12] Elsewhere, however, Cooter has written of the sluggishness with which social historians of medicine took to developments in Foucauldian scholarship (and, even then, implied that this was more a case of cherry-picking than critical engagement).[13] His comments on historians in general are more caustic, lampooning them for not ‘getting an ethical grip on themselves’ and for casually taking more and more ‘turns’ in history without thinking more deeply about what this actually entails.[14] To be clear, I agree with Cooter — historians do neglect critical theory, and are frequently late to the party in engaging with new theoretical developments. But it remains a mystery how or why historical scholars will turn to self-critique, having eschewed all engagement with critical theory thus far.

In a similar vein, further questions are posed by Cooter’s animosity towards academics and their inability to mobilise against threats to their profession (at least within the British context). For instance, he complains about the willingness of historians to engage in competition over income-generation, and chides British academics for not being better unionised (in contrast to continental Europe). Cooter does concede that the ‘neoliberal forest […] has been difficult to penetrate’ even by those who wanted to, their efforts limited by a lack of time and opportunity.[15] But combined with his repeated complaints against the acquiescence of academics to the neuro-turn, the result is another quandary — if academics have thus far failed to resist the effects of neoliberalism (or, for that matter, to see it as much of a problem), then how and why will they want to do so now?

Fundamentally, I think, there is something unsatisfying about Cooter’s call for self-critique. It is animated by a belief that historians have to reflect on the systems-of-value that they bring to the study of the past; that, unless they are careful, historians might confirm rather than challenge existing power-relations. Yet Cooter also acknowledges that objectivity is not possible in historical research, and that ‘objectivity’ is itself a political category. What he proposes, however, is a system in which historians should work tirelessly to expunge all present-day values from their research — as if, even though research is never objective, we should have a go anyway; as if the past were some virginal territory which historians must not contaminate. Cooter’s logic is puzzling — if objectivity is not possible, and is itself a social construct, then why bother with it at all? Why persist with existential hand-wringing over presentism when we will never, ever be able to read the past without present-day values? Why expend energy spinning around in a never-ending cycle of self-critique?

But I think my beef with Writing History in the Age of Biomedicine stems from the role that Cooter assigns to historians in policing new instantiations of biomedical power. In my opinion, it comes across as reactionary — that is, it reads like an attempt to lock others out of debate, to colonise an object of research so as to bolster the ontological foundations of academic history.

And we’ve heard it before, for there is now a typical narrative-structure employed in many historical studies (indeed, I have found it handy to utilise myself). It proceeds by arguing that there is a particular object that is regarded by non-historians as timeless or culturally-universal, and/or as entirely new to human thought and without even a half-related precedent. The historian then intervenes to demonstrate that said object is not transhistorical, universal and/or novel, but is instead shaped by socio-historical forces. This is history in a reactive mode, directed towards a perceived challenge — useful for justifying academic research and the historian’s place in wider debate, but limited by its perception of other disciplinary paradigms as threatening. Thus, though Cooter suggests that the response of academic history to the neuro-disciplines should be one of attempting to critique, and thereby disrupt, the latter’s centrality in academia, this sounds like history in the reactive mode again, now directed at a new threat that ostensibly requires taming. My point is that it is a very narrow way of conceiving the historian’s role. And although I think Cooter is on to something with his talk of self-critique, Writing History in the Age of Biomedicine feels like a missed opportunity to reflect on the narratives and arguments that we use in history to bolster our discipline. Cooter falls into auto-pilot, and only furthers the idea that all contemporary developments are opportunities for historians to historicise. And it is this lack of imagination – more than anything else, I think – that will sideline academic history yet further.

 

 

References
[1] Roger Cooter with Claudia Stein, Writing History in the Age of Biomedicine (New Haven and London: Yale University Press, 2013), pp. 33 and 4.
[2] Ibid., pp. 9-10.
[3] Roger Cooter, ‘Neural Veils and the Will to Historical Critique: Why Historians of Science Need to Take the Neuro-Turn Seriously’, Isis, vol. 105, no. 1 (2014), p. 147; Cooter, Writing History in the Age of Biomedicine, p. 206.
[4] Cooter claims that interdisciplinarity often places humanities scholars under the thumb of scientists and is usually advanced by penny-pinching bureaucrats in HE. See Cooter, Writing History in the Age of Biomedicine, pp. 37-39. Relate this to his criticisms of neoliberalism and ‘audit culture’ above.
[5] Ibid., pp. 10-11.
[6] By way of background reading, consider Cooter’s comments on the loss of political relevancy amongst social historians of medicine in Roger Cooter, ‘After Death/After-“Life”: The Social History of Medicine in Post-Postmodernity’, Social History of Medicine, vol. 20, no. 3 (2007), pp. 441-464; and Roger Cooter, ‘Re-Presenting the Future of Medicine’s Past: Towards a Politics of Survival’, Medical History, vol. 55, no. 3 (2011), pp. 289-294. On the explicit influences on Cooter’s thought, see the respective arguments by Scott and Butler on the need for, and inventiveness of, self-critique in Joan W. Scott, ‘History-Writing as Critique’ in Keith Jenkins, Sue Morgan and Alun Munslow (eds), Manifestos for History (London and New York: Routledge, 2007), pp. 19-38; and Judith Butler, ‘Critique, Dissent, Disciplinarity’, Critical Inquiry, vol. 35, no. 4 (2009), pp. 773-795.
[7] Cooter, ‘After Death/After-“Life”’, p. 442.
[8] Cooter, Writing History in the Age of Biomedicine, pp. 12-13 and p. 16.
[9] Ibid., p. 7; Cooter, ‘Neural Veils and the Will to Historical Critique’, p. 154.
[10] See Jouni-Matti Kuukkanen, ‘A Craving for Critical History’, History and Theory, vol. 53, no. 3 (2014), p. 432. We might also ask whether self-critique is easier to achieve in retrospect, when looking back on your work from a distance. Self-critiquing in situ, and then making that explicit in such a way as to satisfy a peer-review process, might be more of a challenge.
[11] As argued in Jonathan Toms, ‘So What? A Reply to Roger Cooter’s ‘After Death/After-“Life”: The Social History of Medicine in Post-Postmodernity’, Social History of Medicine, vol. 22, no. 3 (2009), p. 615.
[12] Cooter, ‘Re-Presenting the Future of Medicine’s Past’, p. 294; Cooter, Writing History in the Age of Biomedicine, p. 40.
[13] Cooter, ‘After Death/After-“Life”’, pp. 449-450.
[14] Cooter, Writing History in the Age of Biomedicine, pp. 207-208. Also see the comments on historians’ ‘resistance’ to their own self-interrogation in ibid., pp. 11-12.
[15] Cooter, ‘Re-Presenting the Future of Medicine’s Past’, p. 290.

The Use of Medical Testimony in Personal Injury Cases

Coal-miner Thomas Brennan appeared before the Court of Session in Edinburgh in 1955 to seek £3,000 in reparation from his employer, the National Coal Board (NCB), who, he claimed, had failed to adequately protect his safety. Brennan referred to an incident that had occurred in a coal-mine five years previously. On 10th February 1950, Brennan had been proceeding to his place of work via an underground roadway owned and operated by the NCB. Yet the roadway was slippery and steep, according to Brennan, and it was because of this, he claimed, that he fell with such force that he sustained a hernia. He further averred that he had developed traumatic neurasthenia following the accident, characterised, according to Brennan’s GP, by nervousness, insomnia, hand tremors and dizziness. The NCB disputed Brennan’s account, arguing that there were inconsistencies in the claimant’s story and that his hernia had, in fact, pre-dated his fall by around a decade.

Cases like this are central to my PhD. I focus on the medico-legal sequelae of traumatic accidents in twentieth-century Britain, pivotal to which are concepts like traumatic neurasthenia, neurosis or hysteria — labels which, though marked by considerable semantic slippage, were normally used in this period to refer to the sequelae of industrial or road traffic accidents by the numerous medical professionals who treated, examined and assessed accident-victims. Such accidents typically produced physical injuries of a mild or moderate nature, it was argued, yet also vague and long-lasting symptoms like headaches, dizziness, mood changes, restlessness, sleeplessness, gastric disturbance, social withdrawal or lack of appetite, libido or concentration. Often, these symptoms were causally attributed by psychiatrists, neurologists, orthopaedic surgeons and general practitioners to the systems of compensation and insurance made prevalent by private motorcar ownership and heavy industry. The thinking ran that post-accident symptoms, whilst often understandable, were unconsciously exaggerated or prolonged by the sufferer through the effort required to make and sustain a claim for compensation. As one neurologist commented in the 1940s: ‘The cumbersome machinery [of compensation] itself involves endless delays during which the workman’s symptoms, originally a “traumatic neurosis,” become transformed into a “condition neurosis” in the sustained effort required in a fight for compensation.’[1]

One theme that I am particularly interested in is the use of expert medical testimony in personal injury cases, and especially when claimants allege long-term traumatic sequelae. Brennan’s trial had no shortage of medical testimony, including from his GP, two psychiatrists and the NCB’s own doctor. Much of it related to whether or not Brennan had a hernia prior to his fall. But doctors were also asked to account for the claimant’s psychological sequelae. His GP, Dr. Robert Aitken, explained:

During the time [Brennan] was coming to me while he was still at work he was developing a condition — a hysterical condition. It was a form of traumatic hysteria. He said he was dizzy but we could find nothing wrong with his brain. He said he felt the skin on his legs and thigh was dead and he made all sorts of complaints for which we could find no organic cause. This condition is described as traumatic neurasthenia. I found no physical cause for this condition. […] I think that the man’s troubles are, as we say, upstairs. I am satisfied that the man’s condition prevented him from doing his work. There is no doubt about that.[2]

 

The involvement of medical experts in civil litigation has attracted little attention from historians and legal scholars, most of whom are more interested in criminal than in civil law (or in PTSD and shell-shock than in whiplash and traumatic neurosis). Those few studies to examine personal injury litigation have related the involvement of expert medical witnesses to the desire, on the part of insurers, to identify malingerers, or else to the need for courts to deduce any motives on the part of the claimant.[3]

These arguments have some merit, but I think they could be extended, following Jane F. Thrailkill’s suggestion, to include further reference to the unconscious: from the nineteenth century onwards, physicians argued that they had privileged insight into the claimant’s unconscious, and could use this not only to illuminate motive but also to explain how the claimant’s post-accident sequelae had developed.[4] This assisted courts in several ways, not least in assessing the severity of the claimant’s disability. But medical testimony was also useful, I want to suggest, because of the perceived imperfections of the claimant’s memory.

I think it’s helpful at this stage to introduce a conceptual framework to understand the relationship between courts and memory. I want to suggest that, at least in personal injury cases, the modus operandi of the court was to act as a memory-retrieving machine: through the reconstruction of the accident and its sequelae, civil courts activated and acted as a conduit for multiple forms of recollection — from claimants and their relatives, from eyewitnesses of the original accident and from the expert medical witnesses who had examined the claimant. In effect, the court’s job was to contract different rhythms and durations of temporality into the one, single, homogeneous time of the court. Yet this machinic process was subject, like the operation of any machine, to breakdown, interruption or atrophy depending on how its various components interacted. Judge or jury might resist medical testimony that contradicted their established ways of thinking about temporality or causality. As the psychiatrist David Henderson, writing in 1956, explained:

The difficulty the psychiatrist is faced with in cases of compensation is the long interval which has elapsed between the accident and the psychiatrist’s examination. Months or years may have elapsed, and during that time the claim, instead of getting less, has usually become greatly increased, and the claimant’s condition aggravated and set […] Often the alleged disability is entirely out of proportion to the precipitating cause, but it may be difficult to prove that the accident has not been the main factor, especially when the person has been in employment until the time of the accident. For instance, a man 28 years old, who had suffered no serious physical injury but experienced a degree of shock, claimed four years later, when I examined him, that he suffered from “turns” and had had a serious loss of memory. In fact, his memory disturbance was a massive amnesia only compatible with a diagnosis of hysteria: the accident had been the precipitating factor, but it was not easy to convince a judge or jury of the true position.[5]

In other words, the court-as-memory-retrieving-machine was circumscribed in its movements and potential, governed by an over-arching set of rules and codifications — what memories judge and jury were willing to accept and also, we could add, what precedent and certain legal concepts permitted.

Indeed, many of these rules and codifications persist today, in civil and criminal courts alike. Consider one further aspect of the court’s memory-retrieving machine — it pivots on a linear model of recollection. By this, I mean that courts insist upon an unmediated, near-perfect ability to recall past experiences and details. That memory is usually a dynamic process, and that recollection is impossible to insulate from other experiences and emotions, is not countenanced by the court. As has recently been argued with respect to sexual abuse cases (e.g., R. v Ghomeshi), courts require an unbroken, linear model of recollection, in which the witness (or complainant) has to be able to recall past events in a manner unmediated by later experiences. Or as the neurologist James Kirkwood Slater complained in 1948:

The law is well aware that students of applied psychology have all manner of recommendations for revolutionising the so-called commonsense method of obtaining evidence which for so long has stood the test of time. […] For instance they tell us that scores of memory variations can be discriminated. Let your friends, they say, describe how they have before their minds yesterday’s dinner table and the conversation around it, and there will not be two whose memory shows the same scheme and method. They urge that we should not ask a short-sighted man for the slight visual details of a far distant scene, yet it cannot be safer to ask a man of the acoustical memory type for strictly optical recollections…[6]


It is by bearing this in mind that we can properly grasp the function of the expert medical witness in personal injury cases: claimants, doctors argued, often had an unconscious or imperfect recollection of the events that had followed their accident. The claimant’s memory of their accident was too heavily coloured by the events that followed it (i.e., the various medical assessments and treatments the claimant had undergone). Indeed, in the cases that I have sampled, claimants were rarely cross-examined about their post-accident sequelae, with attention instead focussing on where they were at the time of their accident, what attempts they had made to check their own safety, etc.

Thus, when he testified in his case, Brennan was asked only briefly about his neurasthenic condition. Legal counsel were more interested in probing the account offered by medical experts. As Dr. Aitken observed:

[Brennan] is quite unaware of the whole business. He believes that something has happened as a result of the accident in his pelvic region — his groin region — and he believes this is the cause of all the trouble and he, accordingly, gets in a very unstable state. He is not capable of a sustained effort either in thinking or action. He isn’t capable of sitting down to thrash out a problem. […] If you asked him about his accident his hands would shake […] At times now when you are speaking to him you feel [he] isn’t grasping properly what you are saying to him.[7]

Hence the involvement of medical experts: for the memory-retrieving machine to function, doctors were needed to bridge the divide between the claimant and the Court.


References
[1] James K. Slater, ‘Trauma and the Nervous System: With Particular Reference to Compensation and the Difficulties of Interpreting the Facts’, Edinburgh Medical Journal, vol. 53, no. 11 (1946), p. 640.
[2] National Archives of Scotland, CS258/1958/1704, ‘Notes of Evidence in Jury Trial: Thomas Brennan V. The National Coal Board’, 1958, p. 97.
[3] E.g., Danuta Mendelson, ‘English Medical Experts and the Claims for Shock Occasioned by Railway Collisions in the 1860s: Issues of Law, Ethics, and Medicine’, International Journal of Law and Psychiatry, vol. 25, no. 4 (2002), pp. 303-29; Karen M. Odden, ‘“Able and Intelligent Medical Men Meeting Together”: The Victorian Railway Crash, Medical Jurisprudence, and the Rise of Medical Authority’, Journal of Victorian Culture, vol. 8, no. 1 (2003), pp. 33-54.
[4] See Jane F. Thrailkill, ‘Railway Spine, Nervous Excess and the Forensic Self’ in Laura Salisbury and Andrew Shail (eds), Neurology and Modernity: A Cultural History of Nervous Systems, 1800-1950 (Basingstoke, Hampshire and New York: Palgrave Macmillan, 2010), pp. 96-112.
[5] David Henderson, ‘Psychiatric Evidence in Court’, British Medical Journal, vol. 2, no. 4983 (1956), p. 4.
[6] James K. Slater, ‘The Medical Man in the Witness Box’, Edinburgh Medical Journal, vol. 55, no. 10 (1948), p. 590.
[7] ‘Notes of Evidence in Jury Trial: Thomas Brennan V. The National Coal Board’, pp. 98-99.