
Text and Genre in Reconstruction, ed. Willard McCarty

3. Textual Pathology

Peter Garrard


1. Introduction

1In very broad terms, the theme of this chapter is disruption of brain function and its effects on higher order linguistic structure. More specifically, I will outline the changes caused by a particular species of neurodegenerative pathology – Alzheimer’s disease – on the physical apparatus of the brain, the impact of these changes on the brain’s ability to execute the cognitive tasks involved in the production and comprehension of language, and the extent to which this functional disturbance is evident in the products of a particular form of linguistic output, namely the production of a literary text. If literary aesthetics is the study of sensory, emotional and intellectual beauty in literature, and neuroscience the study of how the brain does what it does, then here I open a backdoor on the question with which this volume is occupied by considering physical causes for defects of such beauty where we would most expect to find it.

2I shall begin by providing a few brief, orienting explanations of brain structure in health and disease before moving on to the infinitely more complex and controversial subject of how this inchoate mass of axons, dendrites, synapses and neurotransmitters instantiates a level of organisation that we perceive and experience as cognitive activity.

3Although a treatise on battlefield surgery, including detailed macroscopic descriptions of the cranial cavity and cerebral cortex, survives in a 17th century BC papyrus, and although Claudius Galenus of Pergamum (129-200 AD) experimented on the nervous systems of a number of different mammals, it is nonetheless neat, comforting, and if nothing else memorable, to consider the origins of modern neurology as synonymous with two Englishmen: Head and Brain. Sir Henry Head’s (1861-1940) experiments on cutaneous sensation were largely conducted on himself and a cadre of dedicated colleagues (patients’ subjective reports were considered too unreliable as a basis for scientific theorising). His legacy was taken up by Sir Walter Russell (later Lord) Brain (1895-1966), who wrote a textbook of neurology (Brain’s Diseases of the Nervous System) that is now in its eleventh edition (Donaghy 2001) and for many years edited the journal that might as well have been named after him. Sadly, this line of descent did not continue in the manner of little Lord Tangent (the only son of Lord and Lady Circumference in Evelyn Waugh’s Decline and Fall) with Viscount Lobe, and the Hon. Mrs. Sarah Bellum. Happily though, it also escaped that fictional family’s painful and ignominious extinction.

2. The Brain

4When removed from its protective bony casing, the brain of an adult human appears as a multi-lobed organ with a furrowed surface and has a volume of about 1.4 litres. It weighs around a kilogram and a half, which is on average approximately two percent of total body weight. The brain is a paired organ, consisting of two mirror-identical hemispheres. Slice through one of these hemispheres, and it becomes apparent that it consists of a surface layer (or cortex), overlying a deeper material (known as white matter), and that the ridges and furrows on the surface are a result of a large outer surface folding in order to fit within the rigid confines of the containing skull. In general, as you move away from the cortex into the deeper structures of the brain – midbrain, cerebellum, brainstem – you encounter structures critical to reflexive rather than ratiocinative activity: the cerebellum for maintaining balance and coordination; the upper brainstem for ensuring that eye movements are yoked together; the lower mediating involuntary protective or vegetative phenomena such as blinking, coughing and breathing. It is when these structures are disconnected from higher centres that one encounters the clinically ambiguous and philosophically difficult states such as brainstem death (in which all cognitive, reflexive and vegetative activity has ceased while the body lives on), the ’vegetative state’ (where there are brainstem reflexes without apparent cortical activity), or the ’locked in state’, in which cognitive activity is present but invisible because of complete muscular paralysis.

5Magnify up and you find that both white matter and cortex are composed predominantly of nerve cells (neurons), with a characteristic if variable morphology consisting of a cell body, from where the cell’s growth, metabolism and behaviour is controlled, a narrow process or axon, which makes contact with other neurons, and a complex of extensions known tautologously as the dendritic tree, with which the axons of other neurons make contact.

6In contrast to this structural complexity a single neuron is, in functional terms, a rather boring entity, being limited to only two states (active or inactive), two effects (excitation or inhibition), and one property (which I will call unit memory). When it is inactive, the inside and outside of a neuron are maintained in a state of electrical equilibrium, with the interior slightly negatively charged relative to the exterior. When activated, the direction of polarity rapidly changes; this change is propagated along the length of the cell until it reaches the axon terminus, which makes contact (synapses) with a dendrite of a neighbouring cell. At this point, the cell’s depolarised state causes the release of a chemical messenger (a neurotransmitter), which acts at the membrane of the neighbouring neuron. This chemical signal may be either excitatory (encouraging the neighbour into an activated state), or inhibitory (making the neighbour more resistant to activation). Any individual neuron will be subject to inputs from many thousands of axon termini, some of them sending excitatory signals, others inhibitory; and whether or not the second neuron becomes active therefore depends on the numerical sum of large numbers of positive excitatory or negative inhibitory signals received.
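The arithmetic here is simple enough to set out in a few lines of code. The following Python fragment is a toy illustration only – the weights, inputs and threshold are invented, and no claim is made to biophysical realism: a unit sums its positively and negatively weighted inputs and becomes active only if the net total crosses a threshold.

```python
# Toy illustration of a neuron-like unit: it sums excitatory (positive)
# and inhibitory (negative) inputs and becomes active only if the net
# signal exceeds a threshold. All values are arbitrary.

def unit_is_active(inputs, weights, threshold=0.5):
    """Return True if the weighted sum of inputs crosses the threshold."""
    net = sum(i * w for i, w in zip(inputs, weights))
    return net > threshold

# Three active presynaptic units: two excitatory, one inhibitory.
inputs = [1, 1, 1]
weights = [0.4, 0.3, -0.6]   # sign encodes excitation (+) or inhibition (-)

print(unit_is_active(inputs, weights))            # False: inhibition wins (net = 0.1)
print(unit_is_active(inputs, [0.4, 0.4, -0.1]))   # True: net = 0.7
```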

7The unit memory of an individual neuron, which enables it to alter in response to previous activity – such that it becomes permanently more or less likely to respond to similar stimuli in the future – is a property with a biological basis that can be demonstrated in laboratory preparations, and is known as ’long term potentiation’, or LTP (Bliss and Lomo 1973). LTP is also assumed to be the biological basis for the learning that we experience at the psychological level, which is held to depend on this sort of plasticity taking place in large-scale neural assemblies. Donald Hebb predicted all this in 1949, long before its biological basis was worked out:

When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased. (Hebb, 2002/1949)

8These days, Hebb’s insight is usually summarised by the more memorable phrase ’cells that fire together wire together’.

9Interspersed with neurons are a large number of other cells, collectively referred to as glia. Glia outnumber neurons by a factor of ten to one, and subserve a diverse range of functions: some simply act as structural support for neurons, some regulate the chemical balance of the internal environment, and others form components of the brain’s immune system. A particularly important species of glial cell ensheathes the axons of neurons in the deep white matter of the brain, forming a membrane that enhances transmission of electrochemical activity, as well as insulating it from interfering activity in its neighbours. Thus, the cells of the white matter are able to carry out their primary function: forming channels of communication between geographically distant brain regions. Cortical neurons are devoid of this insulating sheath, hence the colour difference on the cut slice; in the cortex, therefore, informational units combine, at both short and long range, to produce a complex system with an effectively limitless number of states.

3. Cognition and the Brain

10Unless we commit ourselves to the implausible doctrines of dualism (Popper and Eccles 1977), and claim that mental activity is different in kind from the physical changes that are occurring, from millisecond to millisecond, inside the brain, we are left with the question of how all this neurochemical activity gives rise to what we experience and see as cognitive activity and its products. There has been no serious scientific attempt to support the dualist position for over two decades, and it now seems clear that dualism is not so much a solution, as a reversal and a postponement of the problem: for if neural activity is not the basis of mental activity, then what is it for? To cool the blood, as Aristotle speculated? And if mental activity has a different basis, then why does it seem so difficult even to begin scientifically to characterise it?

11David Marr, the visual neurophysiologist, was the first to articulate the idea that the brain, or for that matter any informational system, is susceptible to description at different levels (Marr 1982). He identified these as, at the highest level, the goals of the system, followed by the methods used to achieve the goal, and the material means by which such a method is implemented. We might think of a digital clock, a clockwork timer and an hourglass as sharing a common goal (the division of time into equal portions), while exploiting the physical properties of a range of materials in order to achieve it. When applied to the brain, the top level clearly maps on to psychological constructs (memory, decision making, communication), while neural structures represent the means. To articulate a description of the brain’s methods – its algorithms – is a challenge that is synonymous with the modern discipline of cognitive neuroscience.

12To claim that the brain is a computer still prompts a sharp intake of breath from those with an aversion to over-simplistic analogies. Clearly the danger of any analogy is to take it too far and too literally, leaving room for the inevitable reductio ad absurdum (’if the brain is a computer, then where is the keyboard?’). Of course, for those who appreciate the power of analogy, even this imaginary piece of chicanery invites a very simple answer: a keyboard is a device for inputting information to a desktop computer, and therefore maps neatly on to the brain’s sense organs. That such an explanation is unlikely to be considered a satisfactory response by the analogyphobic is hard to comprehend: I have never heard anybody object to the assertion that the heart is ’a pump’ or the kidney ’a filter’. The unexceptionable claim of cognitive neuroscience, however, is not so much that the brain is ’a computer’, or even ’like a computer’ but that a computer and a brain (and their physical components) are examples of the same general kind of thing; both are devices for representing, storing and manipulating information. The analogy-averse are perhaps responding, over-sensitively, to the equally self-evident fact that one is infinitely more advanced, complex, powerful and versatile at accomplishing these goals than the other.

13So if we accept that the brain is implementing its goals in the matrix of billions of boring, interconnected single units that I began by describing, then the next questions to arise are, first, what sort of work does it do, and secondly, how does that work result in the goals being accomplished? A series of constraints, moral and technical, mean that we can only investigate these fundamental questions scientifically in an indirect manner, though there are experimental approaches, some of which exploit techniques for in vivo visualisation of brain activity while it works towards different cognitive goals. The best known of these techniques is functional magnetic resonance imaging, or fMRI, which produces maps of regional changes in blood flow (a surrogate marker of local neural activity) during performance of a well-defined cognitive task. Another approach is to rely on the occurrence, through accident or disease, of distortions in cognitive activity following damage or disruption to brain function, and then to try to correlate the physical properties of the damaged region with the details of the functional deficit (cognitive neuropsychology).

14A third approach is to try to simulate the achievement of a goal using computational units with similar properties to those employed by the brain. This endeavour is referred to by a variety of different names, including ’connectionist modelling’, ’parallel distributed processing’ and ’neural network theory’: all excellently descriptive names for the same simple idea. Take an informational unit which, like a neuron, can be either active or inactive, and allow it to influence, in a positive or negative way, any number of similar units on to which its activity extends. Set the resulting network in motion, and watch activation propagate through it. Make the connections between units modifiable, such that those that are subjected to large amounts of activation become more sensitive to similar activation in the future, and vice versa. Leave such a network alone, and it will eventually settle into an unchanging steady state. However, it can also be given a target state to achieve, and provided with regular feedback on whether it is getting closer to or further from this target. If this feedback process is accompanied by small incremental changes in the responsivity of its units, then the target state will eventually be reached.
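The feedback procedure just described can be sketched in a few lines of Python. What follows is a deliberately minimal illustration – a single layer of modifiable connections adjusted by the so-called delta rule, with all numbers chosen arbitrarily – rather than a model of any particular brain system.

```python
import random

# A minimal network: two input units fully connected to two output units.
# Connection weights are adjusted in small increments so that the output
# pattern approaches a target state, as described in the text.

random.seed(0)
weights = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(2)]
input_pattern = [1.0, 0.0]
target_state = [1.0, 0.0]          # the state the network should settle into
learning_rate = 0.1                # size of each incremental change

def forward(inp):
    return [sum(w * x for w, x in zip(row, inp)) for row in weights]

for step in range(100):
    output = forward(input_pattern)
    # Feedback: how far is each output unit from its target?
    errors = [t - o for t, o in zip(target_state, output)]
    # Nudge each connection a little in the direction that reduces the error.
    for j, row in enumerate(weights):
        for i in range(len(row)):
            row[i] += learning_rate * errors[j] * input_pattern[i]

print([round(o, 3) for o in forward(input_pattern)])   # close to [1.0, 0.0]
```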

15To appreciate that a similar incremental process may underpin some forms of biological learning, think of the process of learning to throw a basketball through a hoop: if the ball falls short, or overshoots, we adjust our technique accordingly, and after many such sessions we find our initial throws more accurate than they were when we started out. Thus, learning (a task or skill) is at least one goal that can be instantiated in a network of simple units with modifiable connections.

16What about more complex and abstract cognitive activity, such as the ability to generalise or draw inferences? We can think of this in terms of the essential cognitive goal of forming generic concepts from experience of only specific instances. For instance, seeing Rex, Rover, and Snowy in the park, hearing them referred to as ’dogs’, and then correctly deciding that Lassie (whom we have never seen before) is a member of the same category. There are various ways of accomplishing such a goal: one would be to learn, in addition to the defining characteristics of Rex, Rover and Snowy, the necessary and sufficient conditions for membership of the class of things to which they belong. Another is to store descriptive representations of each example, and to look for similarities between the stored representations and anything novel. It should be self-evident that the first of these two strategies is not employed in this instance (though it is in others, such as the ability to distinguish between a square and a rectangle): for any set of necessary and sufficient characteristics of a dog that you might give me (fur, legs, ears, a bark, etc.) I could provide an example of a dog that was missing at least one of them, but which you would be forced to concede was still a dog. The second strategy is undoubtedly more flexible, though also risks giving rise to widely differing definitions.

Figure 1: Simplified connectionist network consisting of three layers of active (filled) or inactive (unfilled) units, and weighted connections. By modifying its weights in response to error, the network can ’learn’ to associate each of the input patterns (a-e) with activity in either of the two units in the output layer: a form of categorisation task.

17The problem can be modelled in a network of simple units, in which the task to be learned is the assignment of a series of inputs to one of two categories (see Figure 1). It turns out that what the network comes to represent is an average, or prototype, of these various patterns; patterns sufficiently close to the prototype (whether or not the network has been trained on them) are assigned to the category, while those that are not are rejected. So the network not only learns, it generalises to new materials. Better still, if a similar task is given to human subjects, most will claim to recognise a prototype as familiar even if they had never previously been exposed to it (Whittlesea 2002).
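A toy version of such a network, in the spirit of Figure 1 though not a reconstruction of it, can be written in a few lines of Python: the network is trained only on distorted exemplars of two invented prototype patterns, yet ends by assigning the prototypes themselves – which it has never seen – to the correct categories.

```python
import random

random.seed(1)

# Two invented 6-element prototype patterns, one per category.
prototypes = {0: [1, 1, 1, 0, 0, 0], 1: [0, 0, 0, 1, 1, 1]}

def distort(pattern, flips=1):
    """Create a training exemplar by flipping a few elements of a prototype."""
    p = pattern[:]
    for i in random.sample(range(len(p)), flips):
        p[i] = 1 - p[i]
    return p

# Training set: distorted exemplars only; the prototypes are never shown.
training = [(distort(proto), label)
            for label, proto in prototypes.items() for _ in range(20)]

# One modifiable weight per (input unit, output unit) pair, plus a bias.
weights = [[0.0] * 6 for _ in range(2)]
biases = [0.0, 0.0]
rate = 0.05

def activations(x):
    return [sum(w * xi for w, xi in zip(weights[j], x)) + biases[j] for j in range(2)]

for _ in range(50):                      # repeated sweeps through the exemplars
    for x, label in training:
        target = [1.0 if j == label else 0.0 for j in range(2)]
        out = activations(x)
        for j in range(2):
            err = target[j] - out[j]     # feedback on each output unit
            biases[j] += rate * err
            for i in range(6):
                weights[j][i] += rate * err * x[i]

# The unseen prototypes are nonetheless assigned to the right category.
for label, proto in prototypes.items():
    out = activations(proto)
    print(label, out.index(max(out)))    # each prototype lands in its own category
```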

4. Neurodegeneration

18The point of the above was to provide a sense of how cognitive activity might be thought of as being instantiated in the intact cerebral cortex. It is not difficult to imagine that, when the structure of the cortex becomes disrupted – when all the long and short range connections that have built up to underpin learning, memory, language and reason, start to break down – the brain becomes less proficient at carrying out these tasks. This loss of structure and organisation can have a number of causes, but by far the commonest is the onset and progression of Alzheimer’s disease.

19For reasons that are poorly understood, Alzheimer’s disease begins when one of the proteins made by the brain (a protein whose function is still incompletely understood) undergoes a conformational change, becomes insoluble, aggregates, and accumulates inside and between cortical neurons, disrupting function both in the neuron itself and at the synapse. In common with many complex systems, a degree of redundancy is built into the high-level organisation of the brain, enabling it to function as normal even when a proportion of its structural components have become damaged or lost. There follows a further phase, during which functional disruption takes place but is so mild that the patient and his/her associates are unaware of its true nature. These phases are illustrated in Figure 2 and can usefully be referred to as the preclinical and prediagnostic. The length of the prediagnostic phase depends on a number of variables, including the sufferer’s willingness to seek help, and the diagnostic capabilities of the doctor, but the phase typically lasts between six and twelve months. The length of the preclinical phase is more difficult to determine, though it has been argued to extend for many years or even decades (Ohm et al. 1995). I will return to this graph and this point towards the end of the chapter.

Figure 2: Hypothetical relationship between neuronal loss (x-axis) and cognitive function (y-axis), in a neurodegenerative disorder such as AD. Cognitive function remains intact in the face of early degeneration (preclinical phase), and undetectably deficient thereafter (prediagnostic phase) until sufficiently obvious to allow a diagnosis to be made.

5. Effects of Neurodegenerative Pathology on Language

20Once a diagnosis of Alzheimer’s disease has been established there is (at present) no way back, and accumulating damage brings about increasing disruption to various cognitive abilities. In Alzheimer’s disease the first ability to suffer is usually the acquisition and retention of new information (usually referred to as ’episodic memory’). As more and more of the brain succumbs, however, the neural circuits required for maintenance of attention, visual discrimination, and language also begin to undergo degeneration, resulting in progressive functional decline in all these abilities. Since language is the theme of this chapter, let us look briefly at how these difficulties manifest themselves.

21One of the earliest difficulties encountered by patients with Alzheimer’s disease is one of word-finding (an inability to call to mind the correct word to describe a concept one wishes to convey). This is a familiar – it might even be claimed universal – experience, and it almost certainly becomes commoner with increasing age. There are not, as far as I am aware, empirical data to support the latter assertion, and even if there were, it would not necessarily imply that incipient degenerative change was responsible, since there are both demand and supply side changes to cognitive activity at different stages of life. By way of a personal anecdote, I was recently asked to give my opinion on a patient in front of an audience of clinical colleagues, and was forced to stop in mid-sentence by a sudden inability to produce the word ’confabulate’, despite having used it in an aside to a colleague five minutes earlier. I am sure that this experience has been shared by many, if not all, readers.

Figure 3: Sample stimuli used in a naming test. This straightforward technique may detect difficulties with word retrieval and verbal output early in the course of dementia.

22Everyone is familiar with the so-called ’tip-of-the-tongue’ phenomenon. In fact it is so widespread that scientists, including a group at University College London, have used ’tip-of-the-tongue induction’ as a technique for examining the brain’s word production system (Vigliocco et al. 1997). The subject is provided with definitions of low-frequency words, and asked to produce the term thus defined. For example: [what is the word for] ’a bittersweet longing for things, persons, or situations of the past’; and ’a navigational instrument for measuring the angular elevation of the sun or a star above the horizon’. This seemingly easy task becomes far from straightforward when the subject is put under time pressure, leading to the frequent ’tip-of-the-tongue’ states for words such as nostalgia and sextant.

23I mention this here principally to sharpen the contrast between normal, everyday word production difficulties and the profound, clearly pathological problems that are seen in patients with Alzheimer’s disease and other forms of dementia. For such patients retrieval of even simple vocabulary items becomes a daily and progressive problem. Even in the early stages, patients will exhibit marked difficulty producing common, concrete nominal terms in response to pictures such as those in Figure 3. Later, they may even fail in the considerably simpler task of selecting the correct referent from an array when the item is spoken by the experimenter.

Figure 4: Percentage of subjects successfully naming various items. Subjects’ ability to name pictures of items is strongly influenced by how familiar they are with the to-be-named item.

24Moreover, there is typically a complete overlap between items that cannot be visually matched in this way and those that cannot be named. There is also a clear and graded relationship between the quality of the naming response (i.e. how close the subject came to producing the correct word), and the same subject’s ability to produce a list of the item’s features (Garrard et al. 2005). Also, if we arrange the test items in terms of how frequent or familiar they are likely to have been in the lives of an average 60 or 70 year old, it is those at the more unfamiliar end of the spectrum that present the greatest difficulty. See Figure 4.

25A somewhat more naturalistic technique for studying the phenomenon of language breakdown systematically is to give the subject a task that requires the production of continuous speech: this could be the recounting of a familiar story (the story of Cinderella is typically employed), or a story implicit in a picture. For historical reasons, the picture that is most often used is the ’cookie theft scene’ from the Boston Diagnostic Aphasia Examination, first published by Harold Goodglass and Edith Kaplan in 1972 (Goodglass 2001). The picture shows a kitchen scene with various evolving events represented, such as an overflowing sink, a daydreaming housewife, and a boy falling off a stool as he reaches into a jar labelled ’cookies’ behind her back.

26Here is what a typical cognitively normal 60 or 70 year old said when asked to describe this scene:

It looks like a very chaotic situation. Children are trying to pinch cookies. They are pinching them but he is falling off a stool. The sink is overflowing whilst mummy is drying the dishes. She seems to be ignoring the children. Looks like a very tidy kitchen. There is a garden outside. The girl is putting her finger to her lips saying Shh! so that mum will not hear. Mum seems to be absorbed in something else while she is at the sink while the water runs over. Very difficult to understand that. Her foot is in water.

27This is a fluently produced account, narrative in intent, and organised into structurally acceptable sentences, economical in its use of words, which constitute a good selection of distinct nouns and verbs. Here, by contrast, is the output of a patient with established dementia:

The little girl is looking up to her brother. She holds up her left hand and puts her other hand into her mouth to help him. The boy has picked up…a cookie or something…. says so on the jar. Going to give it to the girl balancing in a way. The girl is just holding a plate and various pieces of …well… something useful. Standing at a window. Whether the window is open is not quite clear to me. The thing where the water is running out. The girl doesn’t bother. The window is open. Plate and two cups. House outside changed it movement.

28Note that this subject utters roughly the same number of words. Despite one or two pauses the rate of production is similar. There is a hint of syntactic disintegration (subtly in ’looking up to her brother’, more obviously in the fragmentary final sentence). What is perhaps most striking, however, is the impoverished variety of vocabulary used: both daughter and mother are referred to as ’the girl;’ the word ’tap’ is replaced by a circumlocution; the sentence structure is shorter and simpler; there is little sense of narrative – only of piecemeal description.

29Most neuropsychologists working in the field of language breakdown in Alzheimer’s disease have come to the conclusion that these data imply a breakdown in the representation of word and object meaning in the degenerating brain: in neuropsychological terms this would be referred to as a disintegration of ’semantic memory’ (Garrard et al. 1997). It is assumed further that this faculty forms part of a long-term memory system accrued over a lifetime of experience with the world and its contents, and represented in the brain in the form of a distributed network of information.

6. The Iris Murdoch Project

30The patient who produced the second of the two ’Cookie Theft’ descriptions was Iris Murdoch, one of the most acclaimed writers of the twentieth century. Between 1954 and 1995 she published 26 novels, as well as several volumes of poetry and (mainly philosophical) non-fiction. She died in 1999, with a much-publicised diagnosis of Alzheimer’s disease, a diagnosis which was later proven at post mortem. She wrote her final novel Jackson’s Dilemma (Murdoch 1995) a year or two before the diagnosis was made, but during this time, subtle evidence of cognitive difficulty was beginning to emerge. It seems clear, in other words, that Jackson’s Dilemma was written during the prediagnostic period (see Figure 2, above).

31Henry James’s famous comment that a writer is ’ […] present on every page of every book from which he sought so assiduously to eliminate himself’ is reflected in the critics’ assessments. Of her debut novel, Under the Net, published in 1954, Kingsley Amis wrote that it revealed a ’brilliant talent’. Praise was lavished on her eighteenth novel The Sea, The Sea when it came out in 1978, and was endorsed by the award of that year’s Booker Prize. But the reception, in 1995, of her final work of fiction was altogether different. Many critics tried to hide their lack of enthusiasm under a cloak of respect. Others were little short of insulting, including one who compared the book to the work of ’a thirteen year-old schoolgirl who doesn’t get out enough’ (quoted in Porlock 1995).


32An obvious and inescapable question, therefore, is whether the distinctive quality of Jackson’s Dilemma reflected the effects that incipient Alzheimer’s disease was exerting on its author’s linguistic – and a fortiori literary – abilities. The reasons why such a question may be both possible and interesting to answer are worth spelling out: first, it is rare indeed to have the opportunity to examine cognitive processes in any detail during this intriguing prediagnostic phase; it is rarer still to be able to do so retrospectively, enabling the products of cognition to be examined while the subject is still blithely unaware of any problems. Moreover, the existence of twenty-five previous similar works provides a within-patient control sample of exceptional size and quality. Add to this what is known about Iris Murdoch’s highly individual approach to writing: she would carefully work out characters and plots for up to eight months before spending six months writing out the book in longhand (she never used a typewriter, let alone a wordprocessor). There is no evidence that she agonised over choice of words, indulged in repeated revisions of passages, or made extensive use of a dictionary or thesaurus (in fact she neither needed nor owned such aids1). Work in progress would occupy the pages of a large pad, which she carried around and added to whenever and wherever she found herself unoccupied (for instance, while waiting for a train, or visiting her husband in hospital where he was recovering from a broken leg) (Bayley 1999). Her publishers would be sent longhand manuscripts (and often complained that they could not read her handwriting), but she eschewed any editorial interference (Wilson 2003). In other words, what we get from the published texts represents only a minimal change from the words that were first committed to paper during the initial act of creation.

33A second, more speculative question might be to ask what the study of texts produced under conditions of cognitive impairment can reveal about the process of literary creativity. The idea that there may be a ’neuroscience of literature’ is likely to sound at best far-fetched, at worst heretical. Yet there is a burgeoning scientific literature on the effects of frontal lobe damage on creative potential in the visual arts (Miller and Hou 2004). In addition, the stylistic changes in the output of Willem de Kooning, whose paintings became diminishingly abstract as his Alzheimer’s disease progressed to its more advanced stages, are well documented (Espinel 1996). Such observations of the effects of brain damage on creativity in the visual sphere have given rise to hypotheses about the roles (both positive and negative) of specific cognitive processes – such as self-monitoring, inhibition, emotional regulation, and abstract ideation – in creative expression. Similar observations in the literary domain would lend justification to more generalised theoretical models, and perhaps also to an empirical approach to the neural basis of the artistic temperament.

34There is an opposing line of argument that the distinctive quality of Jackson’s Dilemma is the result, not of prediagnostic Alzheimer’s disease, but of a deliberate and ground-breaking experimentation with novelistic form: that the apparent lapses in maintaining a consistent authorial point of view that we see in Jackson’s Dilemma are deliberate and subtle artistic touches; the strangely inverse relationship between Jackson’s prominence in the narrative and his importance to the advancement of the plot, the lack of anything in his fictional life that can genuinely be termed a dilemma, and his relative anonymity as a character, all reflect a deliberate decision on the part of the author to ’privilege the peripheral over the central’ (Todd 2001). But in the best traditions of getting one’s retaliation in first, I would point out that, in respect of the work on which I now report, the book was analysed at the linguistic rather than stylistic level, on the basis of a priori hypotheses derived from two decades of study of language breakdown in Alzheimer’s disease.

35True, Iris Murdoch may conceivably have set out to write in a stylistically innovative fashion, but it seems scarcely plausible to suppose that the end point of this exploration should map on to language breakdown in Alzheimer’s disease across a range of different dimensions. Alternatively, just suppose that Murdoch had, with tragic foresight, set out to write as if she was suffering from Alzheimer’s disease, turning Jackson’s Dilemma into the dementia analogue of Mark Haddon’s The Curious Incident of the Dog in the Night Time (2003), whose fictional narrator carries a diagnosis of Asperger’s syndrome. Although the latter has been praised in the literary world for its credibility as well as originality, such enthusiasm was not shared by those with special insight into the mind of autistic individuals.2 For these reasons it will be fascinating, if or when an Alzheimer’s disease equivalent does appear, to compare its linguistic characteristics to those which Karalyn Patterson, John Hodges and I discovered when we subjected the text of Jackson’s Dilemma to a series of systematic analyses (Garrard 2005).

36We began, naturally enough, by selecting the works for comparison with Jackson’s Dilemma, and decided to use one from the early period, together with a further work from the 1970s, which was the most prolific and highly acclaimed period of Murdoch’s writing career. Under the Net (1954), The Sea, The Sea (1978), and Jackson’s Dilemma (1995) are stylistically diverse works: the early work is energetically comic, while the middle and late novels rarely stray into light-hearted territory; Under the Net and The Sea, The Sea are both written as first person narratives, while Jackson’s Dilemma is written (most of the time) from the point of view of an independent and omniscient narrator. Jackson’s Dilemma and Under the Net are both approximately half the length of The Sea, The Sea, though the latter is divided into significantly fewer chapters (8, compared with 20 for Under the Net and 13 for Jackson’s Dilemma). Clearly, these variables do not correlate with one another across this particular subset of books; nor, more importantly, do they vary in any systematic way across Murdoch’s works as a whole.


37Next, dialogue was eliminated from the text to be analysed. The rationale behind this decision was that the choice of vocabulary in such passages is likely to vary, and may be atypical, depending on the character portrayed. The resulting data were converted to word lists and frequency counts using Rob Watt’s ’Concordance’ software.3
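For readers who wish to experiment, the same two steps – stripping quoted dialogue and tabulating word types and tokens – are easily approximated in Python. The sketch below is a simplification of what was actually done: the passage is invented, and ’dialogue’ is crudely defined as anything between quotation marks. It is offered only to make the procedure concrete.

```python
import re
from collections import Counter

def strip_dialogue(text):
    """Crudely remove quoted speech: anything between matching quotation marks.
    (A simplification of the manual editing actually performed.)"""
    return re.sub(r'[“"‘].*?[”"’]', ' ', text, flags=re.DOTALL)

def word_frequencies(text):
    """Lower-cased word tokens and their counts."""
    return Counter(re.findall(r"[a-zA-Z]+(?:'[a-z]+)?", text.lower()))

# An invented passage standing in for a digitised novel.
passage = ('Benet walked down to the river. “Where is Jackson?” said Benet. '
           'The river was dark and the garden was silent.')

narrative = strip_dialogue(passage)
counts = word_frequencies(narrative)
print(counts.most_common(5))                 # the commonest word types
print(len(counts), sum(counts.values()))     # number of types vs. number of tokens
```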

38What about the hypotheses? We saw from the examples given above that word production, albeit in the somewhat unnatural context of picture naming, correlates strongly with the frequency and familiarity of the word to be produced. Moreover, in a large scale regression analysis of such naming responses given by Alzheimer’s patients, lexical frequency (the number of times-per-million that a given word typically enters spoken or written language) is strongly predictive of success, while word length has no such effect: few patients were able to produce the name ’elk’ appropriately, but most could recognise and name an elephant.


39Using the Medical Research Council’s online psycholinguistic database4 we were able to match up to 80% of the vocabulary used in all three books with recorded values of written frequency. Word length was obviously obtainable for all words. Comparing mean word length across the three novels, there was a wide range of values in all three works, and considerable overlap between them. In contrast, when we compared lexical frequency, there was a clear and consistent pattern of difference, indicating a higher mean frequency in Jackson’s Dilemma than in either of the two earlier works: the precise analogue of the frequency effect in naming (high frequency items preserved, low frequency items lost) that we had predicted (see Figure 5).
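The matching step amounts to looking each word up in a table of frequency norms and averaging over the words that are found. The following sketch uses a handful of invented words and frequency values in place of the MRC database, purely to illustrate the calculation; unmatched words are simply dropped, which is why coverage fell short of 100%.

```python
from statistics import mean

# Toy stand-ins: in the real analysis the token lists came from the novels and
# the frequency values from the MRC psycholinguistic database.
norms = {"the": 68000.0, "sea": 95.0, "dog": 75.0, "melancholy": 8.0,
         "nostalgia": 4.0, "sextant": 1.0}

def mean_frequency(tokens, norms):
    """Mean written frequency of the tokens that have a recorded norm,
    plus the proportion of tokens that could be matched."""
    matched = [norms[t] for t in tokens if t in norms]
    return mean(matched), len(matched) / len(tokens)

early_sample = ["the", "sea", "melancholy", "sextant", "quodlibet"]  # last word unmatched
late_sample = ["the", "dog", "sea", "the", "dog"]

for name, tokens in [("early sample", early_sample), ("late sample", late_sample)]:
    freq, cov = mean_frequency(tokens, norms)
    print(f"{name}: mean frequency {freq:.1f} per million, {cov:.0%} of tokens matched")
```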

40As I noted when discussing the Cookie Theft picture description earlier, deficiencies in spontaneous speech production are much more notable for their lexical and semantic than their syntactic properties. It is much more difficult to quantify syntactic integrity, and there is huge variation within the cognitively normal population. Moreover, even if we believe Pinker’s notion that the ability to use syntax is some kind of linguistic universal, whose possession may even be genetically determined (Pinker 1994), derangement of syntax can manifest in a rich variety of ways. One can look, for instance, at the relative use of different common parts of speech: in our study, the proportions of these did not vary, using a chi-square test, between the three books, when either word tokens or unique word types were considered. Other measures, such as grammatical complexity, cannot reliably be automated and are therefore not suitable for large-scale enterprises such as this, though sentence length correlates strongly with complexity, and is often taken as a surrogate measure of it. Here we saw a hint of change: not only a slight shortening of the mean sentence length in Jackson’s Dilemma compared with The Sea, The Sea, but also a shift in the opposite direction between the first and mid-career novels.
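The part-of-speech comparison reduces to a chi-square test on a contingency table of counts. The sketch below uses invented counts and the scipy library simply to show the mechanics; it is not the analysis as originally run.

```python
from scipy.stats import chi2_contingency

# Invented counts of word tokens by part of speech in three novels
# (rows: novels; columns: nouns, verbs, adjectives, function words).
counts = [
    [2400, 1800, 900, 4900],    # Under the Net
    [4810, 3590, 1805, 9795],   # The Sea, The Sea
    [2395, 1805, 895, 4905],    # Jackson's Dilemma
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant p-value (as with these invented counts) indicates that the
# proportions of the different parts of speech do not differ reliably across books.
```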

Figure 5: Comparative mean word frequency (usages per million words).

41So far so good, but my instinct is that, to exponents of digital stylometrics at any rate, I might be taken to task for relying too heavily on top-down approaches. So it is heartening to mention that perhaps the most striking finding of the study came from a more data-driven analysis.

42At intervals of 10,000 word tokens, we plotted the cumulative numbers of different word types. Figure 6, below, provides a graphic illustration of the rate of introduction of novel word forms as the three works progress. English is a language with over a quarter of a million words, and Iris Murdoch possessed a wide repertoire of foreign and technical usages, so we would not expect such an author easily to exhaust her vocabulary in the course of a 100,000 word novel. These slopes suggest that Under the Net is characterised by a dynamic use of vocabulary from beginning to end, and that this quality is even more prominent in The Sea, The Sea (the effect, one assumes, of twenty-five years’ experience of creative writing). Compare the slope plotted for Jackson’s Dilemma, with its gentler take-off, and earlier and more pronounced flattening out, suggesting a much lower limit to the vocabulary remaining available for use.

Figure 6: Cumulative word-type counts at 10,000-word intervals in three novels of Iris Murdoch, from early (Under the Net), middle (The Sea, The Sea) and late (Jackson’s Dilemma) periods. The flattened rate of increase in the late book implies increased recycling of previously used words, presumably due to a restricted available vocabulary.
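The cumulative type count itself is straightforward to compute. The following Python sketch, run here on randomly generated stand-ins rather than on the novels themselves, shows how a text that keeps introducing new word types and one that recycles a restricted vocabulary produce curves of the contrasting shapes seen in Figure 6.

```python
import random

def cumulative_type_counts(tokens, interval=10_000):
    """Number of distinct word types seen after each successive block of tokens."""
    seen, counts = set(), []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok.lower())
        if i % interval == 0:
            counts.append(len(seen))
    return counts

# Illustrative only: 100,000-token 'texts' drawn from a large and a small vocabulary.
random.seed(0)
rich = [f"word{random.randrange(20_000)}" for _ in range(100_000)]
restricted = [f"word{random.randrange(4_000)}" for _ in range(100_000)]

print(cumulative_type_counts(rich))        # keeps climbing throughout
print(cumulative_type_counts(restricted))  # flattens early, as in the late novel
```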

43What we do not yet know is whether these three observations, consistent though they are, indicate a chaotically fluctuating range of values, or a consistent trend towards greater stylistic and grammatical sophistication, towards a wider and lower frequency vocabulary, the trend propelled forward by maturity, experience, confidence and success, and then backwards by Alzheimer’s disease into decline. We both suspect and hope that the latter is the case, because if it turns out to be so, then the timing of this decline will provide a unique window on that elusive preclinical phase of Alzheimer’s disease.


44Iris Murdoch’s steady and prolific literary output over four and a half decades presents a unique opportunity to try to answer these questions, though the enterprise will be vast and time-consuming. However, thanks to the patience and dedication of a series of students and assistants who have been associated with my research group,5 we are beginning to make inroads, and I would like to spend the remainder of this chapter presenting some preliminary data from these further (unpublished) analyses.

45Starting out as we did with three novels, we felt it an acceptable investment of time to acquire fully digitised versions of each. The acquisition period also gave us time to think about what methods of analysis would be appropriate to the dataset. But a longer wait, while the remaining twenty-three books were scanned and proof-read, could not be justified. So Arnab Majumdar and I decided to concentrate on the final two decades of Iris Murdoch’s career; that is to say, on the eight novels that she wrote from 1978 to 1995: The Sea, The Sea (1978); Nuns and Soldiers (1980); The Philosopher’s Pupil (1983); The Good Apprentice (1985); The Book and the Brotherhood (1988); The Message to the Planet (1990); The Green Knight (1994); and Jackson’s Dilemma (1995). Our aim was to trace back the changes that had been found in the three-book comparison, looking for a point at which this change first began to emerge. If such a point could be shown to exist it would be reasonable to suppose that it represents the earliest changes of Alzheimer’s disease, and perhaps even a marker for the elusive preclinical phase of the disease. Alternatively, a continuous trend may be identified from the time The Sea, The Sea was written, suggesting that the preclinical phase of the disease stretched back over many decades.

46Since the findings of the three-book comparison were, in most cases, statistically very robust, we felt justified in using a random sampling approach to save time. We therefore used sequential random numbers to select pages, lines, and words in each text for further analysis. Two hundred words were sampled from each text using this method, and psycholinguistic variables examined as before. We looked first at the ratios of content to function words (which can be considered a hybrid measure of a lexical and syntactic character). This ratio remained at a level of around 0.5 throughout this late creative period. A flat line also characterised the imageability scores (i.e. how concrete or abstract the words tended to be – another powerful predictor of word retrieval in dementia (Bird et al. 2000)); in other words, there was no indication at all that Iris Murdoch’s vocabulary became more concrete towards the end of her life. In fact it was, once again, only with lexical frequency that we found any convincing evidence of change. As illustrated in Figure 7, the mean lexical frequency of the random word sample remained at a consistent level until the publication of The Book and the Brotherhood in 1990. At this point, there is a marked increase in the mean lexical frequency of the sampled words, a trend that continues to be reflected in the penultimate novel. Between The Green Knight and Jackson’s Dilemma, however, the trend shows a partial reversal.

Figure 7: Mean values of lexical frequency for randomly sampled text taken from all of Iris Murdoch’s novels written between 1978 and 1995.
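For the curious, the sampling and the content/function calculation can be sketched as follows. This is a simplification: we sampled by randomly chosen page, line and word position in the printed texts and classified words by hand, whereas the toy version below samples token positions from a short invented text and uses a small, incomplete list of function words.

```python
import random
import re

# An incomplete, hypothetical list of closed-class (function) words;
# everything not in the list counts here as a content word.
FUNCTION_WORDS = {"the", "a", "an", "and", "but", "of", "to", "in", "on", "she",
                  "he", "it", "was", "is", "that", "her", "his", "with", "at"}

def sample_words(text, n=200, seed=0):
    """Draw n word tokens from randomly chosen positions in a text."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    rng = random.Random(seed)
    return [tokens[i] for i in sorted(rng.sample(range(len(tokens)), n))]

def content_function_ratio(words):
    """Ratio of content words to function words in a sample."""
    content = sum(1 for w in words if w not in FUNCTION_WORDS)
    function = len(words) - content
    return content / function if function else float("inf")

# Invented text standing in for a digitised novel.
toy_text = "Edward walked slowly to the sea and watched the grey waves. " * 40
sample = sample_words(toy_text, n=200)
print(round(content_function_ratio(sample), 2))
```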

47What might be the significance of this time series? The most uninteresting explanation might be that the differences between books simply reflect the range of variability to be expected from random samples of 200 words, a possibility that we were not able to rule out on the basis of tests of statistical significance. Further, similar analyses are therefore warranted, based either on entire novels or on larger random samples. From the pattern of change, however (i.e. an Alzheimer’s disease-like marked increase in frequency beginning at a plausible time-frame relative to the author’s eventual illness), it seems likely that the data are highly informative. If genuine, the observed differences index the onset of Alzheimer-like cognitive changes between ten and thirteen years before death. Attribution of the components of this lead-time to preclinical and prediagnostic stages is a more speculative enterprise, but it is interesting to note that the sharp reversal in the trend towards a higher frequency vocabulary associated with Jackson’s Dilemma appears roughly to coincide with the emergence of what were later recognised as early signs of dysfunction and may therefore represent a compensatory effort on the author’s part. It may be that compensatory behaviour – particularly if unconscious and automatic – marks the onset of this stage of disease progression in other patients, and/or across other cognitive domains.

7. Hypotheses, Methods and Future Directions

48Where to go next with this text corpus, and more generally with this investigative enterprise? One of the most rewarding aspects of the media blitz that followed the appearance of our work online in December 2004 was the interest that it aroused in other research fields. Through contact with the Centre for Iris Murdoch Studies at Kingston University I have since discovered a wealth of written material – letters, annotations, notebooks – spanning the period of cognitive decline, which simply cry out to be systematically evaluated. Contact with the Centre for Computing in the Humanities at King’s College London opened my eyes to the thoroughly data-driven analyses which John Burrows (2007), among others, has used in authorship studies. Now that a digital archive of the majority of the Murdoch novels is available, I hope to be able to implement these techniques with a number of aims, including replication of our findings from the original study, and focusing on the elusive preclinical period and the temporal locus of change. Consistent and statistically robust patterns of change from these analyses would raise the possibility of finding similar effects in other bodies of text, such as letters and diary entries, whose timing can be pinpointed more accurately. Equally important to this enterprise will be other writers and historical figures who suffered from dementing illnesses late in life, analysis of whose work may yield results of importance to literary history and authorship, as well as to the understanding of the cognitive effects of Alzheimer’s disease. One body of work in particular – the verbatim transcriptions of Commons and Lords debates (Hansard) – may well turn out to demonstrate linguistic changes referable to the presymptomatic and preclinical periods of the Alzheimer’s disease that was eventually diagnosed in one of the most fascinating and enigmatic political figures of our times. If so, then one of the most enduring mysteries of British political history (the reasons behind Wilson’s sudden and unforeseen departure from the office of Prime Minister) may come a step closer to being solved.
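One example of such a data-driven measure is Burrows’ ’Delta’, which compares texts on the z-scored relative frequencies of their most frequent words. The sketch below, with invented miniature ’texts’, is intended only to show the shape of the calculation, not to reproduce Burrows’ own implementation.

```python
from collections import Counter
from statistics import mean, pstdev

def relative_freqs(tokens, vocab):
    counts = Counter(tokens)
    total = len(tokens)
    return [counts[w] / total for w in vocab]

def burrows_delta(corpus, test_tokens, n_words=30):
    """Burrows' Delta between a test text and each text in a reference corpus,
    computed over the n most frequent words of the corpus as a whole."""
    all_tokens = [t for tokens in corpus.values() for t in tokens]
    vocab = [w for w, _ in Counter(all_tokens).most_common(n_words)]
    profiles = {name: relative_freqs(tokens, vocab) for name, tokens in corpus.items()}
    # z-score each word's frequency across the reference texts
    means = [mean(p[i] for p in profiles.values()) for i in range(len(vocab))]
    sds = [pstdev([p[i] for p in profiles.values()]) or 1e-9 for i in range(len(vocab))]
    test = relative_freqs(test_tokens, vocab)
    deltas = {}
    for name, p in profiles.items():
        z_ref = [(p[i] - means[i]) / sds[i] for i in range(len(vocab))]
        z_test = [(test[i] - means[i]) / sds[i] for i in range(len(vocab))]
        deltas[name] = mean(abs(a - b) for a, b in zip(z_ref, z_test))
    return deltas

# Toy corpus: the smaller the Delta, the more similar the word-usage profile.
corpus = {
    "novel_A": ("the sea was grey and the sky was grey and she watched " * 50).split(),
    "novel_B": ("he ran to the house and he ran to the gate quickly " * 50).split(),
}
test = ("the sea and the sky were grey and she watched the waves " * 50).split()
print(burrows_delta(corpus, test))
```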

Notes

1 John Bayley (personal communication).

2 http://iautistic.com/autism-myths-the-curious-incident-of-the-dog-in-the-nighttime.php [accessed 10/12/09].

3 www.concordancesoftware.co.uk

4 www.psy.uwa.edu.au/mrcdatabase/uwa_mrc.htm [accessed 10/12/09].

5 For the record: Lisa Maloney, Dr. Arnab Majumdar, Helen Gould and Thurza Honey.

