
Text and Genre in Reconstruction


Introduction

Willard McCarty


1. The Question in Principle

In his Alfred Korzybski Memorial Lecture, the great neurophysiologist Warren McCulloch relates a story from his youth. When in 1917 he entered Haverford College, a Quaker institution in the United States, Rufus Jones called him in and asked him about his intentions:

’Warren,’ said he, ’what is Thee going to be?’ And I said, ’I don’t know.’ ’And what is Thee going to do?’ And again I said, ’I have no idea; but there is one question I would like to answer. What is a number, that a man may know it, and a man, that he may know a number?’ He smiled and said, ’Friend, Thee will be busy as long as Thee lives.’ I have been, and that is what we are about. (McCulloch 1988/1960: 2)

Changing what needs to be changed in the above quotation, there emerges the central complex of questions that work in digital textual studies has been orbiting all these years: his ’we’ is us, the writers and readers of this book, and conjoined to (rather than substituted for) ’number’ is ’text’ (and so ’book’). It is this complex of questions that is asked here again by some of the leading scholars in the field. What is text, that we may read it in all its forms and genres, and find meaning in the statistical behaviour of its words? What are we, that we may find the marks on the pages of books intelligible and put them there so that others of our kind may read?

Asking such big questions and claiming, as is so often done, that the digital medium has fundamentally altered the conditions for asking them are both apt to give pause. Haven’t such questions always been asked or at least been implicit in scholarly work? Isn’t the role of the editor much the same as it was in ancient Alexandria (and perhaps in even earlier times) and then added to in the centuries which followed? Waters muddied by decades of hype and kept that way by constant demands for innovation make responding to such reasonable objections quite difficult. Claims of revolutionary effects are clearly not good enough; arguments, such as are offered here, are badly needed. But neither can the fact of such claiming be simply dismissed with some form of the Preacher’s sentence, that ’there is no new thing under the sun’. Change and continuity require each other to be meaningful; for both the meaning is in the detail. Reduction of text to data is a trade-off: manipulability, including quantification and other transformations, is gained; meaning, and with it ’context’ as a meaningful term, is lost. Effectively all would indeed be lost as far as the humanities are concerned if the change were one-way, the machine substituted for human intelligence. Nothing like that is the case for scholarship. Like other tools, computing augments it, gives it greater reach. Furthermore, because the computer is, as we will see, dynamically reconfigurable by design, it can in turn be augmented with new intelligence. Computing machines and scholarly intelligence change each other, recursively. A perfect illustration may be seen in John Burrows’ essay, first in this volume.

What can this recursive machine do with text that is worthy of your notice? Let me propose the following features which make a genuine difference. Chief among these is (1) the automation which brings the timescale of forbiddingly laborious tasks within normal human bounds. From this fact of temporal advantage the rest can be derived. In particular, (2) the capacity to store and retrieve amounts of text large enough to permit access to and processing of unread but relevant material gives us the automated digital library, which remains an objective of research. On the theoretical side is (3) the conceptual language, and ultimately software, which gives us a standard, communicable way of describing processes of interest to us and of testing the descriptions, then implementing and distributing them. In consequence of the rigours of using this language, which requires complete and explicit specification, there arises (4) the struggle to articulate what normally goes without saying in our editions and editing practices. The mutability if not instability of the digital medium results in (5) the strong tendency of scholarship produced with it toward the conversational, improvisational and experimental. Hence, finally, (6) the development of the world-wide communication network implied by the above, a necessity for the exchange of scholarship at a pace commensurate with experimental, often collaborative work. My principal claim is not about the reality of these features. That, I would suppose, is beyond dispute. Rather I claim that they make a genuine difference for two reasons: first, that nothing gets done if it is too laborious or time-consuming, and second, that beyond a certain level of complexity things begin to happen which could not be predicted logically, though we may foresee them. The essays in this volume exemplify and explore these differences actually made.

2. The Question Historically Considered

In the early days, when computing was rare within the humanities, it was deployed almost exclusively to take the place of long-established manual operations. The focus of the majority was on alleviating the burden of drudgery, reducing error and increasing the efficiency of scholars’ time.

A widespread fear of automation in the wider world and the deep worry of commentators that new means were obscuring humane ends were reflected among textual scholars by the curiously repeated and seemingly nervous reassurance that the purpose of the computer was not to replace but to support the humanist ’in the work which only he can accomplish’, as Franklin J. Pegues said in a review of the 1964 IBM Conference on Literary Data Processing (1965: 107). In a prescient article in the inaugural issue of Computers and the Humanities two years later, the literary critic Louis Milic, amongst other things, complained that ’satisfaction with such limited objectives denotes a real shortage of imagination among us. We are still not thinking of the computer as anything but a myriad of clerks or assistants in one convenient console’ (1966: 4).

As in artificial intelligence and machine translation, humanists began the 1960s with stirring visions and early successes only to plough into a morass of difficulties by mid-decade. Then began the characteristic cycle of sifting for that which we now call ’evidence of value’. In 1976 – to choose one example out of many – the Aquinas scholar Roberto Busa noted the ’rather poor performance’ of literary computing that had resulted from pursuing such limited objectives as Milic identified. To him the failure to do better pointed back to a profound ignorance of language, of ’what is in our mouths at every moment’, and so to the need for fundamental research.

Similarly, in an oft-cited article published two years later, Susan Wittig (1978) examined Margaret Masterman’s stirring vision of a ’telescope of the mind’ (Masterman 1962), observing how far short of it scholars had fallen. Like Jerome McGann more recently (2004b), she concluded that the fault lay with an utterly inadequate conception of text and recommended, like Busa, fundamental research into the question of what it is.

In the digital humanities ideas and machines interact asynchronously to deepen the fundamental problems, rather than solve them. While the revolution proclaimed for computing has turned out to be more a going around in circles than a liberation from the hard slog of scholarship, what matters for research is the nuclear bundle of questions that governs the orbital path. To be fair, the revolutionary path isn’t a closed circle either. Our accumulating body of work demonstrates that it’s more of a spiral. But paying attention to the forward-pointing axis means minding the questions at the centre.

3. The Contents

Seven of the nine essays collected here originated as papers delivered at the London Seminar in Digital Text and Scholarship from Autumn 2006 to Spring 2008. The remaining two essays were commissioned to complete the volume. Altogether the collection is arranged in two parts, the parts united by the question of text though divided by the perspectives they take on it.

The first part is analytic and microscopic, with a focus on text as a fundamentally probabilistic medium whose hidden devices the patient use of statistical tools is allowing us gradually to unravel, and so giving us new understanding of our relation to language. The second part is synthetic and macroscopic, concerned with how the digital medium affects, reflects and bodies forth ideas of textuality, and especially with its transforming potential in both scholarly and popular genres. In both parts, contributors probe how what we thought we safely knew or had in hand disintegrates when seen from the digital perspective, placing us scholars not merely in the position of witnesses and guessers but in the role of makers, for whom the emergent potentialities of the medium constitute essential information. As one of the authors, Alan Galey, points out, ’The digital humanities’ most productive response […] has been to ask “why speculate when we can prototype?” – that is, to regard the future of the book as something we create, not just observe and comment upon’ (p. 108). Scholars are becoming end-makers rather than mere end-users of digital tools.

In the first part of this volume we hear from both sides of the same question – from two literary scholars engaged with the empirical aspects of writing (Burrows and Lancashire) and a cognitive neurologist, trained in the Classics, who studies literature in English (Garrard). The humanities, we know, do not progress by turning the uncertain into the certain, rather the opposite. But quantitative, even scientific approaches to the study of text, as here, while they provide an ever firmer basis for investigation of literature’s relationship to the creatures we are, also pry open cans of wonderfully wriggly worms.

The essays of the second part place us imaginatively in the messy workshops, editorial workspaces and seminar rooms where experiments in the design of digital genres are taking place. We are made privy to the arguments, far from settled, indeed digitally unsettled, about what exactly it is that we think we are doing with texts. We are disabused of the silly but persistent notion that a solid, well-understood but obsolete physical object, the codex book, is being replaced ’real soon now’ by another not so solid, not so well-understood but fabulously better object, the e-book, electronic newspaper or, in the case of textual scholarship, the digital edition. We are brought up against not only undoubted change but also uncertain and highly contingent outcomes. We are, by the uncertainty of it all and by its dependence on human choice as well as historical accident, invited to participate in the shaping of the future. The last 60 years of work with digital text inform the arguments of the contributors to this volume and remind us how much goes into the cultural assimilation of technical inventions.

3.1. Analysis of Text

In ’Never Say Always Again’ John Burrows, the pre-eminent scholar of computational stylistics in the Anglophone world, reflects on ’the numbers game’ by presenting three case studies to illustrate his most recent methods for discriminating authorship. His title-word ’game’ is worth noting as a clue to his working method, which is experiment-like and seriously playful in its recursive alternation of statistical trials and literary-critical judgement. He says, without fanfare, ’that work by different authors, work in different genres, work of different eras, work in different national forms of English can all comprise statistically distinguishable groups’ (p. 28). It is difficult to overestimate the significance of this statement, which announces the probabilistic quality of literature. We know from research in natural language processing that probabilistic methods have proven highly successful in automatic treatment of ordinary human discourse (cf. Manning and Schütze 1999). However, we also know anecdotally and by studying linguistic corpora that such discourse is highly repetitive, and so we might be inclined to dismiss the success of probabilistic methods as trivial. But Burrows shows that the most artfully crafted prose, however much variatio sermonis may have been the author’s intention, yields to statistical methods at the deepest levels we know how to reach. Ian Hacking’s The Taming of Chance, which Burrows cites, begins by declaring with equally quiet authority that ’[t]he most decisive event of twentieth century physics has been the discovery that the world is not deterministic’, that its principles of order are stochastic (1990: 1). In Mind and Nature Gregory Bateson argues that culture is transmitted by ’a sort of hybrid or mix-up’ of replication and learning, and that learning ’gathers its solutions’ out of the random play of the world (2002: 45). In other words, in terms of our subject, literary texts emerge from this mix-up of mimesis and random opportunity, hence are accessible stochastically, and hence are real, as the physical world itself is. Burrows raises the troubling question of confidence – how much can we place in statistical analysis? Here is the beginning of an answer.

In ’Cybertextuality by the Numbers’ Ian Lancashire constructs a theory of authoring from a synthesis of cybernetics, writers’ self-testimony, cognitive psychology and computational text-analysis. The core of his argument, and a most valuable contribution to this volume, comes with his conclusion that authors, and so our species, have been able to overcome basic limitations of the human mind by means of writing. ’We have’, he says, ’unrelentingly developed both cognitive and mechanical technologies consciously so as to gain control of our making’ (p. 69). He uses computational models and tools to frame the problem of how writing happens and to provide a means of detecting evidence for the role it plays in human development. The cybernetic idea of the feedback loop, he argues, allows us to explain in detail how text is so much more than marks on the page. It is, among other things, an Engelbartian technology of augmentation, a creative extension beyond nature by means of art, and so creative of a new nature (Engelbart 1962). Nevertheless the phenomenology of tool-use as a whole, for example in the writings of Michael Polanyi (1969) and, more recently, Walter Vincenti (1990), is highly relevant and helps to connect cybertextuality with a broad range of work elsewhere.
Following the classical approach of physiological research – to investigate a function of the body by studying a relevant pathology – Peter Garrard describes how the loss of structure and organization in consequence of Alzheimer’s, reflected in degeneration of linguistic abilities, may be used to infer the nature of healthy cognition. As a case study he describes research into the possible effects of Alzheimer’s on the final novel of Iris Murdoch, Jackson’s Dilemma, which presents a rare opportunity to study textual pathology before the author herself could have been aware of its effects, and to compare the text against a large corpus of work very close to the author’s original manuscripts. This is, Lancashire notes, ’a uniquely important case study’ highlighting the modularity of mental language processing and, in this case, the working memory central to his study (p. 44). Garrard considers the criticisms and arguments surrounding Jackson’s Dilemma carefully, but he finds striking confirmations, both by a systematic top-down analysis from his hypotheses to the data and by a bottom-up, data-driven approach. In a nutshell, Garrard provides a fine instance, refreshingly clinical, of the fact that text embodies embodied thought.
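The statistical machinery behind such stylistics can be made concrete in a few lines. The sketch below, in Python, implements Burrows’ widely used ’Delta’ measure of stylistic difference: texts are profiled by the relative frequencies of the most common words, those frequencies are standardised as z-scores against a reference corpus, and the distance between two texts is the mean absolute difference of their z-scores. It is offered purely as an illustration of the kind of computation at issue, not as the specific procedure of any chapter here; the whitespace tokenisation and the thirty-word list are simplifying assumptions.

```python
# A minimal, illustrative sketch of Burrows' Delta. Not the procedure
# of any chapter in this volume; tokenisation and word-list size are
# deliberately simplified.

from collections import Counter
import math

def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in a text."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    return [counts[w] / len(tokens) for w in vocab]

def burrows_delta(corpus, text_a, text_b, n_words=30):
    """Mean absolute difference of z-scored frequencies of the
    n_words most frequent words in the reference corpus."""
    all_tokens = " ".join(corpus).lower().split()
    vocab = [w for w, _ in Counter(all_tokens).most_common(n_words)]
    profiles = [rel_freqs(t, vocab) for t in corpus]
    cols = list(zip(*profiles))
    # Corpus mean and standard deviation per word, guarding against
    # zero variance so the division below is always defined.
    means = [sum(c) / len(c) for c in cols]
    sds = [math.sqrt(sum((x - m) ** 2 for x in c) / len(c)) or 1e-9
           for c, m in zip(cols, means)]
    def z_scores(text):
        return [(f - m) / s
                for f, m, s in zip(rel_freqs(text, vocab), means, sds)]
    za, zb = z_scores(text_a), z_scores(text_b)
    return sum(abs(a - b) for a, b in zip(za, zb)) / len(vocab)
```

On this measure a disputed text that scores consistently lower Delta against one candidate author’s work than against others’ is the more likely to share that authorship – a probabilistic inference, never a proof, which is exactly where Burrows’ question of confidence bites.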

3.2. Synthesis of Textual Genres

The particular focus of this volume’s second half is the macroscopic or telescopic view from textual data to the forms we give them.

Alan Galey, in ’The Human Presence in Digital Artefacts’, argues that this view begins with the tensions ’between the surface orderliness of scholarly resources and the stubborn irregularity of textual materials’ (p. 93). These tensions are the daily concern of textual editors but not usually of their scholarly clientele, let alone the reading public. They reveal not only that any interface is cognitively thick and complex in proportion to the text it re-presents, but also that textual irregularities can never be completely modelled for or by computer processing. Models in the sense intended here, as elsewhere in the digital humanities, always simplify by omission of that which others may regard as important, and so are never all-encompassing. Galey shows that the technical concerns of design are inseparable from the irresolvable aesthetic, symbolic and hermeneutical dimensions of editorial work. Thus, he argues, there can be no definitive digital resource, no digital monument against time, not even in the sense of a single modelling device. Galey’s argument from these stubborn irregularities, from what text is, concludes in an invitation to us to become (as I am fond of saying) end-makers in the designing of digital genres. As the great Australian ethnographic historian Greg Dening used to insist (1998), the point is to think present-participally rather than nominally – of the future of textual editing as a communal process.

Edward Vanhoutte’s declared purpose in ’Defining Electronic Editions: A Historical and Functional Perspective’ is to propose a definition of what an electronic edition is, and to frame it in terms of work done to date. Against the background of the history of electronic textual editing, he discusses Peter Robinson’s model of cooperative, distributed editions and Peter Shillingsburg’s knowledge sites (both discussed in following chapters), Espen Ore’s self-sufficient archive and both Robinson’s and Jerome McGann’s models of the reproductive edition. Vanhoutte’s defining method for the electronic edition follows from an application of Espen Aarseth’s taxonomy of how texts are traversed. (Again, note the significant emphasis on readerly process rather than structural product.) The typologies inherited from editions in print do not suit the digital environment; Aarseth’s processual taxonomy offers a way forward. He speaks in combinatorial terms, of a set of interoperable tools the end-user would deploy to construct ’new genres of editions’. The question of what these genres might be reflects back on the question of what text is that allows it to be edited. The design of tools raises the question of operational primitives – or, less problematically, of commonplace operations discovered in practice, as (one suspects) most tools have been.

Peter Robinson’s practical work over many years itself constitutes the raw material for an historical study of how ideas for editing in the digital medium have developed. His chapter for this volume, ’A specification towards distributed editions’, thus represents as much experience with the conceivable alternatives as anyone could muster. Here he specifies what might be required to create the ’fluid, cooperative and distributed’ scholarly editions that many scholars, such as Peter Shillingsburg in the following essay, have advocated. He proposes specific mechanisms to label components of such an edition, outlines how these components should be held on distributed-edition servers and how software tools on the reader’s computer and on the server might interact. He sketches out the functionality readers and scholars require. In appendices he gives instances of how attributes of distributed editions may be used by various projects, describes the relations of components and discusses stand-off encoding, which Paul Eggert takes up in a following chapter. The manifest failure of the standalone ’e-book’ to replace the printed codex, as Robinson illustrates in an opening anecdote, and the manifest success of distributed online resources lend strong support to his argument.
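To make the architecture concrete, here is a hypothetical sketch, in Python, of the bare bones such a specification must address: stable labels for the components of an edition, servers that hold some of those components, and a reader-side tool that resolves a label by asking the servers it knows. Every name and the identifier scheme itself are invented for illustration; none is taken from Robinson’s chapter.

```python
# An invented illustration of the distributed-edition idea: labelled
# components, servers holding them, and reader-side resolution.

from dataclasses import dataclass

@dataclass(frozen=True)
class ComponentID:
    """A stable, globally unique label for one component of an edition."""
    edition: str    # e.g. "canterbury-tales" (hypothetical)
    component: str  # e.g. "prologue/transcript/hengwrt" (hypothetical)
    version: int

@dataclass
class Component:
    cid: ComponentID
    media_type: str  # e.g. "application/tei+xml"
    content: bytes

class EditionServer:
    """One node in a distributed edition, holding some of its components."""
    def __init__(self):
        self._store = {}

    def publish(self, component):
        self._store[component.cid] = component

    def fetch(self, cid):
        return self._store.get(cid)

def resolve(cid, servers):
    """A reader's tool asks each known server until one supplies the part."""
    for server in servers:
        component = server.fetch(cid)
        if component is not None:
            return component
    raise LookupError(f"no known server holds {cid}")
```

The value of stable labelling is that a reader’s software anywhere can assemble an edition from parts held by different projects – the cooperation the specification is meant to enable.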

In ’How Literary Works Exist: Implied, Represented and Interpreted’ Peter Shillingsburg writes as a digitally informed scholarly bibliographer and book historian with four decades of practical experience and theoretical reflection. His is a critical activist’s project to tease out the nature of textual existence and representation in order to address not so much a digital future for the book but the future of the book in a digital world. He begins, then, where one must – with the codex, recipient of nearly two millennia of creative attention. He sees that, on the one hand, speculations about and experiments with the tools are weak and rootless without detailed knowledge of the book as it has been; and that, on the other hand, textual editors face an unavoidable challenge to migrate their skills and concerns to the digital medium. He takes the incursion of digital representation into textual editing as an urgent opportunity for understanding the book as a physical object, medium of communication and locus of understanding. The ontological question of what text is, he reminds us, may in its abstract formulation turn us away from the prior question of how we in fact actually encounter text and what the form of the codex has shown itself capable of doing. (Here practice corrects theory and forces us to revise it for another go at the stubborn truth of things.) He concludes that we should acknowledge editing as an attempt to deal with complex materials in a wide variety of ways; that editing in the digital world should serve as a foundation to be maintained and extended; and that a large and future community of scholars can contribute to basic, ongoing editorial work communally.

In ’Text as Algorithm and as Process: A Critique’, Paul Eggert orbits the basic problem that complete explicitness and absolute consistency pose for representation and manipulation of cultural artefacts. The twin computational demand stirs up fundamental questions for the prospect of a digital edition, the central one being, he notes, what texts are and how they function. Since text-encoding is central to edition-making, at least now and for the foreseeable future, the imperative to ask this question is undeniable: every tag, however factual, signifies an interpretative intervention. ’We have to think about text, its material condition and its reception if we are to understand what it is that we are encoding when we say that we are encoding texts.’ He takes strong issue with Jerome McGann’s notion of the ’bibliographic code’, arguing that there is no such renderable system of signifiers: however useful the phrase may be as a metaphor for thinking about and discussing textual features, there is nothing computationally tractable behind it. (Here computational experience corrects theory and, as before, forces us to revise it for another go.) The full reality of text will always be elusive.

He asks, how can we stabilise this fluid, ever-changing reality so that we can discuss texts and not just ourselves? Since totalising schemes of encoding can never be implemented, stand-off markup (which Eggert and colleagues at the University of New South Wales have pioneered) seems the best answer; a schematic sketch of the stand-off idea follows at the end of this section. The strategy he recommends, then, is, like Shillingsburg’s and Robinson’s, communal, though the means of achieving it differ: an effective means is needed for coordinating the many possible versions of the common source of interest, the work. The resulting artefact, one might say, would resemble the ancient variorum commentary, with superior organizational capabilities and collaborative distribution of work but the same objective of progressive accumulation.

The volume ends with a bridging study which takes us from the struggles of scholarly editing in academia to the struggles of newspaper publishing in daily life. Marilyn Deegan and Kathryn Sutherland, in ’”I Read the News Today, Oh Boy!”: Newspaper publishing in the online world’, highlight the problem common throughout this volume: how the shift in media disintegrates everything concerned – understandings, behaviours, objects, institutions. Deegan and Sutherland chart ’a gradual decoupling of news from paper and print, with […] hybrid signs of both experiment and formal nostalgia’ along the way. Some reformations of old forms make obvious sense and find acceptance; others seem emotional curiosities. They chart the shift from mass collective identification via a product constructed by expert editorial teams to mass individuation of dynamically constructed units of what is individually taken to be news. They consider what is gained and what is lost, and how reading habits are reforming – the habits, one might note, of those who also read literary works and use textual editions. ’What has changed’, they conclude, ’is the scale and the fine-tuning of the newspaper’s functions as its economies and its implied reading culture shift from paper to screen and as its conceptual model sets a standard for the electronic delivery of other textual forms than those associated with the news.’
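As promised above, a schematic sketch of the stand-off principle, in Python. It is an invented illustration of the general idea, not Eggert’s actual system: the base text is held immutable, and each interpretation lives apart from it as a set of annotations keyed to character offsets, so that rival or overlapping readings – impossible to nest in a single inline hierarchy – can coexist over the one common text.

```python
# An invented illustration of stand-off markup: the base text is never
# altered; interpretations are kept apart from it as annotation sets
# of (start, end, label) character offsets.

BASE_TEXT = "It is a truth universally acknowledged"

# Two scholars' annotation sets over the same base text. They overlap,
# which a single inline XML hierarchy could not represent at once.
annotations_a = [(0, 2, "pronoun"), (8, 13, "noun")]
annotations_b = [(8, 38, "ironised-assertion")]

def render(text, annotations):
    """Apply one non-overlapping annotation set as an inline-tagged view."""
    parts, last = [], 0
    for start, end, label in sorted(annotations):
        parts.append(text[last:start])
        parts.append(f"<{label}>{text[start:end]}</{label}>")
        last = end
    parts.append(text[last:])
    return "".join(parts)

print(render(BASE_TEXT, annotations_a))
# -> <pronoun>It</pronoun> is a <noun>truth</noun> universally acknowledged
```

Because each annotation set can be applied, exchanged or versioned independently of the others, many scholars can work over the same text without contending for a single file – a small image of the communal, coordinated editing Eggert recommends.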

4. The Future

Scholarly writings in which the computer figures tend to remind us, indirectly if not directly, that, as mathematicians have also discovered, to compute is to intervene in the world and so to bring ideas and arguments up against stubborn actualities. We soon learn that our obviously meaningful texts are to a significant extent beyond the processing abilities of the best machines we can devise or seem likely to devise. From the rigorous perspective of programming languages all of what one wants to do must be completely and consistently spelled out. But when it comes to text, we learn, it cannot be and will not be. The puzzle from the readerly perspective of the scholar is more that these fundamentally mathematical machines are as effective as they are turning out to be: ’unreasonably effective’, as Eugene Wigner (1960), then Richard Hamming (1980), noted about mathematics itself in relation to the world we call real. In the days when most of the authors and the editor of this volume were imprinted by computing (as the OED says of social animals, brought to ’a state of habitual recognition of or trust in another’), user and computer were separated in space by a glass wall, input/output desk or other insuperable barrier, and in time by hours or days of waiting for one’s printout to be delivered. This is essentially the situation depicted in 1950 by Alan Turing for his famous test of machine intelligence (1950: 433-4) and by John Searle thirty years later for his equally famous Chinese Room argument (1980). Thus when computing, with the practical realities of its use, was compared with the codex as a new ’machine to think with’ (Richards 1924: 1), it did rather poorly. It proved to be at best something on the side of the main action, a useful auxiliary device for certain highly limited kinds of investigation (often called drudgery), and simply unable to match the referential subtlety of a well-crafted edition in print. This is not, however, what computing is now, and not the computing that the authors in this volume address. Progress, intruding into the humanities, has brought us to a new place from which to consider and redefine old problems.

