Text and Genre in Reconstruction

Willard McCarty

8. Text as Algorithm and as Process1

Paul Eggert

1. Electronic Text and ’Text’

1Elsewhere in this volume Peter Robinson relates an anecdote from a lecture he gave in 2004 in which he surveyed his audience to discover how many of them had in the previous 12 months acquired an electronic book as opposed to other common digital products. Nearly everyone had done the latter, but only five percent the former. Everyone had bought a printed book. The expectations of the early 1990s about electronic texts and how they would change our reading habits had not materialised by 2004. E-books will succeed, Robinson concludes, only when they have a compelling advantage over their printed counterparts.

2What could this be in the case of scholarly editions? Despite considerable efforts on the part of many scholars around the world since the widespread adoption of the internet in the early 1990s, results have been at best modest. We cannot claim that electronic editions are an unqualified success. They have not swept the field. As Robinson notes, while music, film and photographs have not had to be fundamentally re-thought for the new medium in order to succeed, the book will have to be. Nowhere is this truer than in the case of the scholarly edition. That is why, I conclude, fundamental rather than purely technical questions have to be asked when we are considering the fate or future of the electronic scholarly edition.

3Some basic questions about the nature of written and printed texts have been asked by members of the encoding community as they have struggled to define what it is that they are encoding. This ought not to be surprising. The act of encoding texts for computer processing involves a blatant intervention in text-files of a kind that scholarly editors in the print domain are normally shielded from, however heroic their emendation of corrupted wording of a literary or biblical work may be. Traditionally they have treated many aspects of the physical presentation of text as irrelevant to their pursuit.2 However, this self-preserving instinct finds itself in a tighter corner in the electronic domain, where complete specification is crucial for computer processing.

4The different requirements of the electronic medium can help to throw new light on some of the enduring questions of what texts are and how they function. Recent commentary has been tending in this direction, bringing bibliography and some aspects of editorial theory to bear on electronic texts.3 My aim here is, accordingly, to inspect some of the recent text-encoding debate, and then the far-reaching proposals put forward by Jerome J. McGann in his provocative book Radiant Textuality (2001). I have some tough things to say. To get to that point I first offer a meditation on textuality, from which certain conclusions flow. At the fundamental level, textuality and electronic textuality, I believe, fold back into one. If this level of clarification can be achieved, then clarification of the continuing dilemmas in the computer representation of text should follow.4

5My aim in the second part of this paper will be to express what can now be described as a convergence in thinking by some pragmatic commentators on the future of electronic editions – people who have not been won over to McGann’s vision but who do see a way forward for an area of scholarly editorial endeavour that has not yet been unambiguously successful. I refer to Peter Shillingsburg, especially in From Gutenberg to Google (2006b); Peter Robinson and his soon-to-be-announced plans, already circulated in draft form, for a new direction for his endeavours in Chaucer and Greek New Testament editing; and some aspects of the DISCOVERY project funded by the European Union’s eContentplus scheme.5

6The argument here has been anticipated in Eggert 2005, where I develop the wider implications of the methodology and aims of an Australian editing experiment called Just In Time Markup (JITM). By 2002 the JITM project had implemented stand-off markup as part of a system to guarantee the authenticity of a text file. We then realised that stand-off markup provided something we had not been looking to achieve: a basis for ongoing collaborative interpretation.

2. Defining ’Text’

7In 1995 Alois Pichler declared that the aim of encoding must be ’to prepare from the original text another text so as to serve as accurately as possible certain interests in the text’, and he added: ’what we are going to represent, and how, is determined by our research interests... and not by a text which exists independently and which we are going to depict’ (1995: 691, 690). Allen Renear, who has been deeply involved in the TEI (Text Encoding Initiative) movement, rejected this claim as ’antirealist’ (1997). But his objection to Pichler’s argument is based on an under-problematised notion that text is or must be abidingly and objectively real, and that this condition demands encoding that is aimed at elucidating the object’s actual features.6

8The new light thrown upon textuality by developments in editorial theory is relevant to this text-encoding debate. Gone are the days when scholarly editors could safely invoke the authority of Sir Walter Greg’s ’Rationale of Copy-Text’ to justify a reading text established on the basis of final authorial intention (1950-51). The form of presentation – a single text, together with the rejected and variant forms recorded in the back of the book – was itself, in every edition, an enactment of an under-specified and narrow theory of textuality. It was both narrow (in that its quarry was a verbal text abstracted from the material forms that had carried it) and Neoplatonic (the approximation of an ideal) at the same time. By the late 1980s a newly self-conscious understanding of texts was being cultivated. Texts now emerged as always in process, as meaningful both in their verbal forms and their physical presentations, as anchored not only in authorship but also in the publishing process and in their successive readerships. Text now was recognised as having social, performative and artefactual dimensions that editors’ prior concentration upon its abstracted verbal form had not so much ignored (since they at least partially recorded it) as occluded.

9This realisation bears out Pichler’s comment negatively – that is, that we can only represent certain interests in a text – although he was thinking about text-encoding, not critical editions. The realisation also exposes the inadequacy of the realist position espoused by Renear. Texts are anything but self-identical: whatever their ongoing existence consists of, our view of them is perspectival. A phenomenology of text has replaced a simpler ontology. And yet, a phenomenology of multiple perspectives is not necessarily inconsistent with the belief that there is something abiding about texts. We know from experience that we wish, selectively, to absorb texts into our imaginative lives, just as, in the act of reading, we pour part of ourselves into them. We are dealing with something, a persisting something, but what is it? How do we define it?

10We know for one thing that we all live in bodies. Our reality is conditioned by this corporeal existence. These bodies of ours live in an analogue world, but one whose communications and many other functions are increasingly enabled, and extended in their reach and speed, by digital technologies. Books in their analogue format have a comforting familiarity. They sit nicely in the hand. Like cats, they’re up on our laps and we’ve started to handle them – tenderly, almost unawares, indeed we are still savouring the pleasure in store – even as our thinking minds are getting to work on the book’s contents. The stream of words and punctuation – the text – is what we have now begun to read: isn’t it?

11Certainly, traditional page designs cater to this assumption by aiming for layouts that are transparent. The best design is said to be the one we can’t see, that we look straight through to the content and never notice. It took some hundreds of years to achieve such designs. But of course the recent developments in editorial theory show that every reading is affected, consciously or not, by the page design, including the characteristics of the chosen font, the amount of leading and white space, the binding and paperstock, and more obviously the accompanying illustrations; or the competing matter beside a magazine serialisation being read week by week. All this needs to be considered before we get to the wider contexts of the reading: when it happens, to whom, for what purpose, under what conditions, with what history of prior reading.

12So now it becomes harder to define what a text is. But we can’t duck the question. We have to think about text, its material condition and its reception if we are to understand what it is that we are encoding when we say that we are encoding texts.

3. Text and Codes

13I go for regular walks up Mt Ainslie near my house in Canberra. It’s actually less heroic than it sounds, and so if friends are visiting I take them too. About halfway up there is a striking gum tree to the left of the path, which I have frequently looked at (see cover image). It has various markings on it. An American friend, with a bent for editorial theory, saw that I was serious when I stopped in front of the tree and asked him whether he was now looking at a text, or not. There in front of him was potentially a document, a textual carrier – the nearly white, virtually smooth bark of the tree – and there were, without doubt, brown squiggles on it, more or less in a vertical line, and conveniently at eye-height. He looked. Could they be in a Tai script, of which he was only vaguely aware? He had been to Thailand. Or was it conceivably in a sinuous Bengali script of which he had seen examples but could not read? He knew that Canberra is a multi-cultural city containing people from all parts of Asia and the Middle East. He took a while to declare that he had tried but he could not make any sense of it, and that in fact he doubted that it was a text, even though he could not explain how the markings came to be there. The fact, once explained, that these markings are the trail gouged by an insect burrowing under the bark that the Scribbly Gum later sheds, thus revealing the markings, clinched the matter. This was not a text. There was no human communicative intent.

14In contrast, consider the period up until 1799 when the Rosetta Stone was discovered. Egyptian hieroglyphic inscriptions were unreadable, yet there was agreement that they would probably have a meaning, if only the code could be broken. There was little doubt that the stone inscriptions in the tombs were texts of some kind due to their regularity and repetitiveness. This of course proved to be the case when (by the 1820s) the code was articulated. There was, after all, proven human communicative intent demonstrated by the use of an alphabetic, syllabic and pictographic code that the original inscribers had held in common. The markings had proved to be not just mindless repetitions but real inscriptions, and the inscribed stone had therefore proved to be a document. What we had, now, were texts.7

15The decisive change in status from natural or physical artefact to document, and vice versa in the case of the Scribbly Gum, occurs at the same time that it is decided that markings are or are not textual inscriptions. In other words, the documentary and the textual dimensions are interdependent. They are separable for purposes of discussion, but they are not separate. In a private communication, Mats Dahlström disagrees. He instances the case of the prisoner on death row who is given paper and ink on which at last to record his confession of the crime, but stolidly refuses to do so. The prisoner has created no text. But yet there is a document: the paper, Dahlström says.

16I would disagree. The assumption that there is one seems justified only because the context has set up the expectation of normal documentary-textual interdependence. The physical paper is about to achieve a documentary status, but it fails to happen. If the same unused sheet of paper were then turned into a paper aeroplane by the perversely silent criminal it would no longer be thought of as a document. So the objection only confirms what I contend: that the documentary and the textual dimensions are fundamentally interdependent.

17This fact is something we would have noticed long since as important did we not spend most of our reading lives assuming that we could essentially ignore the document. The basis of the document may be physical, it may be computational, or it may be the sound waves of orally declaimed verse: but in all cases there is a material condition for its newly declared status. Materiality is not a sufficient condition, for the documentary dimension is always in relation to the textual. Neither is self-identical, and both have their histories: the histories of writing and production, and the histories of reading. The two histories are intertwined.8

18The space that we nevertheless open up by distinguishing conceptually between the documentary and the textual allows a number of otherwise puzzling things to fall into place. The first is that the material document can be seen now as the basis of the persisting something that we know reading reveals to us – whether the text is screen-evanescent, a temporary visualisation, or whether it arises as we read from a document that has hardly altered in hundreds of years. What related conclusions may we draw? First, we note the space between, yet the intertwined nature of, the documentary and textual dimensions; and second (although I do not enlarge on it here), the central relevance, when considering texts, of agency and time.9

4. The Humpty Dumpty Approach to Text

19How can computer-encoding respond to these fundamentals? One way forward was proposed in 2001 by Jerome McGann in Radiant Textuality. Before I get to his ambitious proposal, which most commentators seem to have passed over, I have to deal with the aspect of his book that has dominated discussion so far: the Humpty Dumpty argument that words, and therefore texts, can mean whatever we want them to mean. A discussion of this ludic argument, which is conducted as a conversation between different voices animated by McGann, will finally point a way forward.

20David Hoover has given McGann’s Humpty Dumpty argument a mauling in ’Hot-Air Textuality: Literature after Jerome McGann’ (2005). McGann’s repeated rescanning of the same double-column document, a process which he describes in his book, resulted in textual variation: ’therefore,’ he concludes, ’the text is not self-identical since the machine produced somewhat different texts from the same document’ (cf. 2001: 144-6). Hoover shows, by repeatedly scanning a simpler document, that variation can be trivial. Thus for most practical purposes, he claims, texts are self-identical at least within a tightly defined readership (2005: 76).

21Hoover next turns to McGann’s account of a class on Keats’s ’Ode on a Grecian Urn’. McGann writes it, teasingly, in the form of a discussion between various characters with names such as Instruction, Printer’s Devil and Footnote. The student who claims that the phrase ’O Attic shape!’ refers to a ghostly shape in an attic, such as her grandmother’s attic, rather than to a Greek or Attic urn, is robustly defended. Although the student has deformed the poem’s meaning by reference to her own experience, isn’t this (McGann implies through one of the voices, but without fully committing himself) what always happens? The critic who brings historical information to bear on the reading to defend the poem against such subjective deformation also deforms it, but in a different way. This argument has upset traditional scholars.

22Hoover’s counter-argument is that literature does not need to be opened up (quoting one of McGann’s characters) ’in lots of new and interesting ways’. Radical forms of deformation are not worth pursuing. Rather, Hoover argues, ’interpretation requires new, interesting, and reasonable ways of constraining the wide array of possible meanings that literary texts typically make at least marginally possible’ (2005: 90). In a more recent article, ’The End of the Irrelevant Text: Electronic Texts, Linguistics and Literary Theory’ (2007), Hoover gets further onto the front foot. His argument reflects the remarkable growth in empirical resources now available to us that can aid, guide and check literary interpretation: bibliographic databases, text corpora, computational stylistics, reliable scholarly editions, and biographies of writers and detailed chronologies of their writings.

23It is easy to get hot under the collar about McGann’s proposals, except they are not quite his. He is only dramatising the dispute, giving voice – impiously, even wickedly, yet also sweetly and reasonably – to a normally repressed desire for a plenitude of meanings that anarchic students (such as, Dear Reader, we once were too?) must at least sometimes have felt whenever a teacher was determined to assert his or her interpretative authority on this or that line of a poem. ’It’s mine too, isn’t it?’ we muttered darkly to ourselves as our suggested interpretations were cast ignominiously aside.

24The alarmed response to McGann’s book is understandable. Hoover’s articles are more importantly indexing a general shift in the critical scene after the winding-down in energy of the Theory (the capital T Theory) movement since the late 1990s. But I think the actual importance of McGann’s book lies elsewhere. He is setting up a principle of reading as inevitably and unavoidably one of deformation, a principle that he needs to invoke later in the book as a counter-weight when he finally gets down to his serious proposals. They are what I wish to discuss now. It will become clear that my objection to McGann’s proposals takes a different form to the empirical ones of David Hoover.

5. Text as Algorithm

25McGann’s agenda was canvassed in a working paper published electronically (probably in 2000), ’Rethinking Textuality’, where he talks of the need ’to rethink the work’s textuality by consciously simulating its social reconstruction’. The computer game called IVANHOE, which he and Johanna Drucker developed to simulate textuality, is something I discuss later. The basic problem for encoding, McGann points out, is that literary texts are not like informational texts. They are inherently incommensurable. Poems are not just about a subject; they are also about their vehicle of transmission. They exploit capacities of sound, image, metaphor and movement; they get in behind the ratiocinating mind. Put another way: noise is part of their communication. It does not separate readily from signal. This is the conundrum faced by anyone who is sensitised to poetry but wants to specify the aspects of poems’ functioning susceptible to knowledge representation that the computer can deal with, and that would allow it, in a specific sense, to ’read’ the poem. This is the ultimate goal that McGann foresees, while recognising that there will always be limits (2001: 185).

26The term that McGann invented in 1991 to cover the meanings emerging from the physical instantiation of the linguistic text – what he calls the ’bibliographic code’ – seems custom-made for the computing environment, and certainly he takes on the challenge in Radiant Textuality. He hopes to set off a general effort to encode physical aspects of documents: he is eloquent on the subject of page-space (say, versus scroll-space or cave-space). He affords some hope that basic aspects of the mise-en-page that are below the level of our notice may after all be precisely specifiable and therefore rendered intelligible to the computer. Could we one day, then, have a machine called an OBR – an Optical Bibliographic Reader – exploiting new forms of digital pattern recognition?

27I am sceptical. It is not that I question whether advances in digital pattern recognition will be made. Advances are very likely. Rather, I question McGann’s notion of bibliographic code itself. The term has been taken up by a raft of editorial commentators and theorists but the attraction of it is, I believe, mainly rhetorical. If one is to be strict about the term, then there clearly is no such thing as bibliographic code. Dictionary definitions stress the systematic nature of codes: rigorously collected and arranged, as in legal codes; and the strictly defined substitution of words for other words, as in secret military codes. But the unpredictabilities of the gap between the physical features of a book and their meaning are poor conditions for the specification of a code. We can talk about the art of page design and book binding. Such work can be highly conscious and aimed at achieving particular aesthetic effects or even meanings: so that we could perhaps go so far as to claim the existence of a documentary or bibliographic semantics. But code is going further than the evidence permits. It would require a full-blown semiotics.10 It seems to me that there can be no specifiable and invariable meaning for any particular mise-en-page.

28Compare the criticism of paintings. Art critics sometimes refer to the visual ’vocabulary’ of a particular artist or movement, and sometimes profess to be ’reading’ paintings. In truth, these claims work only at a loose, metaphoric level. While paint or page designs involve the production of physical markings, neither invokes, in their physical appearance, a specifiable code that would allow the site to be duplicated without loss or change of meaning. With written or printed pages, then, to profess to be specifying the full material range of their possible significances – to our senses of sight, touch and smell – in order to turn them into a code would involve having to close the gap between the documentary and the textual, the gap between the material stimulus and the meaning for the reader. Given that there is no pre-existing code that can be drawn down for analysis, how is a ’code’ to be specified? Clearly, it is impossible.

29With what McGann calls the ’linguistic code’, the chances are far higher, since a socialised agreement about the use of alphabetic or other scripts and about the functioning of syntactic arrangements pre-exists both the writing and the reading of a text. The ’code’ is, in effect, drawn down by both writer and reader. It is the document’s supplément. It can be described, with varying degrees of success, structurally. It has commensurability. But meanings based on it notoriously vary, so although the conditions for the specification of this code are propitious, even here they are far from perfect.

30There is also a larger, philosophical idea that the claim of a specifiable bibliographic code is presupposing. McGann envisages that it should be possible to articulate the rules for the reading of a document – as he puts it, ’a set of protocols for negotiating the textual scene’ – so that a computer could read it (2001: 143). He claims that texts, because they are ’coded bibliographically and semantically’ should be seen as ’sets of rules (algorithms) for generating themselves’ (2001: 138). These rules are the linguistic and bibliographic codes. This is a breathtaking idea, but what does it imply? McGann realises that he is sailing close, here, to the holy grail of structural linguistics (2001: 151), only he wants to expand its purview to include the graphic (pre-semantic) markings in which the codings are embodied. McGann knows well that texts are not self-identical, and he says so several times in the book. But his new position on encoding is drawing him into the orbit of a transcendental idealism that would underwrite the continuing identity of any text. Indeed, he stresses it, relying on argument from 1960 of the Italian aesthetic philosopher Galvano della Volpe: ’As della Volpe shows, it [a ’true critical representation’] stands in a dialectical relationship to its object, which must always be a transcendental object so far as any act of critical perception is concerned.’11

31When McGann writes that ’A text is a display and a record of itself, a fulfilment of its own instructions’ (2001: 151), he is postulating the existence of bibliographic and linguistic systems that he hopes to see fully specified in ways that the computer can rapidly analyse when, say, presented with printed matter for scanning. A computer-algorithmic explanation of text is proto-structuralist; it requires no human participation. Yet such participation is crucial for the principle of deformation, which, as we have seen, McGann also maintains. By this principle he means that we can have a textual idealism while at the same time all our perspectives on that text can be different: ’they cannot be measured on a scale of equivalence’ to the object of encoding, he says. All representations of it are, therefore, a deformation.

32He hangs onto his earlier rhetoric of historical explanation of actual (what he calls ’determinate’) productions and readings. Yet he also, in Radiant Textuality, refers to ’fields of perception and systems of conception’ (2001: 178). Why ’systems’? The proto-structuralist explanation is working against the historical case. The effort starts to sound positivistic. Diachronic explanation collapses into the synchronic, and the idealism is not far away – as was also the case for Husserl, as Derrida famously pointed out (1973: 50). Reinvoking the transcendental ideal is too profound a philosophical step, or reversion, to be based on so little. In a sense the idea of system has been slipped in quietly to replace the human subject as the thing that underwrites and engages the transcendental ideal. There is, I believe, a more defensible model for textuality needed here.

6. A Model for Textuality

33When in 1994 I first criticised McGann’s idea of bibliographic and linguistic codes it was because I felt the idea vaporized the writerly and readerly witness of the document. It converted the documentary dimension instantly into the encoded meaning and thus made the role of individuals in relation to it more or less irrelevant (Eggert 1994, 22-4). I find that this objection remains, but with a significant caveat. Since McGann’s The Textual Condition of 1991, on which I was then commenting, he has found ways, both conceptually (via his principle of deformation) and in computer-assisted practice (in his IVANHOE game12), of incorporating readers’ dealings with texts. But, as I see it, the advance is essentially only additive – i.e. what we end up with is system plus dealings – which explains how his position can be both like Renear’s and Pichler’s, realist and anti-realist, at the same time. There is a dilemma here that McGann’s additive approach is trying to bridge: how can the text have a stable identity that can be encoded, and yet be always different?

34Once we remove the transcendental assumption that McGann invokes, once we recognise what editorial theory has been pointing to in richly different ways now for years – i.e. the diachronic lives of texts in our lives – then it becomes clear that text will always be a messy affair, that our knowledge of it will always be partial, and all the more intriguing for that. But, if so, what can we point to that sufficiently stabilises a text’s identity so that when we indulge in one of our culture’s primary and most productive games, discussing texts, we can be sure that we are not only or merely discussing ourselves?

35In 1998 I first adapted an idea of Adorno’s that a negative dialectic between the textual and documentary dimensions can be thought of as underwriting the continuing identity of works and as therefore eliminating the need for McGann’s or anyone else’s idealism.13 A negative dialectic has no synthesis. It describes an ongoing, antithetical but interdependent relationship. Document, taken as the material basis of text, has, by virtue of its physical or computational nature, a continuing history in relation to its productions and its readings; any new manifestation of the negative dialectic necessarily generates new sets of meanings. The work emerges only as a regulative idea, the container, as it were, of the continuing dialectic. The ongoing existence of document is enough to link all the textual processes that are carried out under the name of the work. And bibliography is a technology for describing and relating allied documents. What McGann calls a ’deformation’ is, from this point of view, simply another manifestation of text-in-process. And editions and text-encodings are only more excavatory and reconstructive forms of the same basic cultural dynamic.

36Historical-materialist approaches to text are typically diachronic whereas semiotic (or algorithmic) modes are typically synchronic. As modes of explanation they have little respect for one another, and while in process are typically intolerant of one another’s truth-telling claims. They tend to consume one another, to explain away one another’s capacities to explain. My objection to McGann’s argument is that he wants to be able to invoke both at the same time. But they never come together like that. They are constantly in real or potential conflict. As the philosophers say, they sublate one another over time; hence my invoking of the idea of a negative dialectic as a way of modelling explanations of textuality.

37In Radiant Textuality McGann (2001) seemed to be seeing the principle of deformation as the next step for literary criticism. He subsequently employed games theory to incorporate the reader into the textual field and to record the resulting interactions. A paper he gave at the conference of the Society for Textual Scholarship in New York in 2003 substantiated this development. It offered a form of modelling of what texts are and do, and in this sense it was a prospect of things yet to come.

38The computer game IVANHOE, which he developed earlier with Johanna Drucker at the University of Virginia, allows participants to roleplay within what he calls the discourse field of Scott’s novel. This is defined as including its production history and subsequent receptions; what editorial theorists, following the aesthetic philosopher Roman Ingarden in the 1930s, would call its life, including its textual evolution. None of the players of IVANHOE stands outside this life; all are role-players in it. Players must respond to new information claimed to be factual but which may, for the sake of the game, be duplicitous. Each move they make is played in the knowledge of the public (i.e. recorded) moves of all the other players; and the players keep a private log explaining each of their moves to which the computer itself has access and can make arbitrary moves to unsettle things.

39In Radiant Textuality, McGann stresses what he calls the quantum effects of being always self-conscious of one’s position within the game as a participant rather than an outside observer (2001).14 As I see it, the game models the cultural field in which all works participate. Reviewers normally write their reviews, and critics their later articles, in at least partial knowledge of the views already expressed about the work in question, and about, say, what the author said in interviews on radio or television. If parodies of the work spring up, they themselves assume an existing knowledge of the work, or at least its mediation by commentators. All are operating in the same discourse field, for the real world of texts is always in a state of dynamic process.15

40So this line of experimental modelling of text is potentially a fruitful one.16 But if my account of the negative dialectic between the documentary and textual dimensions of works is persuasive; if works are indeed in a process of continuous unfolding; if synchronic explanations of text are inevitably partial, then totalising or exhaustive schemes for text-encoding cannot be brought to fruition – although less ambitious schemes may, particularly if they accord with the model of textuality that I have described.

7. Stand-off Markup and Other Modest Advances

41Theodor (Ted) Nelson, the original theorist of hypertext, has long advocated the idea of external or stand-off markup rather than loading the text-file, as is typically done at present, with increasing amounts of interpretative markup subject to the same document-type definition (DTD) and therefore to the same hierarchy of content objects. Take XML17 files for instance. They mix data with data referring to the data and with data referring to itself, all in the same file. This is very disadvantageous for some applications. Stand-off markup, on the other hand, offers a way of data-modelling and enhancing a text from multiple points of view. There is the opportunity to proceed with encoding what we already know about texts and to accumulate new knowledge about them without the worry of overlapping hierarchies, since conflicting models need not be applied to the text simultaneously. This method leaves open the capacity to add layers and new kinds of interpretation that may emerge in the future. It also allows the signing of interpretative stand-off files by their creators, simplifying copyright concerns that the mixing of contributions within the one expanded file otherwise creates.
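
To make the contrast concrete, here is a minimal sketch of the stand-off principle. The file layout, tag names and offsets below are my own illustration, not the format of JITM, the TEI or any other project; the point is only that the base transcription is never rewritten, while separate, individually attributable layers of markup, which may freely overlap, are merged with it on demand.

```python
# A minimal sketch of stand-off markup (hypothetical layout and names).
# The base transcription is never altered; each interpretative layer lives
# in its own file and points into the text by character offsets.

base_text = "The quick brown fox jumps over the lazy dog."

# Inline markup embeds interpretation in the text itself, so every new
# layer produces a new state of the file:
inline = "The <animal rend='italic'>quick brown fox</animal> jumps over the lazy dog."

# Stand-off markup leaves the base text untouched. Two scholars can keep
# separate, signed annotation layers, even ones whose elements overlap
# (the 'overlapping hierarchies' problem), because the layers are only
# combined at the moment of display.
layer_a = [  # e.g. a descriptive layer by one scholar
    {"start": 4, "end": 19, "tag": "phrase", "author": "scholar A"},
]
layer_b = [  # e.g. a thematic layer that overlaps layer A's element
    {"start": 10, "end": 25, "tag": "animal-image", "author": "scholar B"},
]

def render(text, layer):
    """Apply one stand-off layer to the base text for display only."""
    out, pos = [], 0
    for ann in sorted(layer, key=lambda a: a["start"]):
        out.append(text[pos:ann["start"]])
        out.append(f'<{ann["tag"]}>{text[ann["start"]:ann["end"]]}</{ann["tag"]}>')
        pos = ann["end"]
    out.append(text[pos:])
    return "".join(out)

print(render(base_text, layer_a))
print(render(base_text, layer_b))  # the base file itself is never rewritten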

42While better solutions will probably emerge, we know already that the use of stand-off markup using a checksum algorithm can resolve the problem of guaranteeing the ongoing authenticity of any text-file that is undergoing interpretation and enhancement. In the standard paradigm – i.e. using in-line markup – every addition of markup to a text-file necessarily creates a new state of the text. Text-files can quickly become so heavily encoded as to be beyond human capacity to proof-read them.

43This is a problem for scholarly editions. Those who prepare them get jittery if asked to depend upon, without checking, the authenticity of newly processed text-files. Repeated human proof-reading is, strictly speaking, necessary because how can the editor know whether something has not accidentally been changed? Scholarly editors get jittery because they know from ample experience of manuscript and print production the normal fate of texts over time that are themselves undergoing repeated acts of copying. In the electronic environment, intervention, correction and enhancement bring with them new forms of this ancient fallibility.

44The use of stand-off markup external to the text-file, applied to the text upon the user’s call and incorporating an authenticating checksum algorithm in the act of incorporation, has emerged as one answer. Since the experimental JITM projects, mentioned above and reported in Berrie et al. 2003, Eggert 2005 and Berrie et al. 2006, there has been new interest in the potential of stand-off markup. In From Gutenberg to Google (2006b), Peter Shillingsburg assumes it as a given, and it helps him to strike out in new directions. Peter Robinson has informed me that he intends to give stand-off markup a significant role in foreshadowed technical developments for his e-editorial projects at his editorial institute in Birmingham. And as of late 2007 the programmers in DISCOVERY were giving the technique consideration as part of the likely development of a tagging tool aimed at collaborative interpretation of text-files within an RDF (Resource Description Framework) environment.
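
The checksum idea can be sketched in a few lines. The hashing scheme, file format and function names below are illustrative assumptions rather than the JITM implementation; what matters is that the stand-off file records a fingerprint of the transcription it was written against, and is merged only if that fingerprint still matches at the moment of incorporation.

```python
import hashlib
import json

def fingerprint(text: str) -> str:
    """Checksum of the base transcription (SHA-256 here; the actual
    algorithm used by JITM is not specified in this sketch)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def make_standoff_file(base_text, annotations, author):
    """A stand-off file records which transcription it authenticates,
    who wrote it (it could also be signed and dated), and the annotations."""
    return json.dumps({
        "authenticates": fingerprint(base_text),
        "author": author,
        "annotations": annotations,
    })

def apply_standoff(base_text, standoff_json):
    """Merge markup into the transcription only if the transcription is
    byte-for-byte the one the markup was written against."""
    layer = json.loads(standoff_json)
    if layer["authenticates"] != fingerprint(base_text):
        raise ValueError("transcription has changed: markup not applied")
    return layer["annotations"]  # merged for display, as in the earlier sketch

transcription = "It was a dark and stormy night."
layer = make_standoff_file(
    transcription, [{"start": 9, "end": 13, "tag": "adj"}], "scholar A")

apply_standoff(transcription, layer)          # succeeds: file is authentic
# apply_standoff(transcription + " ", layer)  # would raise: file has been altered
```

Because the transcription itself is never altered by annotation, a single round of proof-reading can stand behind any number of later interpretative layers.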

45I mention these developments only to point to what I see as the emergence of a less totalising alternative to the structural one that underlies one side of Jerome McGann’s thinking in Radiant Textuality. These developments lean, without toppling, towards the opposite, deformative side of his thinking. While one has to admit that all interpretation is an appropriation, a collaborative working environment for interpretation is surely preferable if it lets scholars hang on to what has been hard won: reliable transcriptions of the versions of literary works, and thoughtful rounds of emendation and interpretation. The interpretative files (or ’tagsets’) created by scholars need to be accessible as a gradually evolving tradition of commentary and scholarship. This methodological requirement, familiar from the print environment, will not go away because of a change in medium. Scholarly agreement and disagreement need to be explicitly enabled in an environment where the commentaries are themselves authenticated i.e. electronically signed and dated and thus essentially anchored in their own tradition and history, rather than being free-floating and changeable, able to be deformed by others at will.

46Of course, McGann’s deformation is in spirit a ludic methodology designed to open up critical and interpretative possibilities, so David Hoover’s censoriousness seems to me at moments to mistake its target. But his implicit question: ’Is life long enough to entertain open-ended possibilities?’ answers itself with a confident no. This is partly because scholarship is basically collaborative. It needs to be in relation to something shared. It is a conversation over time about an abiding something. Interpretation of that something calls out counter-interpretation. Error calls out correction. But unless all of the participants can continually refer to the documents that carry the texts under investigation, their remarks will pass one another by without ever meeting. To play the fool with the documentary-textual continuum definitely creates more instantiations of it, just as treating it seriously does. But some instantiations are going to be more productive, more enlightening than others. Finally, the criterion has to be pragmatic, and I mean this in C. S. Peirce’s sense of the word.

47If this is the modest direction that electronic-edition development is going to go, will the medium ever supersede the printed book? Even in its scholarly forms the book is reasonably compact, sometimes cheap, often expensive but not ruinously so. And it has developed ingenious ways of condensing multitudes of evidence in tables, footnotes and cross-referenced textual apparatus. The fact of looming publication brings out heroic efforts on the part of the scholarly editor to finalise and complete the complex task – to get it done – thus answering to an all-too-human desire and capacity. And, as Peter Robinson points out, the printed scholarly edition filters out the surplus of information typical of electronic editions to date, an overload that readers cannot deal with profitably anyway.

48Robinson and Shillingsburg have given the best answers to date as to the conditions under which the electronic edition will be able to supersede the printed book. Both concur with an argument of mine (Eggert 2005): that we will get to the new phase only when editors stop treating the electronic edition as something that they must keep jealously under their control, letting users consult but not re-build. Robinson’s projects have so far been in this mould. His recent system ANASTASIA gives impressive functionality in its engine room, and impressive displays on its interface. But editions based on it are hard to reissue in a revised form when errors are detected in so complex an array of cross-referencing files. Worse, the whole wonderful thing depends to a dangerous extent on the welfare of its creator. Everyone worries how long the editions that depend on ANASTASIA would remain fully functional should Robinson happen to go under the proverbial bus.

49Perhaps he worries too. His new emphasis is on distributed servers, each giving access to whatever materials relevant to an edition have been lodged on those servers prepared by various scholars or other individuals. Editions must henceforth be interactive, he says, and in this he joins hands with Shillingsburg. He imagines Web 2.0 (and presumably semantic-web) capabilities gradually learning to predict the reader’s needs, automatically finding, on other servers around the world, equivalent passages to, or relevant commentaries on, the lines of text that the reader is currently viewing. The hope is that, when relevant images, transcriptions, collations and commentaries automatically appear to gather themselves into our working or reading environment, editors will find the advantage compelling and will forsake the book. Readers will forsake it too, as soon as the web gives them a better experience. But we are not there yet, despite all the extravagant predictions about the imminent supersession of the book that were made in the early 1990s.

50Is there an archival problem from this scattering of textual resources that Robinson predicts? My sense is: not as long as computers exist. Bits are tenacious creatures, probably more so than books, and they are very easily transferred. The digital resources of which I speak will achieve a permanent, effectively archival foothold purely through their wide distribution and use. Regathering them at the moment of reading in a relevant and authenticated form, and allowing interested parties to make further enhancements, is the challenge.

51Common encoding standards and, as Robinson argues, agreed addressing protocols that ensure that exactly the same text fragment is being referenced, are required to make this happen (2007b).18 Of course we still lack many of the basic tools. But this vision of a common, interactive type of scholarship and readership that democratically puts the reader in a box-seat while also empowering the scholar to make and sign more expert editions, doing much of the discovery work for us, is a very attractive prospect.
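
What such an addressing protocol might look like can only be gestured at here. The scheme below (work, version, structural block, character range) and every name in it are hypothetical illustrations of the principle, not Robinson's actual proposal or TEI syntax; the point is simply that any server holding a copy of the same version should resolve the same address to exactly the same characters.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class FragmentAddress:
    """A hypothetical canonical address for a text fragment."""
    work: str              # e.g. "CanterburyTales"
    version: str           # a particular witness or edited text, e.g. "Hengwrt"
    block: str             # an agreed structural unit, e.g. "WifeOfBathsPrologue.line.1"
    start: int = 0         # character offsets within the block
    end: Optional[int] = None

    def __str__(self) -> str:
        span = f"@{self.start}-{self.end}" if self.end is not None else ""
        return f"{self.work}/{self.version}/{self.block}{span}"

addr = FragmentAddress("CanterburyTales", "Hengwrt", "WifeOfBathsPrologue.line.1", 0, 28)
print(addr)   # CanterburyTales/Hengwrt/WifeOfBathsPrologue.line.1@0-28
```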

52We should not fear the barbarians entering the gates of the scholarly city. They have usually got less arduous things to do anyway, and even when they do decide to interfere (as in contentious passages from books of the Bible whose wordings they object to) or even if they are empowered to make their own editions, my response is: Let them! Existing scholarly protocols of assessment and refereeing will doubtless be adapted to sort the electronic sheep from the goats. Authentication routines will keep the scholarly transcriptions safe, and when emendations are proposed by other scholars we should be able to provide keys that securely link the emending file to the target edited text-file, with permissions or refusals to emend built in. Stand-off interpretative files written for the original edited text should be able to be applied to the emended one once authentication routines or signed files with appropriate keys become easily available.19
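
The graceful degradation described in note 19 can also be sketched, on the assumption that each stand-off annotation carries a checksum of the individual text element it targets; the details below are my own illustration, not JITM's code.

```python
import hashlib

def element_hash(element_text: str) -> str:
    """Checksum of an individual text element (a paragraph, say)."""
    return hashlib.sha256(element_text.encode("utf-8")).hexdigest()

def annotate(paragraphs, index, note, author):
    """A stand-off annotation records a checksum of the element it targets."""
    return {"index": index, "element": element_hash(paragraphs[index]),
            "note": note, "author": author}

def apply_annotations(paragraphs, annotations):
    """Apply only annotations whose target element is unchanged; notes on
    emended elements quietly cease to authenticate (graceful degradation)."""
    valid = [a for a in annotations
             if element_hash(paragraphs[a["index"]]) == a["element"]]
    stale = [a for a in annotations if a not in valid]
    return valid, stale

paras = ["Call me Ishmael.", "Some years ago - never mind how long precisely."]
notes = [annotate(paras, 0, "famous opening", "scholar A"),
         annotate(paras, 1, "narrative delay", "scholar B")]

paras[1] = "Some years ago, never mind how long precisely."   # an emendation
valid, stale = apply_annotations(paras, notes)
# valid holds the note on paragraph 0; stale holds the note on the emended paragraph
```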

53Peter Shillingsburg’s emphasis on modularity is important here (2006b: 80-125). He usefully lists everything that a well-constructed and adequately populated electronic edition should contain: all of the textual, contextual and facsimile materials, the receptions and adaptations. It is perhaps best to think of this as a wish-list. To achieve it, the editorial team will in practice inhabit what I have called the ’work-site’ or what Shillingsburg calls the ’knowledge site’. Whatever we call it, the activities on that site will be the same: those of preparing, gathering, encoding, comparing, presenting and interpreting materials relevant to the work being edited. All this will be happening on a site on the internet. Equally, elsewhere, any number of other people not engaged in building the specific work-site could be seriously or playfully or merely curiously handling copies of the same files at the same time. The work-site will be merely a more focussed and serious arena using the same – or mostly the same – materials. I say mostly the same, because the dragon of copyright is always at the gates, limiting the textual work we might do, limiting the free play that readers might otherwise have.

54Since its editor-constructors will have inheritors or adapters or popular re-users, the work-site will, or ought to, consist of modules – materials and tools – that are individually reusable and repurposable, rather than forever locked together refusing entry to users. This imperative, about which Shillingsburg is persuasive, has to find a balance with the scholarly needs for textual authenticity, for precise citation of an ongoing tradition of interpretative commentary and, as Robinson points out, for precise addressing of the text fragments under discussion.

55As I see it, the work-site will grow gradually by accretion, not by means of a grand scheme to enunciate the complete range of linguistic and bibliographic codes. It is virtually certain that the capacities of the computer to learn from our inquiries will bring to our attention information and materials for which we would never have thought to look. The computer’s use of inferential logic, especially when assisted by formalised models and ontologies, and its capacity to compare files and to enable their collaborative enhancement, will multiply our knowledge. In these ways it will help us to build work-sites. It will allow us to understand works differently and anew. Its slow growth will reflect our gradual accumulation of knowledge about texts.

8. Conclusion

56How, finally, does this vision of the future sit with the theoretical relationship between the documentary and textual dimensions sketched above? The material basis of electronic texts is obviously their computing environment, just as that of printed texts is the printshop. The routines of both environments affect the storage, processing and generation of texts. We become aware of this only when we refuse to naturalise the format and environment, when we stop to think about them.

57I have referred easily and casually to texts being generated or visualised on screen. But in truth what we call a text on a computer screen is not a text: it is a computer artefact, an encoding and visualising of a binary flow of data, just as ink on a page is not a text till the material medium can be understood to be a document; or not, as in the case of the Scribbly Gum. For working purposes, because it simplifies matters, we normally agree to call a particular computer file visualised on screen a text. But as I hope I have shown, computers cannot strictly actualise texts – that is a human accomplishment – though they can process, manipulate and visualise bits with astonishing speed and often illuminating results, and they allow cheap worldwide distribution. For this, all book readers will eventually be grateful, even if they are not now.

Notes

1 The thinking in this paper has been stimulated by many conversations with my collaborators in the successive Just In Time Markup (JITM) projects at the Australian Scholarly Editions Centre (see http://www.unsw.edu.au/ASEC and http://www.unsw.adfa.edu.au/JITM). For reports, see Berrie et al. 2003; and Berrie et al. 2006. For a commentary on the wider meanings of the JITM projects, see Eggert 2005. I thank Peter Robinson for giving me access to three papers of his prior to their publication: ’Electronic Editions for Everyone’ (in the present volume), ’Current Directions in the Making of Digital Editions: Towards Interactive Editions’ (Robinson 2007b) and ’Documenting Texts and Text Sources for Exposure and Retrieval’ (Robinson 2008). I also thank De Montfort University, especially its Centre for Textual Scholarship, whose support made possible my lecture to the London Seminar in Digital Text and Scholarship in 2007, on which this essay is closely based.

2 Editorial self-preservation usually means that physical evidence is ignored or suppressed: cf. Eggert 2004, 162-4.

3 See for instance: Aarseth 1997, Dahlström 2000, Kirschenbaum 2001 and 2002, and Hayles 2001 and 2003. For a commentary, see Eggert 2005.

4 For the importance of modelling as a route to knowledge in humanities computing, see McCarty 2004.

5 For DISCOVERY, see Pichler and Lanestedt 2007 and, more generally, Hayward 2006.

6 Renear 1997, 117-24. My counter-argument is in Eggert 2005.

7 Cf. the 3rd-century BC Greek shorthand systems known as tachygraphy. They are yet to be deciphered despite the existence of a prayer in both normal handwriting and three different types of shorthand.

8 The first reader is the writer. At every stage of composition and revision, writers are reading what they just wrote, or wrote before. Typesetters, before they do anything else, are readers too, and obviously editors are; but so too are encoders of e-texts. All these people intervene between an earlier document to create the new document (printed or computer-processed) used by the readers. See further, Eggert 2009, chap. 10.

9 See further, Eggert 2009, chaps. 8-9.

10 If it ever were to be defined, C.S. Peirce’s semiotics might be the key to the advance: his account of the sign incorporates the interpretant of the sign into the semiotic transaction: see further, Eggert 2009, chap. 10.

11 McGann 2001: 173. Della Volpe (1895-1968): his principal work in aesthetic philosophy was Critica del gusto (1960), transl. Michael Caesar as Critique of Taste, London: NLB, 1978.

12 For literature on IVANHOE, see the articles cited at http://www.ivanhoegame.org/wordpress/?page_id=2 [accessed 26/02/2010].

13 Eggert 1998, further adapted in Eggert 2009, chap. 10.

14 Cf. Schreibman 2003.

15 For a case-study, see Eggert 2009, chapter 9.

16 The IVANHOE game, and also the virtual-reality environments called MOOs*, such as the one developed for the Romantic Circles website, MOOzymandias, may ultimately yield useful information about the ways in which readers process the physical qualities of books. See Fraistat and Jones 2003. *MOO stands for Multi-user dimension Object Oriented. See also Schreibman 2003.

17 Extensible Markup Language (a set of rules for encoding documents electronically).

18 In his unpublished paper, ’A Specification towards Distributed Editions’ (2007c), Robinson proposes some new TEI attributes and authoritative addressing protocols for different versions of the same work. The Functional Requirements for Bibliographic Records standard should assist with the addressing. FRBR defines a descending hierarchy of Work, Expression, Manifestation, Item, with adaptations treated as separate works but linked at the level of subject matter. FRBR’s first large-scale implementation was the AustLit database (http://www.austlit.edu.au): RDF and Topic Maps enmesh all instances of the fundamental concepts of agent (author, publisher etc.) and work into a spider’s web of relationships that themselves effectively define the agent or work rather than treating each one as a self-identical entity, robustly separate from all other agents and works.

19 JITM (the development of which ceased in 2005) was a step in that direction. Its system degrades gracefully. Markup written for text elements that are subsequently emended ceases to authenticate those text elements when applied to them, but the remainder continues to be functional: see citations in n. 1, above.

Author

Paul Eggert is an Australian Research Council professorial fellow, based at the University of New South Wales at ADFA in Canberra. He chairs the Board of the AustLit database, and has been involved in experimental electronic edition projects since the mid-1990s. He was founding general editor of the Academy Editions of Australian Literature (10 vols, 1996-2007). His edition, with Elizabeth Webby, of Rolf Boldrewood’s Robbery Under Arms appeared in 2006. He wrote Securing the Past: Conservation in Art, Architecture and Literature (2009).

CC-BY-NC-ND-4.0

Only the text may be used under the CC BY-NC-ND 4.0 licence. Unless otherwise indicated, all other elements (illustrations, imported supplementary files) are "All rights reserved".
