5. Defining Electronic Editions: A Historical and Functional Perspective
1. Introduction
Since Peter Robinson published the first electronic edition in a series designed to accommodate all of Chaucer’s Canterbury Tales, he has been theorizing and writing about the nature and definition of electronic editions. Along with Jerome McGann and others, he has been at the centre of the debate on electronic textual editing since he began work on the collation and textual criticism of Icelandic manuscripts, for which he developed programs collectively called Collate. Because of his accessible papers on the subject of electronic textual editing, his software for automatic collation and his models realized in commercially available editions, Robinson has been the logical starting point for many a scholar wanting an introduction to the concept of electronic textual editing.1 However, rather than looking for a general definition of an electronic edition, Robinson has consistently been specific about what kind of editions he wants to produce, first of the Old Norse Svipdagsmál, later of Chaucer’s Canterbury Tales and Dante’s Commedia, and more recently of the Greek New Testament. Since he admitted that he was mistaken to abandon the single (edited) text in the edition of The Wife of Bath’s Prologue (Robinson 1996c) in favour of a set of different views of the text, he has moved from advocating the reader’s freedom of choice among many texts, to recognizing the function of the one text, to looking for the ideal model of an electronic edition and its functions. Currently he advocates ’fluid, co-operative and distributed editions’ that are strongly interactive and that are indebted to Peter Shillingsburg’s concept of ’knowledge sites’ (Robinson 2003a: 125, 2007c). At the same time, Robinson’s ideas of ’electronic editions for everyone’ (2007b, ch. 6 in this volume) correspond with Shillingsburg’s concept of the convenient and the practical edition that must bridge both the theoretical and practical differences between textual and literary critics (2005). This concept recalls Fredson Bowers’ idea of the ’practical edition’ (1969).
Shillingsburg’s and Robinson’s ideas for distributed editions do not, however, provide a general model for electronic editions or a generally applicable and stable interface. These ideas may well be suited for classical, medieval, and Victorian Anglo-American textual traditions and may well respond to the needs of the broad communities interested in them. They seem unlikely to be successful for editors of texts from smaller traditions, such as the modern Dutch and Flemish. Editors of texts from such traditions work for an audience of only a few interested academics and a small reading public who for the most part want simply to read texts from printed books. The idea of the active involvement of a computer-literate and critical community with a knowledge site built around a modern Dutch or Flemish text is but an idle fantasy.
It is tempting to advocate a standardized interface to electronic editions for the convenience of the user. From a theoretical point of view, however, such an interface is as absurd as Stefan Graber’s defence of the historical-critical edition as the only legitimized type for scholarly editing (1998). An interface should rather be conceived as an aggregate of means by which the user can interact with the text, commentary, and ancillary material. What the interface is called upon to provide is very much dependent on which underlying mechanisms have been provided to assist the manipulation of a particular text or set of texts according to the nature of that text and the editor’s interpretation of it. Hence, imposing a general interface would render some perspectives difficult or impossible to realise. The electronic edition would then be reduced to a publication tool demonstrating a fixed set of options rather than a modelling tool for exploring the text and generating meaning with it.
Is it possible to define what an electronic edition is, given that it may be called upon to do so many different things? The range of requirements is large – from demonstrating ’the considered act of reproducing or altering texts’ (Tanselle 1995a: 10) to providing tools to online communities for the enhancement of knowledge sites; from the digitization of a printed edition to the provision of user-generated editions; from the publication of one text to the presentation of a textual archive. How should this question be answered? By a guide to good practice? By a survey of current theoretical positions and case studies, e.g. in the recent volume on Electronic Textual Editing (Burnard et al. 2006)? By normative guidelines like those of David Gants (1994), Susan Hockey (1996), Jerome McGann (1996a), and Peter Shillingsburg (1993, 1996b)? By several meditations on the technologies, functions, building stones, or characteristics of electronic editions (e.g. Karlsson & Malm 2004)?
My aim here is to propose a definition of an electronic textual edition and frame it in the historical context of earlier defining efforts.
2. Research Assistant
Robinson’s edition of The Wife of Bath’s Prologue on CD-ROM (1996c) is likely the first generally acknowledged published electronic edition to provide a working demonstration of the rationales and principles articulated in the mid-1990s. At the time the new HTML-driven World Wide Web and the desktop computer seemed to support the widely proclaimed democratizing character of hypertext. The feasibility of electronic editions, for which these rationales provided blueprints, was determined by the technological knowledge of textual scholars who had been looking into the advantages of computational techniques. The production model of encoding text-critical research in a platform-independent markup language and publishing the scholarly edition in a hypertextual environment seemed to be a fair trade-off between the use of manageable text-based technology and the popular(izing) hype of hypertext publishing.
Although the idea of hypertext was devised by Theodor Nelson in the mid-1960s (2003/1965), it first became a useful technology with the release of the programmable hypermedia authoring tool and information organizer HyperCard in the mid-1980s. Before then the computer had been used extensively as a ’research assistant in scholarly editing’ (Shillingsburg 1980: 31), relying on programs and tools developed for concordancing, collation, analysis of variants, stemma determination, reconstruction, building, and photo-composition. In the 1980s sophisticated and integrated packages for textual editing, such as Shillingsburg’s CASE, Robinson’s COLLATE, Wilhelm Ott’s TUSTEP (1988), Francisco Marcos Marín’s UNITE and Robert Cannon and Robert Oakman’s URICA! were developed.2 However, with HyperCard and the introduction of the personal computer came a new contingent of less technically sophisticated scholarly users. Consequently, computer applications in the field of textual editing consisted of ’sophisticated word processing’ (Potter 1985: 95) whose most proclaimed advantage was the elimination of the need to retype documents whenever a correction was inserted.3 Towards the end of the 1980s Hans Walter Gabler summarized the computer’s function in textual editing nicely: ’the computer is therefore not a research instrument, but simply a practical tool which can, it is true, markedly improve the efficiency and the quality of our work’ (1989: 55).
This was corroborated by a survey of scholarly editors conducted in the fall of 1990 by Cathy Moran Hajo, mentioned by David Chesnutt (1991). This survey showed that computers were mainly used as word processors in the preparation of critical editions and their critical apparatus. ’[E]ditors working in 1991’, Chesnutt concluded, ’continue to use computers in many of the same ways they adopted in the late 70s and early 80s’ (1991: 377). As a matter of fact, the 1970s and 1980s saw few new insights. When presenting TUSTEP as a suitable suite of software tools for critical editing in 1988, Wilhelm Ott confessed that the basic ideas and techniques presented were not new: ’Most of it could have been told (and has in part been told) ten years ago. [...] So you must be content with a more than ten-year-old concept, and with some results and experiences which we have achieved since then.’ (Ott 1988: 82). Charles Faulhaber agreed: ’To date, most computerized textual criticism has conceived of the computer primarily as a tool to facilitate the production of printed texts both by automating the procedures of textual criticism, as well as by permitting a much greater consistency in the application of editorial criteria.’ He further pointed out that the goal of the process was still the printed text itself, and he observed that ’as a byproduct, but only as a byproduct, the computer also produces an electronic version of the text’ (1991: 123).4
With respect to the application of computational techniques to textual editing up to the beginning of the 1990s, Thomas Tanselle is probably right in commenting that ’[w]hen people say that the computer makes possible certain kinds of textual research, such as locating all the appearances of particular words in a given text or group of texts, they are using the word possible inexactly to mean “practically feasible”’ (2006: 3). It is indeed true that when Miriam Shillingsburg claimed back in 1983 that her edition of Washington Irving’s The Conquest of Granada ’could not have been produced without the aid of the computer’ (1983: 654), she did not mean that it would have been impossible to produce an edition of this complexity and size without the aid of the computer, but that such a project would have been unlikely to happen, since it would have occupied a substantial part of one’s academic career. ’The misuse of possible is not a trivial matter,’ Tanselle argues, ’for it is symptomatic of the exaggerated claims one often hears about computers, and these claims do not provide a useful foundation for thinking productively about just what computers can in fact do for us’ (2006: 3). In this respect also, Tanselle’s conviction that ’[p]rocedures and routines will be different; concepts and issues will not’ (2006: 6) seems to be true. Fifteen years before Tanselle’s claim, Jean-Louis Lebrave had already observed that the computer-assisted edition did not affect the concepts and issues of editing: ’the characteristics and the structure of the printed product are not modified by the use of the computer in the page-layout phases. Desktop publishing [P.A.O., Publication Assistée par Ordinateur] therefore does not affect the problematics of editing’ (Lebrave 1994/1991: 171).
3. Publication Medium
The exaggerated claims to which Tanselle reacted were also present in hypertext theory of the late 1990s. The introduction of the computer as a publication medium, however, and thus, in a way, as a modelling tool, started the transition from computer-aided or computer-assisted editions to true electronic editions that exploit the possible beyond the feasible.5
The idea of publishing scholarly editions electronically, then, began to gain ground, thanks to the wide availability of personal computer software and hardware, economically sound solutions to the ’input bottleneck’ through affordable scanning services and optical scanners with OCR software, the improvement of digital imaging equipment and techniques, the availability and exponentially growing capacity of magnetic and optical storage devices, and the overall falling cost of data processing and storage.
The ambiguity of associating the concept of the electronic edition with the photo-composition of printed editions, rather than with the production of editions for the screen, was criticized by Roger Laufer, who proposed the concept of the édition-diffusion électronique (electronic distributed edition) as an alternative to l’édition automatique, as computer-assisted editing was called in France (Laufer 1989: 115). Disappointed by the illegibility of his own edition of Alain René Le Sage’s Diable Boiteux and inspired by translation software used on Apple Lisa and Macintosh machines, Laufer, at an international meeting in 1984,6 promoted the implementation of the technique of multi-fenêtrage or multiple windows in a program which could turn the computer itself into a publication medium (Catach 1988). This technology would overcome the economic and static limitations of the elitist construct that is the printed scholarly edition, and introduce a social alternative for the dynamic and full realization of the promise of the critical synoptic edition, which claims to offer the option of reading multiple versions simultaneously in the form of the apparatus of variants:
Recourse to computing makes it possible to obviate these inconveniences. The reader chooses his base text and his text or texts of comparison for the passages that interest him. A critical edition entirely variable on demand thus becomes feasible. (Laufer 1989: 115)
The use of multiple frames on the screen as a manipulation tool for the dynamic reading or consultation of parallelized documents (versions, variants, facsimiles, annotations...) was the original idea Laufer added to existing technology.
Around the same time, George Logan, David Barnard, and Robert Crawford described the critical edition of Thomas More’s Utopia that aimed to publish a machine-readable text not merely as a series of computer files which could be distributed and analysed in conjunction with (non-system) analytical software,7 but as ’files linked to software that can display sections of text in desired configurations, maintain interconnections between the different files, and provide other appropriate services’ (Logan et al. 1986: 319-20). The authors describe a variety of uses of the electronic edition, ranging from consulting the isolated component files of the edition, to dividing the screen into multiple windows for the simultaneous consultation of different component files, to simultaneously scrolling parallelized component files. Unlike Laufer, however, Logan et al. propose to publish the electronic edition alongside the printed edition. Because all the components of the printed edition are also present in the electronic version, the latter can be used as a replication of the printed edition, but as the authors point out, the greatest advantage lies in its ’power to facilitate coordinations which, though explicit, are virtually impossible to discover in printed books’ (Logan et al. 1986: 322). They explain: ’the windows of the electronic edition can replace not one but several place-holding fingers [...] they allow appropriate temporary rearrangements of the pages of the edition’ (Logan et al. 1986: 322).
Jean-Louis Lebrave commented that ’the principal innovation is perhaps that the user becomes an active party in the elaboration of the materials he consults, and freely controls the paths he will take through the documents’ (1988: 127). He added that one of the implications of this technology was that the exclusive choice between a critical edition and a facsimile edition disappeared, because ’one can simultaneously consult a facsimile of the draft and any given form of edition or interpretation of that draft’ (Lebrave 1988: 127). As an alternative to the conjunctive use of analytical software with the machine-readable text in Logan et al. (1986), Roger Laufer predicted the integration of several analytical software tools such as collation and concordance software in this kind of electronic edition (Laufer 1989: 124). However, as he pointed out, the specific software that would facilitate all this still had to be written.
Like Laufer, Lebrave saw recent computer technology as an alternative to the traditional codex. He described the electronic edition as ’a multi-media data base, giving access to facsimiles, to various transcriptions, to interpretation tools, like dictionaries or programs for automatic comparison of textual fragments’ (Lebrave 1987: 142). The advantage of such a ’pluralistic system’, according to Lebrave, is that it ’would allow any reader to construct his own reading according to the hypothesis he wants to build up’ (1987: 142).
Although neither Logan et al. nor Laufer and Lebrave ever use the terms hypertext or hypermedia in their early presentations and writings – Logan et al. speak of ’interconnections between the different files’ (1986: 319) – their descriptions of the mechanics by which users of their editions could walk ’through the genetic data without being a prisoner of any of the forms of representation used’ (Lebrave 1988: 136) undoubtedly describe the functionality of hypertext.
The explicit link between Gérard Genette’s (1982) concept of hypertext as a form of intertextuality8 – already in use in genetic studies (Marantz 1988) – and Nelson’s concept of hypertext as non-sequential writing (2003/1965)9 was made by Lebrave in a series of articles on hypertext and avant-texte in the early 1990s. Here, Lebrave highlighted the advantages of hypertext for the organisation and visualization of the dossier génétique and reported on some early experiments with the hypermedia authoring tool HyperCard.10
Although the concept of hypertext was considered to provide the ideal metaphor and technology for reconstituting the dynamics of the writing process (through the visualization of a number of documents and their regrouping according to several principles such as resemblance, difference, teleology, and chronology), the resulting edition was a closed hypertextual universe and remained as static as its printed counterpart. In other words, it only offered dynamism within its own pre-set boundaries and according to the enabled features of the hypertext application.
4. Analytical Tool
An interesting early suggestion of a dynamic system was provided by Todd Bender in 1976 and was echoed by Donald Ross Jr. in 1981. Both scholars conceived of formal textual editing as a computer project that ’should be set up and preserved in such a way that future scholars can return to it and use it in its electronic form’ (Bender 1976b: 194). After analysing the Platonic orientation of modern textual criticism, as advocated at the time by the CEAA (Center for Editions of American Authors), and the incompatibility of this method with modern literary texts where ’the printed page is inherently incapable of representing the work accurately or fully’ (Bender 1976b: 194), Bender introduced computer technology as offering the possibility of retaining ’a version which more closely approximates the essence of a work without disregarding all the mutations which exist in the manuscript and printed representations’ (1976b: 194). He proposed to recognize the electronic text as the primary form of the work and the ’“real” repository of information’ from which any printed expression and any form of textual analysis could be generated. With the publication of the concordance to Conrad’s Heart of Darkness in 1973, Bender (1973) had demonstrated the generative power of this approach. Instead of basing the concordance on a printed text, which Bender argued is but ’one among many possible provisional, incomplete, and arbitrary formats of information’ (1976b: 194-5), he based the concordance on the basic input data, which included transcriptions of all significant printed and manuscript versions of the text and their collations. This genetic and transmissional information turns the repository into a three-dimensional data pool that, although it cannot produce definitive editions, ’can easily search out for us and note every case in which a literal or punctuation variant occurs in this three dimensional matrix’ (Bender 1976a: 333-4). As any printed edition is a two-dimensional and ’simplified expression of a matrix of complex variables’ (Bender 1976a: 336), Bender envisioned that the role of the textual editor might be the construction of the multidimensional model of variables which could be consulted from any ’scholar’s desk console anywhere in the world’ connected ’through radio or telephone circuitry’ to ’one central data bank’ (Bender 1976b: 195). The reader, Bender noted, will come to the electronic repository and ask for ’a provisional expression shaped to his needs’ (1976a: 337). In order to facilitate a dynamic consultation and analysis of the data bank, Bender developed a system by which relationships among words or signs are represented not by positional notations, but by arithmetic notations that are semantic-neutral representations of the language. This system would allow the representation of interrelations in a set of complex variable information in which a word is seen as a constellation of significations (Bender 1976b: 196).
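In present-day terms, Bender’s proposal amounts to a multi-witness repository from which reading texts and analyses such as concordances are generated on demand. The following Python sketch illustrates only that generative logic; it is not a reconstruction of Bender’s arithmetic notation system, and the witness identifiers and toy data are invented for the example.

```python
# A toy model of Bender's repository: the electronic data pool, not any
# printed text, is the primary form of the work. Witnesses are keyed by
# invented identifiers; concordances and variant lists are derived views.
from collections import defaultdict

repository = {
    "MS-A":      "mistah kurtz he dead".split(),
    "1902-text": "mistah kurtz he is dead".split(),
}

def concordance(witnesses):
    """Index every word across every witness, so the analysis reflects
    the whole transmission rather than one arbitrary printed format."""
    index = defaultdict(list)
    for wit, words in witnesses.items():
        for pos, word in enumerate(words):
            index[word].append((wit, pos))
    return index

def variant_matrix(witnesses, base="MS-A"):
    """A flat stand-in for Bender's 'three dimensional matrix': note each
    position at which a witness departs from the chosen base text."""
    base_words = witnesses[base]
    diffs = []
    for wit, words in witnesses.items():
        if wit == base:
            continue
        for pos in range(max(len(base_words), len(words))):
            b = base_words[pos] if pos < len(base_words) else None
            w = words[pos] if pos < len(words) else None
            if b != w:
                diffs.append((pos, b, wit, w))
    return diffs

print(concordance(repository)["dead"])
print(variant_matrix(repository))
```

The naive position-by-position comparison stands in for what would, in practice, require proper collation and alignment; the point is only that both the concordance and the variant record are computed from the repository rather than stored as fixed, printed expressions.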
Some years later, Donald Ross Jr. proposed to turn Bender’s model inside out. Instead of a textual database containing the transcriptions of the witnesses of the transmissional and genetic history of the text and their collations, Ross suggested storing the copy-text as data together with collation information, and expressing the traditional footnotes or apparatus of a printed edition as algorithms or a series of programs that manipulate the copy-text into a representation of any selected stage of the textual history (1981: 159-61). This means that every single stage of the textual tradition or genesis, including the critical text established by the editor, would then be assumed data which could be called upon and produced by these algorithms. Editors would be responsible for the validity of the commands invoking the algorithms that produce a stage of the text, and they would also have to document the procedures so that users of the edition could access any perspective on the textual history. The use of such a system would be threefold in Ross’ view. First, this data organisation could produce traditional printed editions presenting any stage of the text. Second, the assumed data could easily be analysed, for instance to determine stylistic patterns by the generation and collation of concordances of various stages of the textual history. Just as in Bender’s database proposal, statistical analyses of other stylistic features would also be options, as would an automatic analysis of genetic features. Third, the database could function as a ’front-end’ for a document retrieval system that not only displayed graphically the assumed data assembled by specific user-driven commands, for instance to represent the author’s working process, but that also provided access to all information stored in this database. ’Assuming this were possible,’ Ross concluded, ’then the kind of information in the data base could be displayed to the scholar working at a terminal, where passages from all sources could be called up’ (Ross 1981: 161).
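Ross’s inversion can be sketched just as loosely: the copy-text is the only stored text, and each stage of the textual history exists only as the output of recorded operations. Ross specified no such syntax; the operation vocabulary, stage names, and sample text below are invented for illustration.

```python
# A sketch of Ross's inside-out model: the copy-text is stored as data,
# and the apparatus is executable. Each stage of the tradition is
# 'assumed data', produced on demand by replaying documented operations.

copy_text = ["so", "much", "depends", "upon", "a", "red", "wheel", "barrow"]

apparatus = {
    # operations are applied in the order in which they are documented
    "draft":    [("replace", 5, "blue")],
    "first-ed": [("delete", 1), ("insert", 4, "single")],
}

def realize(stage):
    """Produce the text of one stage by applying its recorded operations
    to the copy-text; the editor vouches for the operations, the program
    for their mechanical execution."""
    words = list(copy_text)
    for op, *args in apparatus.get(stage, []):
        if op == "replace":
            words[args[0]] = args[1]
        elif op == "delete":
            words.pop(args[0])
        elif op == "insert":
            words.insert(args[0], args[1])
    return " ".join(words)

print(realize("draft"))      # the draft stage is generated, not stored
print(realize("first-ed"))
```

The contrast with the previous sketch is the essential point: Bender stores every witness and derives analyses; Ross stores one text and derives every witness.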
5. Hyperedition/Database
Almost two decades after Bender’s proposal, Marilyn Deegan and Peter Robinson (1994/1990) came to similar conclusions: traditional scholarly editions inevitably present a selection of the information used and produced in preparing such an edition, especially when the computer is used for the transcription, preparation, encoding, and collation of texts, which produce much intermediate data that is not represented in the final result. The selection and presentation of data is traditionally left to the judgement and experience of scholarly editors, who endorse scholarly editions with their authority. In Deegan and Robinson’s proposal, however, all data on which editorial decisions are based could be presented in what they called ’an electronic hypertext edition’ which would not substitute for but supplement the traditional edition. As Deegan and Robinson argued, ’viewed in the light of certain reasonable reading expectations, the legitimate exercise of editorial selectivity imposes arbitrary and subjective limits on the interpretation of texts, and what is a justifiable and intelligent limitation from one scholarly angle may appear, from another, an unnecessary restriction of possibilities’ (Deegan and Robinson 1994/1990: 35-36). Less radical than Bender, who emphasized that the real information was located in the electronic memory, Deegan and Robinson preferred the on-screen representation of information with the aid of hypertext as a referential management and navigating system. By proposing to present not only the results of critical research, such as the edited text and several commentary sections and apparatuses, but also the research materials, such as encoded transcriptions and digital facsimiles of the documentary witnesses, they introduced the edition/archive issue which was commented on by Peter Shillingsburg (1996a: 161-71). However, they clearly expressed the need for the electronic hypertext edition to preserve the features of the traditional scholarly edition alongside the presentation of these data, as did Logan et al. (1986) and the 1997 CSE Guidelines for Electronic Scholarly Editions, which claimed that the ’content of an electronic edition differs little from that of a print edition.’
In an instructive and elaborate essay on ’Textual Criticism in the 21st Century’, Charles Faulhaber agrees with Deegan and Robinson on the content of a hyperedition. He describes the concept and function of what he calls ’the electronic critical edition’ as follows:
In an electronic critical edition the critical text will be the locus of a set of data connected to it by various kinds of links, some established specifically by the editor, others established automatically by software tools. The critical text will not exist as a self-sufficient isolate but rather as part of a rich environment which will enable users to study the text’s internal structure – graphemic, phonological, morphological, lexical, semantic, syntactic, discursive – as well as its relationship to its genre, to its linguistic and literary tradition, to the interpretive tradition which surrounds it, to its historical moment, to its society, and, eventually, to significant aspects of its culture, understood in anthropological as well as artistic terms. (Faulhaber 1991: 128)
Deegan and Robinson (1994/1990) draw attention to two issues that also feature in the later propositions and normative guidelines for electronic editions already mentioned in the introduction to this essay. The first was the requirement for a platform-independent and non-proprietary markup language that could deal with the linguistic and the bibliographic text of a work and that could guarantee maximal accessibility, longevity, and intellectual integrity in the encoding of texts and textual variation. This was found in the work of the Text Encoding Initiative, which had issued its first Guidelines for the Encoding and Interchange of Machine-Readable Texts in 1990. The second issue was the need for a hypertextual navigation tool that could guide the user through an enormous amount of documentation and proof of textual variation, and that would overcome the shift from the singularity of the edited text to the multiplicity of the archive. Already in 1993 (although not published until 1996), John Lavagnino boasted that textual scholars were ’the avant-garde when it comes to the use of hypertext’ (1996: 109).11
This vision is typical of the overall tendency in the 1980s and 1990s to ignore Donald Ross’ generative database proposal, which took an algorithmic approach to produce assumed data, in favour of Todd Bender’s archive suggestion, which took a presentational approach towards articulated data. However, Manfred Thaller, in line with his perspective on humanities computing as a humanistic computer science, proposed to consider electronic editions as computer systems that are able ’to support historical research, as opposed to administering, in a convenient way, results of historical research’ (1996: 254), which is what hypertext editions do. In Thaller’s vision, the underlying structure of an electronic edition is best organised as a database system that browses texts as extended string data types. Using this data type as a replacement for the concept of a simple string in programming application systems enables the acceptance by the database system of external information ’which is browsed into the internal extended string representation, processed in that form and re-converted into some kind of external representation before being displayed on an appropriate medium’ (Thaller 1996: 252). Thaller is backed up by Dino Buzzetti (1996), who favours a database representation of the entire textual tradition that contains processable representations of text. But he also warned that ’the dynamic form of a database representation – a form of representation that affords a more faithful reproduction of the varied and diversified expressions of textual fluidity – should not be mistaken for the accomplished form of its edition’ (Buzzetti 1996: 255). Like Ross, Buzzetti proposed to document the sequential textual tradition in a unique and consistent non-linear representation in database form, for which he used Thaller’s extended string concept:
A database representation can thus act as a consistent and unifying model of all different sequential representations of a text, a congruent structure onto which they can all be mapped simultaneously and consistently, and from which they can all be separately derived and individually displayed. (Buzzetti 1996: 255)
This entails a double economy: encoding a single unifying representation while processing a multiplicity of structurally different sequential representations.
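A minimal sketch can make Buzzetti’s point concrete. Assume a word-aligned tradition (the alignment and the readings below are invented): one non-linear structure then acts as the unifying model from which each sequential witness text is separately derived. This illustrates the principle only, not Thaller’s actual extended string implementation.

```python
# One non-linear structure holds the whole (toy) tradition: at each
# aligned position there is one reading per witness, or None where a
# witness omits the word. Each sequential text is derived, not stored.

tradition = [
    {"A": "Whan", "B": "Whan", "C": "When"},
    {"A": "that", "B": "that", "C": None},
    {"A": "Aprill", "B": "Aprille", "C": "April"},
]

def derive(witness):
    """Map the unifying non-linear representation onto the sequential
    representation of a single witness."""
    return " ".join(slot[witness] for slot in tradition
                    if slot[witness] is not None)

for wit in ("A", "B", "C"):
    print(wit, "->", derive(wit))
```

The economy lies in maintaining one structure while deriving, displaying, and processing as many linear texts as there are witnesses.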
However, probably because of its embedding in computer science rather than in the humanities, this database model was not generally considered by humanities scholars who came to think about electronic editions. The hypertext edition, on the other hand, with its focus on the edition as an object and its realization of associativity and intertextuality, was championed by many a project. As Lou Burnard (1992: 17) explained: ’Where true database systems require a formalization of the information content of text, hypertext systems return us the view of information as an emergent property, resulting from a connection between one piece of discourse and another.’
6. Edition/Archive
The database concept, however, did turn up in Deegan and Robinson’s description of hypertext as ’a document which is essentially a database with active cross-references allowing non-sequential reading and writing’ (1994/1990: 36). This inspired Peter Shillingsburg (1993, 1996b: 31) to conceive of the electronic edition mainly as a database attached to a network. Besides the concern that the design of the electronic edition and the storage capacities of the archive must anticipate the desires of the targeted user community, Shillingsburg mainly addressed issues of usability, transportability, security and order, integrity, and expandability. The networked database model was shared by Susan Hockey (1996), who put more emphasis on the need for encoding strategies that could also handle documentation of meta-information about the text and the images. ’Ideally’, Hockey argues, ’the master copy would consist of transcriptions of the text and digital images of the source material’ (1996: 13-14). Deegan and Robinson likewise proposed that the electronic edition should contain encoded transcripts of all the manuscripts and ’possibly digital images of some or all manuscripts’ (1994/1990: 36). Shillingsburg (1996b) turned this desirability of a full accurate transcription and a full digital image of each source edition into a condition of the electronic edition, echoing Tanselle, who stated that ’[d]igitized images of the original manuscripts and printed pages should always be provided along with the more manipulable electronic texts’ (1995b: 591-2) – something which Lebrave (1987) had already requested. Tanselle also required the inclusion of ’critically reconstructed texts [...] within the collection of texts available in a hypertext edition’ (1995b: 592),12 a suggestion picked up by Shillingsburg in his defence of ’both types of editing’ (1996a: 95), that is, historical and critical editing. In the context of his envisioned knowledge sites, Shillingsburg defends the logic behind the inclusion of a critical text in a documentary archive and asks: ’In what sense is it a gain to have in an archive a historical text that was poorly produced and represents the hasty and not-so-careful editorial work of a commercial publisher rather than the thoughtful, careful work of a scholarly editor – who just happens to pursue editorial goals with which you don’t agree?’ (2006b: 157). Interestingly, Shillingsburg designed his knowledge sites as documentary archives in which scholarly editions have their place. Curiously, he seemed to have forgotten Tanselle’s defence of the inclusion of a critical text in a hypertext edition or ’archive’ (Tanselle 1995b: 591) when he reported: ’I have yet to hear anyone suggest that the electronic scholarly archive should have a critical edition of each sort added to the collection of historical texts’ (Shillingsburg 2006b: 157).
In his influential essay ’The Rationale of HyperText’, Jerome McGann pointed out that the Rossetti Hypermedia Archive (McGann 2005b) is ’an archive rather than an edition’, claiming that its indefinitely expandable ’webwork of relations’ escapes the ’bibliographical limitation’ of the edition which ’closes its covers on itself’ (1996a: 27). McGann’s concept of the archive is directly linked to the function of hyperediting, the result of which is ’theoretically open to alterations of its contents and its organizational elements at all points and at any time’ (McGann 1996a: 29). Furthermore, ’[u]nlike a traditional edition, a hypertext is not organized to focus attention on one particular text or set of texts. It is ordered to disperse attention as broadly as possible’ (ibid.). In this discourse, McGann clearly used the word ’edition’ when he referred to the traditional codex edition, and ’archive’ to emphasize the hypertextual nature of the electronic edition. That this is not a useful distinction is proved by his own use, in the same essay, of the terms ’Hypereditions’ and ’Hypermedia editions’ when he refers to the results of ’hyperediting’ (and not ’hyperarchiving’). The somewhat awkward distinction between edition and archive is not used consistently by McGann himself and it seems to reflect his aversion to the mechanics of the traditional critical edition shown in the following quotation:
Editing in codex forms generates an archive of books and related materials. This archive then develops its own meta-structures – indexing and other study mechanisms – to facilitate navigation and analysis of the archive. Because the entire system develops through the codex form, however, duplicate, near-duplicate, or differential archives appear in different places. The crucial problem here is simple: the logical structures of the ’critical edition’ function at the same level as the material being analyzed. As a result, the full power of the logical structures is checked and constrained by being compelled to operate in a bookish format. If the coming of the book vastly increased the spread of knowledge and information, history has slowly revealed the formal limits of all hard copy’s informational and critical powers. The archives are sinking in a white sea of paper. (McGann 1996a: 14)
As McGann later reflected on this essay, ’[t]he immediate focus of the argument was the debate among editorial theorists about the possibility of creating, in scholarly form, the ’social text’ – that is, a critical edition that would not privilege the authority of one particular text or document’ (2001: 25). In order to achieve this, McGann (1996b) sought to ’integrate for the first time the procedures of documentary and critical editing’. The Rossetti Hypermedia Archive, then, was created by McGann to demonstrate the practical feasibility of his social theory of the text that was at the heart of A Critique of Modern Textual Criticism (1983), and to promote the view that digital forms were open and interactive as opposed to the static and linear qualities of the traditional codex form (2001: 25).
Robinson (1996b) interestingly ranked McGann’s Rossetti Hypermedia Archive together with Richard Finneran’s Hypermedia Yeats project (Finneran & Bornstein 1994) in a ’more is better’ kind of electronic edition which he opposed to the ’less is better’ group in which he situated his own edition of the Wife of Bath’s Prologue (1996c) and Anne McDermott’s edition of Johnson’s Dictionary of the English Language (1996). The distinction between these two kinds of editions is made on the basis of their respective intent to include all the relevant multimedia materials or only a selection. According to Robinson, editors of editions in the ’less is better’ group aim ’to identify a particular textual domain and a particular audience, and to present that text for that audience as clearly, as richly, and as accurately, as is possible with the resources available’ (1996b). In the case of The Wife of Bath’s Prologue (1996c) the textual history under consideration is limited to the pre-1500 witnesses only, and the Johnson edition only presents the first and the fourth edition of the Dictionary. Further, Robinson pointed out that these editions are also ’rigorously exclusive: there is no discussion of the importance of Johnson’s lexicographic work on the CD-ROM, and the Wife of Bath’s Prologue CD-ROM contains no glossary and no study of the Wife of Bath herself’ (1996b). Together with the explicit editorial presence in the text, this exclusivity is one of Robinson’s arguments that an electronic edition should not be an archive, resource, or ’an accumulation of materials without any editorial “interpretation”’ (1996a: 110). In a later reflection on the editions of the Wife of Bath and the General Prologue, however, he called them ’repositories of information, from which skilled scholars might quarry what they need’ (Robinson 2003b). Robinson (2007a: 8) summarizes: ’for a digital edition to be all it can and should be, then it will let the editors include all that should be included, and say all that needs to be said.’
The distinction between editions and archives, however, is not made by Tanselle, who observed that ’[u]p to now, scholarly projects for publishing electronic texts have tended to take the form of archives’ (2006: 5). Electronic editions and electronic archives, in Tanselle’s view, are therefore synonymous.
At the same time, it is true that the meaning of the word ’archive’ in connection with electronic textual editing has changed over the course of time. Originally denoting a mere repository of digital surrogates of material artefacts and processed data, the concept has come to include scholarly and critical material such as edited texts, annotations, scholarly essays and the like, alongside the digital resources. This transition has happened organically.
7. Critical Edition
The distinctions discussed so far, such as print versus electronic, archive versus edition, database versus hypertext, or ’more is better’ versus ’less is better’, have been useful in the debates in which they feature, but they are problematic with regard to defining the electronic edition. Just as pointing to a tree does not define one, comparing a tree with something which is apparently not a tree does not work either. The discussion above illustrates that a definition as crude and basic as the one John Lavagnino suggested, writing in 1993, as the core of all hypertext editions was theoretically problematic even at the time of writing: ’a system that would store both electronic texts and images of all the versions of the works in question, and offer the ability to display parallel texts of any two versions, as either images or electronic texts’ (Lavagnino 1996).
Although it might have been true that this is ’[w]hat a number of scholars have imagined a hypertext edition would be’ (Lavagnino 1996), this definition clearly describes a very specific type of electronic edition and a very specific type of hypertext edition, which requires a specific archival basis and a specific display. Toby Burrows (1997), in his proposal for building a typology of electronic editions, did not include any requirement with regard to the contents or the display of the edition, but instead looked at five more neutral characteristics of electronic texts, namely the markup scheme employed; the extent to which the edition is dependent on specific software; the method of distribution or publication; the overall structure or architecture of the edition; and the type of edition involved. Although this checklist could produce informative metadata on the edition as a bibliographical object, which should evidently be documented as part of the edition, theorists of the electronic edition have focused on describing more functional requirements. John Lavagnino (1996) argued that the hypertext edition should facilitate four tasks: ’selecting versions to look at; comparing versions; constructing new and possibly more representative versions of the text on the basis of the information available; and integrating all this study with other scholarship and criticism.’13 A fifth possible task, ’consulting a critical text’, is not listed here. By excluding the explicit requirement of the inclusion of a critical text, Lavagnino defended an electronic edition that is different from Faulhaber’s ’electronic critical edition’ (1991), which is centred on a critical text. The reason for this can be found in McGann’s definition of critical editing: in an interesting discussion – at least from a historical and theoretical point of view – on the ESE (electronic scholarly editing) mailing list in 1994 (ESE 1994), about what critical editing is and what the nature of the electronic archive is, McGann made the following claim:
critical editing is a mechanism whereby, through a programmatic study of textual variance in extant documents, one hypothetically reconstructs lost or absent documents (which may themselves be hypothetical). period. that IS what it is and that’s all it is. [...] now although this editing tool is obviously designed for use in dealing with ancient texts, it was adapted by scholars to certain ’modern’ circumstances where the documentary record was once again relatively broken and problematic. it was then re-adapted (by bowers) to situations where the documentary record was hardly damaged at all, i.e., in cases where one did not need the special tool of ’critical editing’ to clear the texts of errors. simple collations would take care of the errors. the tool was used rather to construct ’eclectic editions’ that represented hypothetical forms of some hypothesized ’authorial intention’ (original or final, usually). (ESE 1994)
McGann continues: ’with the coming of electronic text, however, the use of ’critical editions’ in the proper sense, for modern texts, changes.’ Therefore, the real question in connection with electronic archives and critical editions, McGann argued, is: ’would a critical text be useful?’ In other words, ’are there any cases where such an edition would be called for, where it has any point; what would make one want to produce such a text?’ According to McGann, the documentary record of texts, as presented in digital archives, seldom demands such a text. Nevertheless he argues in favour of the inclusion of critical texts in digital archives such as his Rossetti Archive, but, he adds, ’I won’t make such a reconstruction myself.’14
Robinson (2002) accepts McGann’s reservation about the inclusion of a critical text in an electronic edition or archive, and mentions the presentation of an edited text as a mere possibility of a ’critical digital edition’. With the ’critical digital edition’ Robinson proposes to extend the functions of a traditional printed critical edition in the traditional print library to the digital realm. Its main function is thus, according to Robinson, to ’think critically, and to help others think critically’ (2002: 59) or, in other words, ’to help editors edit, [...] to help readers read’ (Blake & Robinson 2000). Robinson’s proposal reintroduces critical editing into the model of the hypertext edition as an archive whose main function is the creation of accessibility to certified materials. In an earlier meditation on the electronic edition, Robinson (2009; written 1997-2002) had defined the electronic edition in general terms as ’an edition conceived and executed exclusively for electronic publication, and impossible in any other form’. Here, he discussed six requirements which supplement this definition and which he sees as ’co-ordinates by which critical editions might be located’ (Robinson 2002: 51). An electronic critical edition, then:
- is anchored in a historical analysis of the material
- presents hypotheses about creation and change
- must supply a record and classification of difference over time, in many dimensions and in appropriate detail
- may present an edited text
- must allow space and tools for readers to develop their own hypotheses and ways of reading
- must offer all this in a manner which enriches reading
Scholarly editions, as Ray Siemens (1996: 43) has reminded us, have a certain dynamic: ’The contents of a scholarly edition, to some degree, show the influence of previous scholarly work and, because scholars will rely on and refer to it, its contents also influence future study.’ The quality and relevance of the scholarly edition depend on its capacity to document the no longer and to facilitate the not yet. Robinson’s co-ordinates are all situated in this continuum, with the first three leaning towards the documentation of the past and the last towards the empowerment of the reader and user, who are invited to conduct future study, part of which could be the creation of a critical edition. A critical digital edition is thus minimally a well-documented digital archive that overcomes the dangers of what Már Jonsson has called ’Utgeverisk impotens’ or ’editorial impotence’ (cited in Ore 2004: 35).
If McGann and Robinson are right in their assumption that the documentary records of texts in such archives or editions seldom ask for the inclusion of a critical text but are incubators for future scholarship, the digital archive should be an icon representative of the tangible and original documentary archive – Flanders (2009; written 1997-2002) called representation the textual condition of the edition/archive. The idea goes, then, that this representational archive of digital images, encoded transcriptions, records of difference over time, and contextual information provides the building stones from which different kinds of editions – which Robinson (1994: 93) calls ’nothing more than compilations of materials’ and McGann (1994: 104) considers ’specialized organizations of materials’ contained in the archive – are generated for different audiences. Mats Dahlström, however, has called this assumption ’overidealistic’ (2001: 69). When we take into account Julia Flanders’ observation that in an electronic edition ’the representation of documentary evidence is attached, conceptually, to the mode of knowing that the edition is offering’, which substantiates in ’different theories about what counts as textual knowledge’ and different internal economies of ’evidence, of substantiation, of utility’ (1998: 306), we can begin to understand Dahlström’s reservation. His reticence has nothing to do with a fundamental suspicion towards the reliability of reproductions15 as voiced by Tanselle (1989), nor with a distrust of the accuracy of the transcriptions – transcriptions of which text, one could ask, for no text is self-identical, as McGann has argued in Radiant Textuality: Literature after the World Wide Web (2001) – but with his analysis of the nature of editions, as he explained in his essay ’How Reproductive is a Scholarly Edition?’ (Dahlström 2004). Dahlström’s main argument is that the claim of reproductivity as a result of the scholarly edition’s supposedly scientific nature ignores the limitations of the genre. The nature of a scholarly edition, Dahlström contends, is determined by its historical, medial, social, and rhetorical dimensions:
To sum up, the SE [scholarly edition] is a subjective, rhetorical device. It is moreover both a result of and a comment on contemporary values, discussions and interests. It is situated in time, in space, in culture and in particular media ecologies (of both departure and target media). To all bibliographical genres, using derivative target documents as representations of departure documents, these are factors imposing constraints on their iconic force. The situatedness limits the representational and moreover the remediating force of bibliographic tools, including the SE. There are no absolutes here. The SE obviously has representational and reproductive force, the very abundance and undisputable value of SEs throughout history testify to that truism. The interesting question is what factors are at work to limit or to enhance this force. Another important matter is what force and purpose the remediated material itself might have, that is, to what degree the SE is valuable as laboratory, as working material for new scholarly editorial endeavours. I am not talking about the value of SEs for historians, for literary critics, for studies in the history of ideas, etc., but for the making of next SEs based on textual criticism. (Dahlström 2004: 27)
And he continues:
If such archives are to be used as laboratories for generating new scholarly presentational documents such as critical editions, i.e. turning the target documents into departure documents, one would have to stay alert to the derivative status of the archived material in the first place. An SE based primarily (if not solely) on the derivative documents of such a digital archive will always to some extent depend on the inevitable choices made by the persons building the archive, on the historical, socio-cultural, cognitive, and media particulars and on the pragmatic purposes and theoretic values defining and framing the final derivative documents in the archive. (Dahlström 2004: 28)
In the same essay, Dahlström mentions the ’mimetic fallacy’ and the ’complete encoding fallacy’16 as implicit and problematic assumptions of the electronic scholarly edition that aims to be reproductive. He also reminds us that the scholarly activities of transcribing and text encoding are subjective moments of selection. Since they are governed by one’s theory of the text, which, on the pragmatic level, is translated to ’thought, method, and decision’ (Robinson 2002: 55), and since they are straining after rhetorical and political effects, we could call them editorially intentionalistic.17 Tanselle (1995a: 14), however, has warned that the resulting texts ’may be inappropriate for certain purposes’ and Dahlström argues that striving for a universal aim of the digital archive ’is doomed to failure because it is rooted in an assumption that both textual material and scholarly editing are context-free phenomena’ (Dahlström 2004: 28).
Espen Ore advised that a basic archive – ’grunnarkivet’ (1999: 143) – be a self-sufficient digital archive whose creation ’is done as a goal in itself, not as a step in the creation of an edition’ (2004: 42). Therefore, he requires that all documents have explicit source descriptions and that their creation as digital artefacts is documented.18 Further, Ore stipulates that the documents are sufficiently described in terms of file types, resolution, character set information, and encoding schemes. This information must articulate the authority of the archive and must guarantee the preservation of the archive as a bibliographic artefact. Marilyn Deegan (2006: 366) has made a similar claim, suggesting that a thorough documentation of data, metadata, links, programs, and interfaces may enhance the chances that the digital edition is preserved as a functional scholarly environment.
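The kind of self-documentation Ore and Deegan have in mind can be pictured as a manifest entry accompanying every document in the basic archive. The sketch below is only an illustration of the categories named above (source description, creation record, file type, resolution, character set, encoding scheme); all field names and values are invented, and neither author prescribes a concrete format.

```python
# A hedged sketch of a per-document manifest for a 'basic archive':
# every digital artefact carries an explicit source description and a
# record of its own creation. All names and values here are invented.
manifest_entry = {
    "identifier": "ms-a-fol-001r",
    "source_description": "Manuscript A, folio 1 recto (invented example)",
    "creation": {
        "date": "2008-03-15",
        "operator": "imaging lab",
        "method": "flatbed scan of the original document",
    },
    "file_type": "TIFF",
    "resolution_dpi": 600,
    "character_set": "UTF-8",   # of the companion transcription file
    "encoding_scheme": "TEI",   # markup scheme of that transcription
}
```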
Interestingly, Ore (2004: 42) adds that digital archives may move on from being basic archives ’if they offer editing tools and make it possible for users to mark up texts’, that is, allow users to apply their own theory of the text to the textual model. However, contrary to Shillingsburg (2006b), Ore does not consider this a formal requirement of the digital archive: ’The archive should be a possible data source for zero or more editions’ (Ore 2004: 42).
Elsewhere I have argued that an electronic (scholarly) edition should be processed from a platform-independent and non-proprietary basis or digital archive of encoded transcriptions, high-resolution image files, metadata, etc., which can be stored for archival purposes and can be used as a reproductive basis for more editions. But I have also emphasized that this archive differs from and precedes the generation of the edition proper, which is the immediate result of textual scholarship; the edition proper is intended for a specific audience, is designed according to project-specific purposes, represents at least one version of the text or the work, and its creation and editorial status are explicitly articulated and documented (Vanhoutte 2006: 163).
8. Ergodic Editions
In their fullest realizations, Robinson’s model of co-operative and distributed editions and Shillingsburg’s knowledge sites aim to incorporate both Ore’s self-sufficient archive and Robinson’s and McGann’s models of the reproductive edition against the background of the history of electronic textual editing as recounted in this essay. The eventual product would no doubt have to qualify as an ergodic19 text where the reader behaves as ’a user in a transcending, cocreative, author mode’ (Aarseth 1997: 183) and from which electronic editions, as I have defined them, could be generated. Paraphrasing Aarseth (1997: 1), in an ergodic edition or text, nontrivial effort is required to allow the reader to traverse the text.20
Espen Aarseth developed his textonomy mainly for literature, but I argue here that his typological model is applicable to electronic editions and can help in typifying the different genres of editions as they exist today and as they are envisioned in the writings of McGann, Robinson, and Shillingsburg, and discussed by Ore and Dahlström. Aarseth’s textonomy is especially helpful in describing these different genres because it uses a vocabulary that is not common to the humanities. Applying this textonomy to the province of electronic textual editing supplies the field with a better model for the defining debate than the dichotomous positions described so far between print and electronic editions, archives and editions, hypertext and dynamic editions, or critical and non-critical editions. It also explicitly incorporates the user in the descriptions, which is relevant especially for those editions which, as target texts, present themselves explicitly as departure texts for future scholarship. The active interactivity and the fluidity of the edition as a co-operative and distributed model, then, is substantiated in the tension between the textons or ’strings as they exist in the text’ (Aarseth 1997: 62) and scriptons or ’strings as they appear to readers’ (ibid.). These two concepts are central to Aarseth’s model.21 Since textual editions are constructed with an implied or ideal reader or user in mind, often an avatar of the editor, the traversal mode of the edition as text should be of concern to its creators.22 In his typology, Aarseth identifies seven variables ’which allow us to describe any text according to their mode of traversal’ (Aarseth 1997: 62). Adapting the model of the traversal mode to the electronic textual edition results in a schematic overview of seven variables, which expand as follows:23
471. Dynamics: In a static edition the scriptons are constant; in a dynamic edition the contents of scriptons may change while the number of textons remains fixed (intratextonic dynamics), or the number (and content) of textons may vary as well (textonic dynamics). In a knowledge site where users can add new markup, new variant texts, new explanatory notes and commentaries, and have their personal note space, the number of textons is not known. An edition produced on the basis of the archive provided can have a fixed or a variable number of textons, depending on the editorial model and technology implemented. The editorial model, introduced by Lancashire (1989) and discussed by Siemens (2001, 2005), with integrated advanced textual analysis software constitutes a dynamic edition.
482. Determinability: This variable concerns the stability of the traversal function. An edition is determinate if, for every scripton, its adjacent scriptons are always the same; if not, the edition is indeterminate. For a scholarly product, stability, and hence determinacy, appears to be a conditio sine qua non. However, one could envision an edition, probably based on game models, which is self-reflective and generates simulated forms of meaning resulting in indeterminate text, as Jerome McGann and Johanna Drucker’s Ivanhoe Game attempts to do for literary criticism.24
493. Transiency: If the mere passing of the user’s time causes scriptons to appear, the edition is transient; if not, it is intransient. Most, if not all, editions are intransient and do nothing unless activated by the user. However, one could conceive of a play mode which showcases the contents of the edition to the user as a recorded movie.
504. Perspective: If the edition requires the user to play a strategic role, then the edition’s perspective is personal; if not, it is impersonal. Editions which present the user with no other possibility of action but reading are impersonal. In a reproductive edition, the user is (in part) responsible for what happens with/to the texts.
515. Access: In an edition or archive with random access, the scriptons of the text are readily available to the user at all times; if this is not the case, access is controlled. Random access is typically a quality of the printed edition, but electronic editions which have all data pre-processed qualify as well. Access is closely related to the perspective of the edition: personal editions will generally offer controlled access.
526. Linking: An edition may be organized by explicit links for the user to follow, by conditional links that can only be followed if certain conditions are met, or by none of these (no links).
537. User functions: Besides the interpretative function of the user, present in all editions, some editions may be described in terms of additional user functions. If all the decisions of a user about an edition concern its meaning, then only this one user function is involved, here called interpretation. The user function is explorative when users must make choices about alternative paths and actions, deciding which path to take; it is configurative when scriptons are in part chosen or created by the user, who rearranges textons or changes variables; and it is textonic when textons or traversal functions can be (permanently) added to the edition, the user extending or changing the text with their own writing or programming (a minimal sketch of the texton/scripton mechanics behind these functions follows this list).
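The texton/scripton vocabulary that runs through these seven variables can be made concrete in a few lines of code. In the following minimal sketch, the witnesses, readings, and function names are all invented for illustration; no existing edition software is implied. It stores two textons and defines a traversal function (see note 22) that reveals a scripton according to the view the user selects.

    # Two textons: strings as they exist in the edition's underlying data,
    # here invented variant readings of one line in two hypothetical witnesses.
    textons = {
        "witness_A": "the reading of this line in witness A",
        "witness_B": "the reading of this line in witness B",
    }

    def traversal_function(view: str) -> str:
        # Reveals a scripton, the string as it appears to the reader,
        # from the textons, depending on the view the user has selected.
        return textons[view]

    # The scripton shown to the reader depends on the chosen view:
    print(traversal_function("witness_A"))

In a static, interpretative edition this mapping is fixed once and for all; in the dynamic, configurative, and textonic cases described above, the dictionary of textons or the traversal function itself is open to change.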
54Aarseth has calculated that these seven variables create a multidimensional space of 576 unique genre positions for text, applied in this case to electronic editions (Aarseth 1997: 64-65). This space offers an alternative to the legacy typologies from conventional editorial theory with which current theory on electronic editions is wrestling. As Aarseth points out, ’the model works both on an abstract, synthesizing level and on a particularizing, predictive one’ (1997: 74). He further explains that the ’open categories approach also allows for a prediction of hypothetical textual modes, by combining functions that are not found together in any existing texts’ (Aarseth 1997: 74). On the synthesizing level, correspondence analyses of existing and envisioned electronic editions on the basis of this traversal model could shed new light on the defining debate and show that the recent participatory models of Robinson and Shillingsburg each occupy just one of these genre positions, next to many others. On a predictive level, the model offers a toolbox for the combination of functions into new genres of editions. But the main advantage of adopting this traversal model is probably that its reductionist perspective makes conceptions of texts, readers, editions, and their limits ’easy to check, criticize, modify, or even reject if necessary’ (Aarseth 1997: 74). As such, this textonomy of electronic editions does not offer a decisive end to the defining debate, but feeds it with another method of analysis, description, and definition.
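The figure of 576 follows directly from the number of values each variable can take in the schematic overview above: 3 × 2 × 2 × 2 × 2 × 3 × 4 = 576. The following sketch, whose labels simply restate that overview and whose closing example is a hypothetical printed reading edition, enumerates the full product:

    from itertools import product

    # Aarseth's seven traversal variables and their possible values,
    # as listed in the schematic overview above (after Aarseth 1997: 62-65).
    variables = {
        "dynamics": ("static", "intratextonic", "textonic"),
        "determinability": ("determinate", "indeterminate"),
        "transiency": ("transient", "intransient"),
        "perspective": ("personal", "impersonal"),
        "access": ("random", "controlled"),
        "linking": ("explicit", "conditional", "none"),
        "user_functions": ("interpretative", "explorative",
                           "configurative", "textonic"),
    }

    # Every combination of values is one genre position:
    # 3 * 2 * 2 * 2 * 2 * 3 * 4 = 576.
    positions = set(product(*variables.values()))
    print(len(positions))  # 576

    # A printed reading edition occupies exactly one of these positions:
    print(("static", "determinate", "intransient", "impersonal",
           "random", "none", "interpretative") in positions)  # True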
Footnotes
1 It would be informative to examine to what extent this is true for non-Anglo-American digital scholarship, for instance in the work of French and German scholars.
2 The main difference between the integrated packages and other programs is that each program in such a package produces output that can be used as input for follow-up programs, so that a continuous editorial procedure from text comparison to typesetting the scholarly edition becomes possible.
3 Even if the manuscript had been prepared on a word-processor, it was commonly retyped by the publisher in a second machine-readable version that included formatting codes.
4 Already in 1967, Martin Kay optimistically claimed that ’[i]n a few years every printing house which wishes to remain competitive will produce a machine-readable version of a text as a natural by-product of the printing process, and it is to be hoped that a systematic effort will be made to insure that this material is not destroyed as it usually is today’. The question is then: what can be done ’to make this data available to linguists and literary scholars and to enable them to profit as they should from the computer facilities that are so rapidly becoming cheaper and more powerful’? (Kay 1967: 171).
5 As long as the printed paradigm remains the model by which the computer is used in assisting text-critical research, this transition will never take place fully. Statements about the computer as a mere tool to facilitate the text-critical process are mostly made by scholars who do not intend to explore the possibilities of the computer as a modelling tool, and who stick to the assistant role of the computer in existing areas of study. Reasons for this attitude can be manifold and include ignorance, resistance, peer pressure, and intentional compliance with certain schools and traditions.
6 Table ronde internationale portant sur ’Les problèmes techniques et éditoriaux des éditions critiques’ [international round table on ’The technical and editorial problems of critical editions’], 28-29 June 1984, Paris: CNRS. The proceedings of this meeting are published in Catach (1988).
7 Towards the end of the 1980s machine-readable texts of literary titles became available as separately distributed products on CD-ROM or as part of electronic text centres. Often, the texts were encoded for use with specific analytical software packages such as WordCruncher or Micro-OCP.
8 ’I therefore call hypertext any text derived from an earlier text, either by simple transformation (which we will from now on call simply transformation) or by indirect transformation, which we will call imitation’ (Genette 1982: 16).
9 Nelson describes it as ’a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper’ (Nelson 2003/1965: 144).
10 Namely genetic editions of the beginning of Flaubert’s Hérodias by Lebrave and a genetic path through one of Joyce’s Finnegans Wake notebooks by Daniel Ferrer (Ferrer 1995).
11 The paper was written in 1993 but published in 1996.
12 In ’Critical Editions, Hypertexts, and Genetic Criticism’ Tanselle makes two explicit points about hypertext. First, he defends the graphical possibilities of hypertext: ’Just as a scholarly edition in codex form is considered deficient if it does not provide a record of variant readings, a hypertext edition (or "archive") should be regarded as inadequate if it does not offer images of the original documents, both manuscript and printed’ (1995b: 591). Second, he defends the inclusion of a critical text in a hypertext edition/archive: ’Indeed, the point can be made more positively: that critically reconstructed texts ought to be included within the collection of texts available in a hypertext edition’ (Tanselle 1995b: 591-2).
13 When rereading his essay in 1997, Lavagnino pointed out that ’this essay looks to the future because most of its suggestions about things we need to be able to do with texts have not been implemented’, http://hdl.handle.net/2027/spo.3336451.0003.112 [accessed 8/3/2010].
14 In this ESE discussion, Morris Eaves asks McGann whether he thinks critical editing is dead. McGann answers that critical editing is certainly not dead and that a ’full bowersian critical editing process’ is justified ’to clear a problematic documentary record’.
15 Issues of digital surrogacy, authentication of digital images, and questions of photographic truth are passed over in this essay.
16 Willard McCarty defined ’mimetic fallacy’ as ’the idea that a digitized version will be able to replace its non-digital original’ and ’complete encoding fallacy’ as ’the idea that it is possible completely to encode a verbal artefact’ (McCarty 2003, cited in Dahlström 2004: 24).
17 See also Peter Shillingsburg’s discussion of five formal orientations in editing, in particular the documentary, sociological, and bibliographic orientation (1996a: 15-27).
18 ’For digital facsimiles this would include the techniques used for photographing and/or scanning and information about post-scanning processing of the image files. For texts, transcription work and encoding (including proofreading) should be documented.’ (Ore 2004: 42)
19 Ergodic is derived from the Greek ἔργον (work) and ὁδός (path).
20 Further paraphrasing Aarseth (1997: 1-2): if the ergodic edition is to make sense as a concept, there must also be non-ergodic editions, where the effort to traverse the text is trivial, with no extranoematic responsibilities placed on the reader except (for example) eye movement and the periodic or arbitrary turning of pages or scrolling of the screen. Examples of such editions include printed reading editions that are linear documents and simple text archives that represent only one version of the text.
21 This opens up the possibility not only of seeing the electronic edition as an electronic infrastructure for script acts, but also of applying Shillingsburg’s Script Act Theory to the edition proper (Shillingsburg 2006b: 40-79) – the electronic edition as a model of self-reference.
22 Aarseth defines the traversal function of a text as ’the mechanism by which scriptons are revealed or generated from textons and presented to the user of the text’ (Aarseth 1997: 62).
23 This expansion applies Aarseth’s original model to electronic editions and quotes, paraphrases, and adapts Aarseth’s original text (1997: 62-64). Aarseth himself has invited readers to ’use these terms in any way you find pleasurable, please rewrite them, refute them, or erase them, if you want’ (Aarseth 1997: 183).
24 McGann and Drucker’s Ivanhoe Game can be found at http://www.ivanhoe-game.org/ [accessed 10/3/2010].
Author
Edward Vanhoutte is director of research at the Royal Academy of Dutch Language and Literature, Ghent, Belgium, and head of the Centre for Scholarly Editing and Document Studies (CTB). He pioneered electronic textual editing in Belgium and the Netherlands, is Associate Editor of Literary and Linguistic Computing, and is the author and editor of books and articles on (electronic) textual editing and humanities computing.