7. The Evaluation and Peer Review of Digital Scholarship in the Humanities

Experiences, Discussions, and Histories

Introduction

The project of publishing guidelines and advocacy documents for the evaluation of digital scholarship in the humanities has gained particular momentum since c. 2002. This ‘turn’ is unlikely to have been spontaneous, and thus various questions follow: which contexts and what interests shaped the work of devising guidelines for the evaluation of digital scholarship? What were the digital humanities communities’ experiences of the evaluation of digital scholarship during the years before c. 2002? And what trajectory has the evaluation of digital scholarship followed over the longer term? In short: what is the history of the take-up and development of evaluative methods for the assessment of digital scholarship in the humanities? In this chapter, I explore these wider questions by looking more closely at how the evaluation of digital scholarship was experienced and discussed by the humanities computing community during the years before c. 2002. This chapter contributes to this volume by presenting an overview of the trajectory and contours of the debates about digital scholarship and communication that occurred in the humanities computing community. Chronologically ‘upstream’ of the digital humanities, the material presented in this chapter offers grounded preliminary and historical material that explains some of the longer-term origins of many of the debates that still concern the digital humanities, debates that are discussed in the introduction to this volume in particular, but in other chapters too.

Digital humanities is often said to have developed from humanities computing, whose origins, in turn, are often traced to approximately 1949.1 As will be shown below, conversations about the evaluation of the field’s digital scholarship, as well as a few projects that sought to tackle its various aspects, can be documented from at least the 1960s. Yet, it is in the first decade of the twenty-first century that a cluster of publications and projects about evaluation can be noted, many of them influential. In 2002, the MLA (Modern Language Association) Committee on Information Technology published ‘Guidelines for Evaluating Work in Digital Humanities and Digital Media’.2 These guidelines have proved to be a significant starting point for those seeking direction about the evaluation of digital scholarship.3 In 2004, the Networked Infrastructure for Nineteenth-Century Electronic Scholarship (NINES) was set up with aims that included its functioning as a peer review collective for digital work about the nineteenth century.4 The work on evaluation conducted by Geoffrey Rockwell from 2005 to 2008 was officially released by the MLA’s Committee on Information Technology in 2008.5 New peer-reviewed platforms for the digital publication of multimedia scholarship (for example, Vectors) began publishing in 2005.6 Around this time, the MLA’s Committee on Scholarly Editions incorporated electronic editions into its guidelines for print editions.7 In 2006, the MLA also stated that ‘[d]epartments and institutions should recognize the legitimacy of scholarship produced in new media, whether by individuals or in collaboration, and create procedures for evaluating these forms of scholarship’.8 That same year, the influential ACLS (American Council of Learned Societies) Commission on Cyberinfrastructure for the Humanities and Social Sciences also emphasised the importance of recognising digital scholarship, including evaluating it appropriately.9 In 2007, the report ‘University Publishing in a Digital Age’ urged universities to show ‘a renewed commitment to publishing in its broadest sense’.10

The documents and projects outlined above are, on the whole, in favour of digital scholarship and committed to devising robust ways of assessing it. Yet, regarding the 2006 statement quoted above from the MLA (about the worth of digital scholarship and the necessity of devising approaches to its assessment), the very fact that such a statement was necessary implies that the reception and evaluation of digital scholarship remained problematic. On my initial reading of the documents cited above, given their emphasis on the necessity of evaluating and recognising digital scholarship, I assumed that the imagined audience for such calls was the wider academy. Yet, I began to wonder about attitudes to, and experiences of, evaluation that may have existed in the humanities computing community itself. Was the community united in favour of digital scholarship being formally evaluated? Was there internal agreement about what constituted digital scholarship and appropriate forms of evaluation?

In order to explore these questions further, and thus to understand more about the prehistory of the evaluation of digital scholarship, I will survey some of the conversations about peer review and evaluation that the humanities computing community recorded in the years before c. 2002. In particular, I will uncover and discuss attitudes to, and experiences of, the evaluation of digital, or digitally-derived, research recorded in internet and web forums, publications, and oral history interviews.11

Because humanities scholarship is usually evaluated via peer review, I will survey conversations about one or both of the terms ‘peer review’ and ‘evaluation’. I define both terms broadly to include any kind of assessment (whether qualitative or quantitative) of digital scholarship that is discussed in the literature I have surveyed. So too, I have adopted a broad definition of digital scholarship that includes not only digital or digitally-derived scholarship but also scholarship that has been published digitally. I do this on account of the practice of ‘double-publication’, which has long been at play in the digital humanities, whereby a publication about a digital humanities artefact or tool is required in addition to the digital object or resource itself.12

A growing body of literature addresses the evaluation of digital scholarship and the issues connected to it. Important discussions include the social and dialogic contexts that might be cultivated at a departmental level to support the longer-term evaluation of digital scholarship,13 criteria for evaluative committees who assess digital scholarship,14 and the particular circumstances that often underpin digital scholarship, for example, collaboration.15 Publications also advocate for the necessity of evaluating digital scholarship,16 explore ways in which particular communities might contribute to evaluation,17 and discuss some approaches to assessing emerging forms of digital scholarship.18 Yet, the wider history of the evaluation and peer review of digital scholarship is little addressed (while the history of peer review in the humanities also requires further research).19 This chapter seeks to explore this topic by sketching the ways in which peer review and evaluation were discussed and understood by the humanities computing community during the years before c. 2002.

Experiences and Discussion of Evaluation c. 1963–2001

The discussions and debates that are summarised below are founded on the following questions: what constitutes a digital research output? Which outputs should be formally evaluated? In line with what criteria could they be evaluated? How should the peer review process be organised and managed, and who might participate in it? What do bibliometrics imply about the perceived impact and quality of digital scholarship? The responses these questions elicited are often underscored by a certain ambivalence about the robustness and fair-mindedness of the process of evaluating digital scholarship. The question of whether digital scholarship could even get a fair hearing seems to be raised implicitly. At the time of writing, digital humanities is apparently in a strong position, so this attitude might seem puzzling to readers of this chapter. Yet, it is an important backdrop against which many of the conversations summarised below should be read, and I will therefore briefly address it and its wider contexts.

Individual and Group Experiences of Making Digital Scholarship

References to the negative evaluations that some humanities computing scholars received of their digital work feature in oral history interviews, listserv discussions, and formal publications. Of course, negative evaluations were not a universal experience, as evidence from Julianne Nyhan and Andrew Flinn’s oral history interviews demonstrates: Susan Hockey and John Nitti, for example, recalled the positive collaborations they pursued with established humanities scholars.20 However, others readily recalled the opposition their work met with. For example, Mary Dee Harris reported that: ‘I got a lot of flak from the Department about my work. One of the graduate advisers swore that I was trying to destroy literature by using the computer.’21 John Burrows and Hugh Craig discussed the difficulties they sometimes faced when trying to publish their scholarship in ‘mainstream’ English journals, as opposed to dedicated humanities computing or digital humanities publications.22

Some discussions on Humanist (an electronic seminar, discussed further below) tally with these experiences. For instance, a post to Humanist emphasised that there existed an almost ‘universal disregard for work in computing among the committees that govern hiring, tenure, and promotion’.23 Another post pointedly asked: ‘Do tenure and promotion committees value programming, software reviewing, and other of the activities [sic] so typical of HUMANIST addressees?’.24 These sentiments find an echo in formally published literature too. A. Q. Morton, for example, recalled in 1963 how his work was dismissed by the humanities journals in which he first sought to publish it:

The first technical article I wrote I sent to the Scottish Journal of Theology. It arrived back within three days. I sent it to the Expository Times. A letter came back: ‘Dear Mr. Morton, I do not understand this but I am quite sure that if I did understand it, it would be of no value.’ I sent it to Science News, whose editor came up to see me about immediate publication.25

Joel D. Goldfield echoed this experience of dismissal when, in 1993, he wrote approvingly of Paul Fortier’s strategy for de-centring his computational techniques and data:

At this juncture I therefore accept Paul Fortier’s politically wise approach in his study on Gide’s L’immoraliste: statistical sophistication in stylometric and thematic analysis, as well as statistical details implicit in the interpretation, are relegated to appendices or simply not included in the publication.26

Indeed, Joseph Raben, who was for many years the editor of Computers and the Humanities (the field’s first academic journal), indicated that the problem was a systemic one. In 1991, he wrote:

for many individuals the mere existence of this journal [Computers and the Humanities] has meant the difference between academic success and failure. Promotion and tenure committees, restricted in their vision to ‘legitimate publication’ have often been satisfied by articles that have passed our referees and appeared in our pages. Few of these articles would have been appropriate for the conventional journals of their respective disciplines.27

Other conversations indicate that it was not only the use of a computer for research that some considered problematic; merely publishing work on a digital platform could also be viewed with suspicion. An example from 1987 speaks to this. The idea of setting up an electronic journal for the field of humanities computing was proposed, with peer review to be conducted by an editorial board.28 The idea was rejected on various grounds, including the proposed medium of publication: it was felt that few researchers would contribute to it, as the electronic format held too many risks.29 Willard McCarty also claimed that electronic publication in the humanities was devoid of ‘professional kudos’ and had the potential to ‘pre-empt […] conventional [publication]’.30

The inauspicious reception that digital scholarship sometimes received from the wider community partly explains, I believe, the ambivalence that some members of the humanities computing community expressed towards the evaluation of digital scholarship in the conversations summarised below.31 These conversations will now be presented, beginning with discussions about which outputs were considered amenable to peer review.

What Should Be Evaluated?

One of the richest sources of discussion about experiences of, and attitudes to, the evaluation and peer review of digital scholarship that I have encountered is contained in the archives of Humanist. Humanist was established in 1987 on the BITNET/NetNorth/EARN node in Toronto, Canada, and run on Listserv software.32 It was styled as an academic seminar, and debates about the evaluation of digital scholarship occurred on it from an early stage. In the earliest Humanist posts, questions about peer review are somewhat inward-looking: one early question was whether a form of peer review, in the sense of moderation, should be applied to Humanist itself,33 a proposal that was ultimately rejected.34 Another question was whether posts to Humanist might be peer reviewed so that they could be counted by tenure committees.35 Discussions about whether posts to a listserv group might be peer reviewed now seem antithetical to the participatory and interactive paradigm that currently characterises many digitally-mediated communication platforms. Such conversations remind us of the novelty of the technology at that stage, and they prompt questions about how social contexts and dialogue, and not just technological affordances, shaped the take-up of computing in the humanities. As we shall see, over the longer term, social and dialogic factors also played a role in persuading the humanities computing community of the necessity of formally evaluating digital scholarship in the humanities.

Conversations on Humanist soon turned to the absence of peer review mechanisms for humanities computing scholarship (including electronically published articles and studies, editions, software, code, tools, other kinds of computational work, and software reviews). In the discussions to which this observation gave rise, or with which it interlinked, an ambivalence towards the field of humanities computing itself is palpable. When summarising the first two months of conversations that had taken place on Humanist, McCarty noted that frustration had been expressed with the ‘juvenality [sic] of an emerging discipline: the lack of peer review, hence of quality-control’.36 Indeed, in a post to Humanist, McCarty argued that peer review was essential to reforming the status quo:

The second reason for the disregard from our academic masters and colleagues may be the often poor quality of the writing (and sometimes thinking) associated with computing. The informality of the medium may have quite a bit to do with this. Mainframe editors are in general so primitive and screen images so difficult to proofread that we are tempted to slap something down and dash it off without much thought. We can do something about this, it has been suggested, by peer-review and editorial intervention.37

Responses to McCarty were mixed and, over the longer term, doubts about the imprimatur of peer review continued to be raised. The question of how evaluation intersected with disciplinary identity was evoked when peer review was discussed as a hallmark of the established humanities, a sector from which humanities computing tended to differentiate itself: ‘if we really boil things down to their foundations and meanings, we may find that a lot of them are rubbish and that the Mainstream with its Peer Reviewers is largely unsatisfactory’.38 All the same, some posts indicate a tentative acceptance of the necessity of some form of peer review or formal evaluation of the field’s scholarship. For example, by 1996/7, a contentious critique of the Text Encoding Initiative (TEI) Guidelines observed that ‘the TEI Guidelines have never been subjected to significant peer review’.39 Whatever the accuracy of the claim, that the guidelines should be criticised in such terms implies that peer review was increasingly seen as fundamental.

Which Evaluative Criteria?

Various concerns were also raised about the difficulties of actually implementing peer review. It was recognised that, in order to elaborate peer review guidelines, complex, fundamental, and likely contested questions about what constituted quality would have to be addressed. For example: ‘Both Charles Faulhaber and Willard McCarty imply that peer review is enough to put e-work on an equal footing with conventional work. But are there any criteria for peer review? […] without some rules, isn’t it a meaningless criterion?’.40

Reaching a consensus about how quality could be identified was just part of the task. Identifying those with the technical skills necessary to evaluate such work was also germane, as was the ongoing problem of what could and should count as scholarship:

The problem is that none of the people in my department would be able to judge work in computers, since they use the computer mostly as a typewriter, with some network involvement. Not to badmouth my own department, this would be true of most departments I know of. […] Next, there is the problem of who does the work. I know of people who have published concordances, for example, who downloaded the text, outsourced the programming, made a KWIC concordance, so there was little formatting, got it published and submitted it to the tenure committee. Such work should not count. On the other hand, if you write a concordance program yourself, no matter how good, you will have a hard time getting any credit for it.41

Organising the Peer Review Process

The conversations on Humanist ranged over various possibilities for how peer review could be organised and implemented. Overall, one is struck by the conservative nature of these posts. It is curious to see fairly standard humanities approaches being mooted as viable means of assessing scholarship that often did not fit into the pre-determined categories of the mainstream humanities. The old chestnut of appointing a group of esteemed scholars to devise evaluative guidelines was proposed:

A procedure should be established by professional organizations and the e-text center for the peer-reviewing of annotated e-texts if they are tagged beyond screen mark-up (e.g., morphological and literary tagging). This reviewing could take place prior to or following in-house editing, depending on the expertise of the reviewers.42

A post about how peer review could be applied to a pre-publication initiative suggested that:

an editorial board, as prestigious as possible could be organized and could begin selecting the better papers so as to provide a quality of intellectual certification through some classical peer review […] The selected papers could then be marked in such a way that users would know that they are fully certified as if they had been published in a normal, peer-reviewed journal.43

Some posts did consider a more innovative form of peer review that could potentially subvert established hierarchies:

Could the use of […] ‘e-review’ methods eventually supplant the existing system of peer review used by conventional publishers (the lack of which is one of the reasons libraries are reluctant to buy self-published books)? Are there any other Humanists out there who have experimented in self-publishing?44

A more differentiated approach to peer review was also suggested:

it seems clear that the user needs to know whether what he or she has on screen is worth spending time puzzling over. Peer-review seems to me essential for some kinds of online publication (journals, usw.), but not for everything. Given a disciplined self, self-publication can be (a) a powerful inducement for our colleagues to get involved, and (b) a way of getting into the public light interesting, valuable material that otherwise would stay in darkness. The more conversational (like Humanist), the more experimental the less peer-review seems appropriate.45

How and why it was that some of these processes went on to be largely adopted by the field remains an open question for further studies in this area to explore.

Implicit Peer Review

A prominent debate that played out on Humanist, and continued in Computers and the Humanities, again showed the complicated relationship that the field of humanities computing had with evaluation and peer review. In 1992, Mark Olsen criticised humanities computing for its ‘intellectual failure’, as evidenced by the implicit and explicit peer review of its work:

Our failure is indicated by both explicit and implicit peer review of our work. Implicitly by the intellectual failure of humanities computing research to be cited by or published in (with a few notable exceptions) mainstream scholarship. Bluntly put, scholars in our home disciplines (literature, history, etc.) seem to be able to safely ignore the considerable literature generated by humanities computing research over the years. Explicit peer review is indicated, in part, by the fact that humanities computing hasn’t been invited to the banquet. We don’t *have* to be invited precisely because the results of so much work can be ignored by scholarship in our home disciplines.46

The following year, he published a more detailed version of this argument in a special issue of Computers and the Humanities, together with a set of responses from the wider humanities computing community. Olsen wrote that his argument had caused ‘considerable debate concerning the proper methods of disciplinary evaluation’,47 and he again emphasised the importance of peer review, including the notion of implicit peer review and what it said about the field:

Given the dominance of peer review in scientific and humanities research, as demonstrated in publication evaluation, grant applications, and hiring/tenure decisions, I find it very difficult to discount the importance of the most objective measure of the value of our work to our peers: the decision to read, to use, and to publish our conclusions.48

Goldfield’s response to Olsen acknowledges humanities computing’s marginalisation, but he nonetheless detects the advent of ‘a long-awaited, but still incipient, succès d’être enfin parvenus’.49 Arguing that the field was ‘battling on two fronts, one scholarly and one political’,50 he discusses its ambivalent attitude towards the peer review of digital scholarship:

I find fallacious [Olsen’s] implicit assumption that studies of interest, new truths, and allegations quickly find their way into the mainstream in the humanities. I would submit that there are two compelling factors working against mainstream entry and fertilization in our quantitative interdiscipline. The first is the inertia of mainstream journals’ reviewers and possibly editors, and the unwillingness of the studies’ authors to submit their work for peer review, especially in a form palatable for the keepers of the keys.51

Nevertheless, during the years under discussion various peer review initiatives were undertaken. For example, the ACH Newsletter includes a notice that IBM had funded the MLA and the ‘Center for Applied Linguistics to implement a system of peer review for language-oriented software written for IBM microcomputers and compatible hardware’.52 Yet, the impact of such initiatives on the humanities computing community appears to have been limited. In 2003, the lack of progress made in the context of peer review was again addressed, and the community was once more reminded that ‘the production of peer reviewed scholarship is the single most important activity for professional advancement in academe, including tenure, promotion, and salary increases’.53

From the late 1990s onwards, there were notable signs that the rejection of the digital per se was coming to an end. One contributor to Humanist wrote of developments at UC Berkeley:

I have finally gotten my hands on the formal statement proposed by Berkeley’s Library Committee to the campus’s Academic Senate, with respe[c]t to faculty review and different media: ‘In the course of reviewing faculty for merit and promotion, when there are grounds for believing that processes of peer review and quality assurance are the same in different media, equal value should be attached to the different forms of scholarly communication’.54

Other notable developments include the announcement of a new electronic imprint from the University of Virginia Press, and its intention to

look nationally and internationally for pioneering digital work that emphasizes both creative scholarship and innovative technology. Each project published will be approved by the press’s editorial board and will receive extensive peer review just as print publications do.55

In 2002, an essay ‘recently published by the Knight Higher Education Collaborative [argued that] universities and colleges should establish policies declaring peer-reviewed work in electronic form suitable for consideration in promotion and tenure decisions’.56 Nevertheless, the essay noted that some scholars still needed reassurance that electronic publication would not harm their careers.57

Conclusion

The material cited above shows that many fundamental conversations took place in the humanities computing community in the years before c. 2002 about what constituted academic and technical excellence in digital and digitally-derived scholarship, about the appropriateness of peer review as a mechanism for evaluating digital scholarship, and about whether the digital was a suitable medium for publication. On the whole, the evidence I have gathered here suggests the community had mixed experiences of, and attitudes toward, peer review and formal evaluation. While a consensus does seem to have been reached about the importance of formal evaluation for the emerging discipline, this review indicates that it took time to build such a consensus (and, of course, agreement was not necessarily unanimous). Discussion and debate seem to have played a crucial role in building this consensus over the longer term.

External factors, such as the growing acceptance of digital publication, may also have offered the community an important signal that change was on the horizon and that it would need to respond accordingly. It also seems reasonable to propose that the wider position of the digital humanities, which by c. 2002 was undergoing a process of institutionalisation, made the requirement for evaluative guidelines all the more urgent.58 Indeed, Matthew G. Kirschenbaum has noted a ‘rapid and remarkable rise’59 of the term ‘digital humanities’ around this time. He has written of the ‘surprisingly specific circumstances’60 that arguably led to the rise of the term, which included the preparation (from c. 2001 until its publication in 2004) of Blackwell’s Companion to Digital Humanities, the establishment of the Alliance of Digital Humanities Organizations (ADHO) in 2005, and the establishment of the Digital Humanities initiative by the NEH (National Endowment for the Humanities) in 2006 (which became the Office of Digital Humanities in 2008).61 He wrote that ‘[i]n the space of a little more than five years, digital humanities had gone from being a term of convenience used by a group of researchers who had already been working together for years to something like a movement’.62 Advances in the evaluation of digital scholarship, such as those I have discussed above, are not included in Kirschenbaum’s list. Is it merely a coincidence that peer review efforts bore a particular kind of fruit, and exerted a specific influence, around the time of the ‘rise’ of the term digital humanities? Is it plausible to suggest that progress made in the evaluation of digital scholarship contributed to the institutionalisation of the digital humanities? And, if that is the case, what role might evaluation play in the ongoing development and institutionalisation of the digital humanities? These are questions that subsequent research about the history of the peer review and evaluation of digital scholarship might take up.

The institutionalisation of the digital humanities is in medias res. Much progress has been made in important areas like faculty appointments, the establishment of dedicated teaching programmes, and the setting up of prestigious centres.63 Nevertheless, much remains to be done to address ongoing questions that are pertinent to securing a firmer foothold, including, for example, urgent work on areas like the epistemology of the digital (such as appears in chapters 3 and 6 of this volume), and on analysing and theorising the multi-layered and sometimes tacit scholarship that informs and is embodied in the computational artefacts the field creates.64 The outcomes of this research should also inform future iterations of guidelines on the evaluation of digital scholarship.

Elsewhere, I have observed a dichotomy between the radical discourse of the digital humanities — with its frequent talk of revolutions — and its apparent conformity with the established norms of the academy:65 for example, the use of (sometimes) blind, pre-publication peer review to evaluate the scholarship submitted to its major journals. One wonders why more experimental and radical approaches to the evaluation of digital scholarship are not being more extensively explored.66 Is it because of the considerable barriers to open peer review that still exist?67 Or is it because the price of the field’s institutionalisation into the academy has been the abandonment of its radical agenda (if not its discourse)? As intimated by Goldfield, peer review is intimately connected with disciplinary identity.68 Our approaches to the evaluation of digital scholarship in the coming years are of crucial importance, not only in terms of the field’s continuing institutionalisation but also in terms of what peer review can reveal about the digital humanities’ evolving disciplinary identity.

Bibliography

American Council of Learned Societies, Our Cultural Commonwealth: The Report of the American Council of Learned Societies Commission on Cyberinfrastructure for the Humanities and Social Sciences (New York: American Council of Learned Societies, 2006), https://www.acls.org/uploadedFiles/Publications/Programs/Our_Cultural_Commonwealth.pdf

Anderson, Steve, and Tara McPherson, ‘Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship’, Profession (2011), 136–51 https://doi.org/10.1632/prof.2011.2011.1.136

Brown, Laura, Rebecca Griffiths, and Matthew Rascoff, ‘University Publishing in a Digital Age’, The Journal of Electronic Publishing, 10.3 (2007), https://quod.lib.umich.edu/j/jep/3336451.0010.301?view=text;rgn=main, https://doi.org/10.3998/3336451.0010.301

Chubin, Daryl E., and Edward J. Hackett, Peerless Science: Peer Review and U.S. Science Policy (Albany, NY: SUNY Press, 1990).

Finkel, Raphael, et al., ‘The Suda On Line (www.stoa.org/sol/)’, Syllecta Classica, 11 (2000), 178–90, https://doi.org/10.1353/syl.2000.0005

Fitzpatrick, Kathleen, ‘Peer Review, Judgment, and Reading’, Profession (2011), 196–201, https://doi.org/10.1632/prof.2011.2011.1.196

— ‘Revising Peer Review’, Contexts, 11.4 (2012), 80, https://doi.org/10.1177/1536504212466347

Galey, Alan, and Stan Ruecker, ‘How a Prototype Argues’, Literary and Linguistic Computing, 25.4 (2010), 405–24, https://doi.org/10.1093/llc/fqq021

Goldfield, Joel D., ‘An Argument for Single-Author and Similar Studies Using Quantitative Methods: Is There Safety in Numbers?’, Computers and the Humanities, 27.5–6 (1993), 365–74, https://doi.org/10.1007/BF01829387

‘IBM Grants’, ACH Newsletter, 9.3 (1987).

Jockers, Matthew L., Macroanalysis: Digital Methods and Literary History, 1st ed. (Urbana: University of Illinois Press, 2013), https://doi.org/10.5406/illinois/9780252037528.001.0001

Kirschenbaum, Matthew G., ‘What Is Digital Humanities and What’s It Doing in English Departments?’, ADE Bulletin (2010), 55–61, https://doi.org/10.1632/ade.150.55

McCarty, Willard, ed., Humanist Discussion Group Archive (1987–2018), http://dhhumanist.org/

— ‘Humanist So Far: A Review of the First Two Months’, ACH Newsletter, 9.3 (1987).

— ‘HUMANIST: Lessons from a Global Electronic Seminar’, Computers and the Humanities, 26.3 (1992), 205–22, https://doi.org/10.1007/bf00058618

McGann, Jerome, ‘On Creating a Usable Future’, Profession (2011), 182–95, https://doi.org/10.1632/prof.2011.2011.1.182

McPherson, Tara, ‘Scaling Vectors: Thoughts on the Future of Scholarly Communication’, Journal of Electronic Publishing, 13.2 (2010), https://doi.org/10.3998/3336451.0013.208

Modern Language Association of America, ‘Guidelines for Evaluating Work in Digital Humanities and Digital Media’, Modern Language Association (2012), https://www.mla.org/About-Us/Governance/Committees/Committee-Listings/Professional-Issues/Committee-on-Information-Technology/Guidelines-for-Evaluating-Work-in-Digital-Humanities-and-Digital-Media

Modern Languages Association of America Task Force for Evaluating Scholarship for Tenure and Promotion, Report of the MLA Task Force on Evaluating Scholarship for Tenure and Promotion (New York: MLA, 2006), http://www.mla.org/pdf/taskforcereport0608.pdf

Morton, A. Q., ‘A Computer Challenges the Church’, The Observer (1901–2003) (3 November 1963), p. 21.

Moxham, Noah, and Aileen Fyfe, ‘The Royal Society and the Prehistory of Peer Review, 1665–1965’, The Historical Journal, 61.4 (2018), 863–89, https://doi.org/10.1017/S0018246X17000334

Nowviskie, Bethany, ‘Where Credit Is Due: Preconditions for the Evaluation of Collaborative Digital Scholarship’, Profession (2011), 169–81, https://doi.org/10.1632/prof.2011.2011.1.169

Nyhan, Julianne, and Andrew Flinn, Computation and the Humanities: Towards an Oral History of Digital Humanities, 1st ed. (Cham, Switzerland: Springer, 2016).

Olsen, Mark, ‘Critical Theory and Textual Computing: Comments and Suggestions’, Computers and the Humanities, 27.5–6 (1993), 395–400, https://doi.org/10.1007/BF01829390

Pfannenschmidt, Sarah L., and Tanya E. Clement, ‘Evaluating Digital Scholarship: Suggestions and Strategies for the Text Encoding Initiative’, Journal of the Text Encoding Initiative (2014), 7, https://doi.org/10.4000/jtei.949

Raben, Joseph, ‘Humanities Computing 25 Years Later’, Computers and the Humanities, 25.6 (1991), 341–50, https://doi.org/10.1007/bf00141184

Rockwell, Geoffrey, ‘On the Evaluation of Digital Media as Scholarship’, Profession (2011), 152–68, https://doi.org/10.1632/prof.2011.2011.1.152

Schreibman, Susan, Laura Mandell, and Stephen Olsen, ‘Introduction’, Profession (2011), 123–201, https://doi.org/10.1632/prof.2011.2011.1.123

Sinclair, Stéfan, et al., ‘Peer Review of Humanities Computing Software’, in ALLC/ACH 2003 — Conference Abstracts ([n.p.], 2003), pp. 143–45.

Tattersall, Andy, ‘For What It’s Worth: The Open Peer Review Landscape’, Online Information Review, 39.5 (2015), 649–63, https://doi.org/10.1108/OIR-06-2015-0182

Unsworth, John, ‘Digital Humanities Beyond Representation’ (Orlando, FL: University of Central Florida, 2006), http://www.people.virginia.edu/-jmu2m/UCF/

Notes

1 See, for example, John Unsworth, Digital Humanities Beyond Representation (Orlando, FL: University of Central Florida, 2006), http://www.people.virginia.edu/-jmu2m/UCF/

2 Modern Language Association of America, ‘Guidelines for Evaluating Work in Digital Humanities and Digital Media’, Modern Language Association (2012), https://www.mla.org/About-Us/Governance/Committees/Committee-Listings/Professional-Issues/Committee-on-Information-Technology/Guidelines-for-Evaluating-Work-in-Digital-Humanities-and-Digital-Media

3 See, for example, Geoffrey Rockwell, ‘On the Evaluation of Digital Media as Scholarship’, Profession (2011), 152–68, https://doi.org/10.1632/prof.2011.2011.1.152

4 Jerome McGann, ‘On Creating a Usable Future’, Profession (2011), 182–95, https://doi.org/10.1632/prof.2011.2011.1.182. Notable precursors include the collective that was set up in 1998 by Suda online (SOL), which included an innovative form of online peer review of the translations and annotations made to it by users. See Raphael Finkel et al., ‘The Suda On Line (www.stoa.org/sol/)’, Syllecta Classica, 11 (2000), 178–90, https://doi.org/10.1353/syl.2000.0005

5 Susan Schreibman, Laura Mandell, and Stephen Olsen, ‘Introduction’, Profession (2011), 123–201 (p. 127), https://doi.org/10.1632/prof.2011.2011.1.123

6 Tara McPherson, ‘Scaling Vectors: Thoughts on the Future of Scholarly Communication’, Journal of Electronic Publishing, 13.2 (2010), https://doi.org/10.3998/3336451.0013.208

7 See Modern Languages Association of America Task Force for Evaluating Scholarship for Tenure and Promotion, Report of the MLA Task Force on Evaluating Scholarship for Tenure and Promotion (New York: MLA, 2006), p. 42, http://www.mla.org/pdf/taskforcereport0608.pdf

8 Modern Languages Association of America Task Force, Report of the MLA Task Force, p. 11.

9 American Council of Learned Societies, Our Cultural Commonwealth: The Report of the American Council of Learned Societies Commission on Cyberinfrastructure for the Humanities and Social Sciences (New York: American Council of Learned Societies, 2006), p. 34, https://www.acls.org/uploadedFiles/Publications/Programs/Our_Cultural_Commonwealth.pdf

10 Laura Brown, Rebecca Griffiths, and Matthew Rascoff, ‘University Publishing in a Digital Age’, The Journal of Electronic Publishing, 10.3 (2007), https://quod.lib.umich.edu/j/jep/3336451.0010.301?view=text;rgn=main, https://doi.org/10.3998/3336451.0010.301

11 The literature that I surveyed covered the main journals in the field, published from the founding of Computers and the Humanities onwards (Computers and the Humanities; Literary and Linguistic Computing / DSH: Digital Scholarship in the Humanities; Digital Humanities Quarterly; Digital Studies / Le champ numérique; Text Technology; CHWP: Computing in the Humanities Working Papers). I also surveyed the grey literature that I had access to, namely the transactions of Humanist; the newsletter of the Association for Computers and the Humanities (ACH); early issues of the ALLC Bulletin; and online proceedings of the ALLC/Digital Humanities conferences.

12 ‘Scholarship in electronic formats seems to be recognized when done in addition to work in print formats but may place a candidate at risk if presented as the sole or primary scholarly basis for consideration for tenure.’ Modern Languages Association of America Task Force, Report, p. 44.

13 Rockwell, ‘On the Evaluation of Digital Media’.

14 Kathleen Fitzpatrick, ‘Peer Review, Judgment, and Reading’, Profession (2011), 196–201, https://doi.org/10.1632/prof.2011.2011.1.196

15 Bethany Nowviskie, ‘Where Credit Is Due: Preconditions for the Evaluation of Collaborative Digital Scholarship’, Profession (2011), 169–81, https://doi.org/10.1632/prof.2011.2011.1.169

16 Schreibman, Mandell, and Olsen, ‘Introduction’.

17 Sarah L. Pfannenschmidt and Tanya E. Clement, ‘Evaluating Digital Scholarship: Suggestions and Strategies for the Text Encoding Initiative’, Journal of the Text Encoding Initiative (2014), 7, https://doi.org/10.4000/jtei.949

18 Steve Anderson and Tara McPherson, ‘Engaging Digital Scholarship: Thoughts on Evaluating Multimedia Scholarship’, Profession (2011), 136–51, https://doi.org/10.1632/prof.2011.2011.1.136

19 Noah Moxham and Aileen Fyfe, ‘The Royal Society and the Prehistory of Peer Review, 1665–1965’, The Historical Journal, 61.4 (2018), 863–89 (p. 886), https://doi.org/10.1017/S0018246X17000334

20 Julianne Nyhan and Andrew Flinn, Computation and the Humanities: Towards an Oral History of Digital Humanities, 1st ed. (Cham, Switzerland: Springer, 2016), pp. 87–97, 137–56.

21 Ibid., p. 125.

22 Ibid., p. 49.

23 Humanist Discussion Group Archive (1987–2018), 1.49, ed. by Willard McCarty (1987/88), http://dhhumanist.org/. The archives of Humanist that are cited in this chapter are accessible via the following landing page: http://dhhumanist.org/

24 Humanist, 1.47 (1987/88).

25 A. Q. Morton, ‘A Computer Challenges the Church’, The Observer (1901–2003) (3 November 1963), p. 21.

26 Joel D. Goldfield, ‘An Argument for Single-Author and Similar Studies Using Quantitative Methods: Is There Safety in Numbers?’, Computers and the Humanities, 27.5–6 (1993), 365–74 (p. 370), https://doi.org/10.1007/BF01829387

27 Joseph Raben, ‘Humanities Computing 25 Years Later’, Computers and the Humanities, 25.6 (1991), 341–50 (p. 341), https://doi.org/10.1007/bf00141184

28 Humanist, 1.44 (1987/88).

29 Humanist, 1.49 (1987/88).

30 Willard McCarty, ‘Humanist So Far: A Review of the First Two Months’, ACH Newsletter, 9.3 (1987).

31 Though not within the scope of this article, the numerous debates that have taken place in the wider academy that question peer review are presumably also relevant to this. See, for example, Daryl E. Chubin and Edward J. Hackett, Peerless Science: Peer Review and U.S. Science Policy (Albany, NY: SUNY Press, 1990).

32 Willard McCarty, ‘HUMANIST: Lessons from a Global Electronic Seminar’, Computers and the Humanities, 26.3 (1992), 205–22 (pp. 205–06), https://doi.org/10.1007/BF00058618

33 See, for example, Humanist, 1.28 (1987/88).

34 McCarty, ‘HUMANIST: Lessons’, 210–12.

35 Humanist, 1.40 (1987/88).

36 McCarty, ‘Humanist So Far’, p. 2.

37 Humanist, 1.49 (1987/88).

38 Humanist, 14.52 (2000).

39 Humanist, 10.789 (1996/7).

40 Humanist, 12.1040 (1998/99); see also 1.344 (1987/88).

41 Humanist, 12.1050 (1999).

42 Humanist, 5.881 (2085) (1991/2).

43 Humanist, 13.221 (221) (1999/2000).

44 Humanist, 7.453 (836) (1993/4).

45 Humanist, 9.872 (916) (1995/6).

46 Humanist, 6.652 (845) (1992/3).

47 Mark Olsen, ‘Critical Theory and Textual Computing: Comments and Suggestions’, Computers and the Humanities, 27.5–6 (1993), 395–400 (p. 395), https://doi.org/10.1007/BF01829390

48 Olsen, ‘Critical Theory’, 395–96.

49 Goldfield, ‘An Argument for Single-Author’, 371.

50 Ibid., 366.

51 Ibid., 371.

52 ‘IBM Grants’, ACH Newsletter, 9.3 (1987), p. 6.

53 Stéfan Sinclair et al., ‘Peer Review of Humanities Computing Software’, in ALLC/ACH 2003 — Conference Abstracts ([n.p.], 2003), pp. 143–45.

54 Humanist, 13.72 (1999/2000).

55 Humanist, 15.524 (2001/2).

56 Humanist, 15.724 (2001/2).

57 Ibid.

58 By 2013, Matthew L. Jockers, for example, discussed the rapidly institutionalising field thus: ‘Academic jobs for candidates with expertise in the intersection between the humanities and technology are becoming more and more common, and a younger constituent of digital natives is quickly overtaking the aging elders of the tribe. […] Especially impressive has been the news from Canada. Almost all of the “G10” (that is, the top thirteen research institutions of Canada) have institutionalized digital humanities activities in the form of degrees […] programs […] or through institutes […]’. Matthew L. Jockers, Macroanalysis: Digital Methods and Literary History, 1st ed. (Urbana: University of Illinois Press, 2013), pp. 13–14, https://doi.org/10.5406/illinois/9780252037528.001.0001

59 Matthew G. Kirschenbaum, ‘What Is Digital Humanities and What’s It Doing in English Departments?’, ADE Bulletin (2010), 55–61 (p. 56), https://doi.org/10.1632/ade.150.55

60 Kirschenbaum, ‘What is Digital Humanities’, 56.

61 Ibid., 57–58.

62 Ibid., 58.

63 See footnote 58.

64 See, for example, Alan Galey and Stan Ruecker, ‘How a Prototype Argues’, Literary and Linguistic Computing, 25.4 (2010), 405–24, https://doi.org/10.1093/llc/fqq021

65 See Nyhan and Flinn, Computation.

66 See Kathleen Fitzpatrick, ‘Revising Peer Review’, Contexts, 11.4 (2012), 80, https://doi.org/10.1177/1536504212466347

67 See Andy Tattersall, ‘For What It’s Worth: The Open Peer Review Landscape’, Online Information Review, 39.5 (2015), 649–63, https://doi.org/10.1108/OIR-06-2015-0182

68 Goldfield, ‘An Argument for Single-Author’, 372.
