
Ouvrir, partager, réutiliser | Clément Mabi, Jean-Christophe Plantin, Laurence Monnoyer-Smith

Interview

What data release means, and why it matters: an interview with Prof. Christine Borgman

Christine Borgman et Jean-Christophe Plantin

Abstract

In this interview, Christine Borgman discusses the many implications that big data can have for data sharing and archiving, research infrastructures, divisions between disciplines, and scientific collaboration.

Full text

Jean-Christophe Plantin:

We have been talking a lot recently about the advent of big data: first in the sciences, then more and more in the humanities and social sciences. But these disciplines have different traditions of working with data: while the sciences have an established tradition of handling large datasets, the humanities and social sciences typically work with relatively small ones. What do you think would be the most important disruptions to envision concerning research methods in the humanities and social sciences?

Prof. Christine Borgman:

As to definitions of big data, my perspective echoes that of Ralph Schroeder, Eric Meyer, and others: big data need not be “big” in any absolute sense; big is relative to the phenomena under study. Astronomy has always been big data in terms of volume, for example, but the field continually accelerates: each new telescope collects 10 or 100 times as much data as prior instruments. On the other hand, a spoonful of data might drown somebody in other domains. We should think of big data in terms of that qualitative shift rather than as an absolute threshold. In the sciences, what we are seeing, even in astronomy, is that many researchers indeed crunch terabytes of data; they use big visualization programs, they build massive collections from different places, and that is how they do their work. But we also find pockets of astronomy that have very small data. Sometimes, it is because they have written their own proposals to get observations from an instrument; occasionally, they are building their own instruments. As a result, they obtain data that do not interoperate with others’ data, so those data become isolated. If you are getting data from a big space mission, those data may come straight from the mission archive, but data from ground-based instruments are less well archived. We have found astronomers whose current data reside only on servers underneath their desks. I am trying to get people to think about the variety of data within any field; the diversity is not simply a matter of abundance in some fields and scarcity in others.


We are seeing similar patterns in the social sciences: the big flows of Twitter feeds also require data reduction. As to big data in the humanities, one of the case studies in my 2015 book [1] is in Buddhist philology, based on the work of one of my Oxford colleagues. This is an area where scholars will spend their lifetimes looking at representations of 3rd-century manuscripts, where every punctuation mark makes a difference in meaning; that is the level of detail that matters. And yet, the Chinese Buddhist canon, more than a hundred volumes of text, has now been digitized in full semantic markup. With this massive apparatus of tools around the canon, the scholar can search texts, compare phrases, and look for patterns. He or she can select passages, drop them into a Word document, and the text and metadata will render accurately. This is very “big data” compared to the phenomenon under study, which is the transfer of Buddhist texts from one culture to the next. These resources and tools are changing the way scholars work and also giving them new tools to think with. Big data provides a distant lens to see patterns, but also risks obscuring detail. Data reduction and provenance are continuing concerns.
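
To make the idea of searching semantically marked-up text concrete, here is a minimal, hypothetical sketch in Python. The TEI-like element names and the sample fragment are invented for illustration; the digitized canon described here uses a far richer schema.

    import xml.etree.ElementTree as ET

    # A tiny, invented TEI-like fragment: the tags record *meaning* (a
    # doctrinal term, a quotation), not just layout, so scholars can
    # query by concept rather than by raw string matching.
    MARKED_UP_TEXT = """
    <text>
      <div type="chapter" n="1">
        <p>The teacher said: <quote xml:lang="zh-Hant">...</quote></p>
        <p>Here the text discusses <term type="doctrine">impermanence</term>.</p>
      </div>
    </text>
    """

    root = ET.fromstring(MARKED_UP_TEXT)

    # Because concepts are tagged, search can target meaning: find every
    # passage that the editors marked as a doctrinal term.
    for term in root.iter("term"):
        if term.get("type") == "doctrine":
            print("Doctrinal term:", term.text)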


Provenance is often understood in a fairly linear sense: for example, a museum is concerned about who had custody of an object from the time it was first found. If you find any gaps in that chain of custody, you are suspicious of the validity and the veracity of what you have. When you deal with digital materials, provenance is much more complicated: by the time you get a dataset, parts of it may have come from many different places. Interpretation depends on the ability to establish the provenance chains. What has been touched or transformed along the way? Early decisions in capturing data, in deciding things as basic as what is an outlier, and how to fill in missing values in a survey or in a camera image, determine what information is available later. It may not be possible to recover information lost by early decisions. Those decisions often are not documented because they are part of normal practices about how the work is done. Provenance can become an infinite regress, so the question is how far back to go: if you want to pull a dataset from the Sloan Digital Sky Survey [2], which is particularly well documented, when is it necessary to examine the instrument specifications? Do you trust the workflow, the data reduction pipeline that this group of people published, or do you want to go back to raw data and do your own pipeline processing? Big data gives you new ways to think, but data reduction can obscure the origins of data.
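
As a concrete illustration of the provenance chains described above, here is a minimal sketch in Python of an append-only provenance record that travels with each derived dataset. The field names and the telescope example are invented for this sketch; real systems use much richer models, such as the W3C PROV standard.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ProvenanceStep:
        actor: str       # who performed the step (person, pipeline, instrument)
        action: str      # what transformation was applied
        inputs: list     # identifiers of the source datasets
        timestamp: str   # when the step happened

    @dataclass
    class Dataset:
        identifier: str
        provenance: list = field(default_factory=list)

        def derive(self, actor: str, action: str, new_id: str) -> "Dataset":
            """Create a derived dataset, carrying the full chain forward."""
            child = Dataset(new_id, provenance=list(self.provenance))
            child.provenance.append(ProvenanceStep(
                actor=actor,
                action=action,
                inputs=[self.identifier],
                timestamp=datetime.now(timezone.utc).isoformat(),
            ))
            return child

    # Walking the chain answers "what has been touched or transformed
    # along the way?" for a hypothetical observation:
    raw = Dataset("telescope-run-042")
    calibrated = raw.derive("pipeline-v3", "flat-field calibration", "run-042-cal")
    reduced = calibrated.derive("astronomer-a", "outlier removal", "run-042-red")
    for step in reduced.provenance:
        print(f"{step.action} by {step.actor} at {step.timestamp}")

Even a record this simple makes visible the early decisions, such as outlier removal, that later users would otherwise have to take on trust.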

Jean-Christophe Plantin:


You have been studying infrastructures, data sharing, and data reuse practices for a long time [3], and across a variety of disciplines and scholarly communities. What do you think are the biggest challenges “big data” brings to humanistic scholarship, traditionally less system-based and involving smaller datasets?

Prof. Christine Borgman:


Let’s reframe the issue. Instead of starting with “science has an infrastructure that should be mapped to other fields,” we should think about what each field needs. This is a continuing issue, at least since physics made the first big move to open access with arXiv [4]: “it works for physics, it should work for everybody else; we are just transferring.” arXiv did not transfer very well, because what works for physics does not necessarily work for other fields. Classics scholars, for example, are trying to build their own infrastructures, but they do not have resources comparable to physics. The larger questions are more along the lines of, “what kind of infrastructure do they need, who is going to build it, and how can bottom-up and top-down concerns be balanced?” The physical and biological sciences are better organized to speak in a unified voice about their infrastructure needs. The humanities are less able to speak in a unified voice, but scholars are becoming more articulate about saying, “we need these tools and infrastructure,” and they are getting better at collaborating. In terms of funding in the U.S., the National Institutes of Health has nearly ten times as much money as the National Science Foundation, and the National Science Foundation often has about ten times as much money as the National Endowment for the Humanities. These fields simply cannot work at the same scale.


Another challenge for the humanities is that their data and publications remain valuable for much longer periods of time than in the sciences [5]. Part of what they are doing is digitizing very old materials and making them available for broader use. In comparison, a two-year-old article in physics has reached its half-life. Physicists use arXiv to read articles prior to publication. By the time the two-year window of the impact factor has passed, humanities scholars may just be beginning to take notice of a book or article. The publication may not be cited yet. Given this lag time and the long periods that materials remain useful, it would seem the humanities should get more money, because they have these big piles of stuff that need to last longer, but the reverse is usually the case. These fields are wrestling with different kinds of questions.

Britain set up data archives and data deposit requirements earlier than most countries. The Arts and Humanities Data Service (AHDS) [6] was one of them. In 2008, fairly abruptly, it was shut down. That is an interesting story that still needs to be told, but it appears that part of the challenge is determining what to preserve. This is where information studies expertise comes in: the AHDS was more focused on preserving the back end, the end-of-line datasets, and getting them into a dark archive, rather than trying to keep an interactive site going. The data that they preserved, without the associated apparatus, did not find a user community. It was not clear who would reuse those data, how, or why. These experiences raise questions for all fields about which parts of the data and systems should be preserved and which are most likely to be reused. These are the infrastructure questions, which are huge and largely unanswered.

Jean-Christophe Plantin:

Scientists may perceive current incentives to share their data as yet another funding-body requirement, as additional work, or as another form of evaluation of their work. Could we imagine other forms of incentives for sharing datasets or databases?

Prof. Christine Borgman:


In a recent publication [7], we went back to our ten years of studying the Center for Embedded Networked Sensing and looked at the circumstances under which people were willing to share their data, what they shared, when they looked for data from somebody else, and how and when they used data obtained from others. We found that when data sharing does occur, which was not often, it tended to be a private act: “I read your journal article, and I saw some data that might be relevant to my projects, so I contacted you and asked, would you let me see your dataset?” And if I know you, and you told me just what you wanted to know about the data, I might give the dataset to you. But the conditions vary widely. Almost everyone wants an embargo period until they have published the associated papers. However, if your research questions look like they are competing with mine, then I might not give the data to you; or if I do, it is on the condition that I get co-authorship on everything you write from these data, or that I get access to your data in return. Data sharing is often a negotiation or a bartering process. Our findings confirm reports from about 20 years ago that data are valuable resources to barter in collaboration, funding, employment, and other situations.

Data are not simply packages to be exchanged in the way publications are. Much effort is required to document data for one’s own use; even more work is required to make them useful for others. Another reason for not sharing data is that scholars cannot imagine what anybody else would do with them. Why should a scholar spend a week cleaning up data, packaging them, documenting them, and putting them someplace just in case others may wish to use them later? Some scholars are concerned about the use of their data for competitive or misleading purposes. The investments and the benefits often fall to different parties.

Similar concerns arise around biomedical data, with the new electronic record systems: private companies are offering to manage the records in medical clinics so they can get access to all those records. The effort invested in making these data useful for one purpose may allow others to make money if they can aggregate enough resources. Many researchers are concerned about such reuses of their data. At the core of these issues is the lack of agreement on what the “data” are: does data sharing mean releasing the spreadsheet, the specimens, a table, or a paper? These are poorly understood problems and questions. If you ask scholars in the humanities to release their data, they may say, “are you kidding, this is my dowry!” They spend a lifetime building up notes, records, and their personal library. A related problem is the lack of a one-to-one relationship between the dataset and the journal article, which is also a “bioscience” presumption: for example, that a journal article is about 10 pages long and is associated with one dataset and one lab experiment. This is not a generalizable case across fields. What is meant by releasing “the data” is rarely clear.

Jean-Christophe Plantin:

Corporate entities such as Twitter are increasingly used as data providers, for example in communication studies. But they bring additional challenges through terms of use that sometimes prevent researchers from archiving or republishing the accessed data. How might we acknowledge the presence of these actors in research while preserving scholarly requirements?

Prof. Christine Borgman:

You hit on a very thorny problem, and it is also the one to which I was alluding with the biosciences. It is a reproducibility question: what happens when you want to publish an article in a journal that requires you to deposit the data, but your contract with the company specifies that you cannot deposit the data? Certainly with human subjects data, you often cannot deposit the data either, and we are quite accustomed to that. The social sciences have ways of interpreting and documenting human subjects research that are valid and reliable, if not fully reproducible. On the other hand, we have been publishing for hundreds of years without people giving up their data, due to the trust on which the whole scholarly communication system rests. Part of what we are looking for is new ways to embody that trust and to address questions of veracity, which are really up in the air.
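
One widely used workaround in social media research, sketched below in Python under invented file names and record formats, is to deposit only the identifiers of collected posts (a “dehydrated” dataset), so that others can re-retrieve the full content through their own authorized access to the platform rather than redistributing content the terms of use forbid sharing.

    import csv
    import json

    def dehydrate(collected_path: str, ids_path: str) -> None:
        """Extract only post IDs from a locally collected JSON-lines file."""
        with open(collected_path) as src, open(ids_path, "w", newline="") as dst:
            writer = csv.writer(dst)
            writer.writerow(["post_id"])
            for line in src:
                post = json.loads(line)  # one post per line, assumed format
                writer.writerow([post["id"]])

    def rehydrate(ids_path: str, fetch_one) -> list:
        """Re-retrieve full records with a caller-supplied fetcher.

        `fetch_one` stands in for whatever API client the platform's
        current terms of service permit; it is deliberately left abstract.
        """
        with open(ids_path) as src:
            return [fetch_one(row["post_id"]) for row in csv.DictReader(src)]

The deposited file of identifiers can usually be archived and cited, while the content itself is re-fetched by each reuser under their own agreement with the platform.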

Jean-Christophe Plantin:

A forthcoming article in this book shows that big data does not necessarily change scientific methods per se, but rather results in new and unexpected collaborations among researchers with varied profiles. On the other hand, many current debates concern the importance for researchers of learning how to code. Taking the digital humanities as an example, what do you think matters most: learning how to program, or learning how to engage in interdisciplinary collaborations?

Prof. Christine Borgman:

The notion of computational thinking, which Jeannette Wing has been advocating for a long time, is pretty much what Seymour Papert was promoting with Logo in the early 1980s: getting kids to think computationally and to gain numerical literacy. My view (I have a math degree, and now I am an ACM Fellow, so I am supposed to be an expert on these things!) is that I certainly see the value: the fact that I have the technical background and that I did learn how to code gives me a particular way to think about things, and it certainly gives me an advantage in talking to scientists. However, I don’t want to promote the technological determinism that says “everyone needs to learn how to code.” What you want is the right mix of people at the table, and the ability to mobilize an array of expertise.
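
As a concrete illustration of the Logo tradition mentioned here, the sketch below uses Python’s standard turtle module, which descends directly from Logo’s turtle graphics. The square-drawing exercise is a classic first lesson in computational thinking, not an example taken from the interview.

    import turtle  # standard library; opens a graphics window, so it needs a display

    def draw_square(t: turtle.Turtle, side: int) -> None:
        """Decompose the task: one side is move-plus-turn; repeat it four times."""
        for _ in range(4):
            t.forward(side)
            t.left(90)

    if __name__ == "__main__":
        pen = turtle.Turtle()
        draw_square(pen, 100)
        turtle.done()  # keep the window open until the user closes it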


The digital humanities, over the decade or two that I have been following them, continue to be divided between the camp that says “we need more resources and more people to build stuff for us” and those whose attitude is “shut up and code” [8], because nobody is going to do it for us. Johanna Drucker [9] wrote the most succinct piece on this debate, and I explored the challenges in a bit more depth [10]. Sitting around and waiting for librarians, programmers, or others to build tools and infrastructure is not going to be a successful strategy. The physicists and the biologists did not wait for their computer science departments to build them something; they figured out what they needed. Scholars in the humanities have to find ways to articulate what they need, what they want, and to express their imagination for the necessary tools and infrastructure. It is only going to come from within, because people who do not understand what their work is about cannot build suitable infrastructure. We need more engagement. People with backgrounds in the humanities often make excellent programmers, by the way. Someone who studies philology, philosophy, and other areas that require very close reading may have an ideal intellectual orientation to design analytical tools and infrastructure.

Bibliography

Borgman, Christine L., From Gutenberg to the Global Information Infrastructure: Access to Information in the Networked World, Cambridge (MA), MIT Press, 2000.

Borgman, Christine L., Scholarship in the Digital Age: Information, Infrastructure, and the Internet, Cambridge (MA), MIT Press, 2007.

Borgman, Christine L., “The digital future is now: a call to action for the humanities” [on line], Digital Humanities Quarterly, 3(4), 2009, available at <http://digitalhumanities.org/dhq/vol/3/4/000077/000077.html>.

Borgman, Christine L., Big Data, Little Data, No Data: Scholarship in the Networked World, Cambridge (MA), MIT Press, 2015.

Drucker, Johanna, “Blind spots: humanists must plan their digital future,” The Chronicle of Higher Education, 55(30), 2009, available at <http://humanitiesblast.com/wp-content/uploads/2011/10/Blind-Spots.pdf>.

Wallis, Jillian C., Rolando, Elizabeth and Borgman, Christine L., “If we share data, will anyone use them? Data sharing and reuse in the long tail of science and technology,” PLoS ONE, 8(7), 2013: e67332. DOI: 10.1371/journal.pone.0067332.

Notes

1 Christine L. Borgman, Big Data, Little Data, No Data: Scholarship in the Networked World, Cambridge (MA), MIT Press, 2015.

2 The Sloan Digital Sky Survey is an astronomy project that produced a large dataset available for public use: <http://www.sdss.org/>.

3 For example, Christine L. Borgman, Scholarship in the Digital Age…, op. cit.

4 arXiv is an online archive created in 1991 for preprint articles in physics, mathematics, and other areas of the sciences. It is heavily used, with mirror sites around the world: <http://arxiv.org/>.

5 Christine L. Borgman, Scholarship in the Digital Age…, op. cit.

6 <http://www.ahds.ac.uk/>

7 Jillian C. Wallis, Elizabeth Rolando and Christine L. Borgman, “If we share data, will anyone use them? Data sharing and reuse in the long tail of science and technology,” PLoS ONE, 8(7), 2013: e67332.

8 A popular button in the digital humanities community bears this phrase.

9 Johanna Drucker, “Blind spots: humanists must plan their digital future,” The Chronicle of Higher Education, 55(30), 2009, available at <http://humanitiesblast.com/wp-content/uploads/2011/10/Blind-Spots.pdf>.

10 Christine L. Borgman, “The digital future is now: a call to action for the humanities” [on line], Digital Humanities Quarterly, 3(4), 2009, available at <http://digitalhumanities.org/dhq/vol/3/4/000077/000077.html>.

Authors

Christine Borgman is Distinguished Professor and Presidential Chair in Information Studies at the University of California, Los Angeles. Her work spans the fields of information studies, computer science, and communication, where she investigates how scholarship is changing in the digital age. She and her colleagues have conducted research on scholarly communication, digital libraries, scientific data practices, data sharing, collaboration, and infrastructure building and use. Her research encompasses various scales of scientific research projects, ranging from individual scholarship to large-scale infrastructures and collaborative projects, such as the Center for Embedded Networked Sensing (CENS) and the Sloan Digital Sky Survey (SDSS). She is also interested in cross-disciplinary analysis, as attested by her work in the sciences, social sciences, and humanities. She is a Fellow of the American Association for the Advancement of Science (AAAS) and of the Association for Computing Machinery (ACM). Among her awards are career recognition from the Association for Information Science and Technology (ASIS&T), the Coalition for Networked Information, and the University of Pittsburgh. Her newest book, Big Data, Little Data, No Data: Scholarship in the Networked World (Cambridge [MA], MIT Press, 2015), won the PROSE Award from the Association of American Publishers for best book in Computing and Information Sciences. Her prior MIT Press monographs, Scholarship in the Digital Age (2007) and From Gutenberg to the Global Information Infrastructure (2000), each won the ASIS&T Best Information Science Book of the Year award.

Jean-Christophe Plantin is Assistant Professor at the London School of Economics and Political Science, Department of Media and Communications. He investigates the civic use of mapping platforms, the collaborative challenges in big data science, and the evolution of knowledge infrastructures. His research has been funded by the Alfred P. Sloan Foundation, the Gordon and Betty Moore Foundation, the European Regional Development Fund, and the University of Michigan MCubed Program. His work has been published in New Media & Society; Media, Culture & Society; and the International Journal of Communication.

The text and other elements (illustrations, imported supplementary files) are under the OpenEdition Books License, unless otherwise stated.
