
Ouvrir, partager, réutiliser

Clément Mabi, Jean-Christophe Plantin, Laurence Monnoyer-Smith

Réflexion épistémologique sur les pratiques de recherche à partir des données numériques

Big data is not about size: when data transform scholarship

Jean-Christophe Plantin, Carl Lagoze, Paul N. Edwards et Christian Sandvig

Abstract

While debates about “big data” tend to focus on the difficulties and potentials inherent in their size, this chapter shows that the loss of control over the sources of these data is just as important to consider. The uncertain provenance of big data has repercussions for data integrity and can harm scientific research in many ways. An analysis of the introduction of large datasets into weather forecasting and epidemiology shows that more data can be counter-productive, and can even destabilize existing methods. On the basis of these examples, we examine two implications of “big data” for scholarship: transformations of the traditional disciplinary structure of the sciences, as well as of the infrastructure for scholarly communication.

Full text

Introduction

  • 1 Bruno J. Strasser, “Data-driven sciences: From wonder cabinets to electronic databases,” Studies in (...)
  • 2 Gali Halevi and Henk F. Moed, “The evolution of big data as a research and scientific topic: Overvi (...)
  • 3 Karin Knorr Cetina, Epistemic Cultures: How the Sciences Make Knowledge, Cambridge (MA), Harvard Un (...)

1“Very large” amounts of data in science are far from a new phenomenon. Previous eras each saw their own “data deluge.” For example, the expansion of travel following the discovery of the New World brought naturalists an unprecedented number of specimens, observations, and measurements, forcing them to create new classification systems.1 The phrase “big data,” however, connotes something much more recent. It first appeared in the scientific literature in 1970, and the use of the term slowly increased before reaching a peak around 2008. It appeared most often in computer science, but other disciplines are now involved, including electrical engineering, telecommunication, mathematics, and business.2 In the sciences, the phrase’s appearance coincides with the advent of large infrastructures to support data-driven science. For example, the petabytes of data streaming from high-energy physics experiments (studied thoroughly by Knorr Cetina3) or those that are components of the Sloan Digital Sky Survey are certainly “big” in terms of sheer size. A petabyte is approximately one million billion bytes; by comparison, the text of all 32 million books in the US Library of Congress would, if digitized, occupy just 20 terabytes, or one-fiftieth of a petabyte. These “petascale” datasets stress the capacities of computers, networks, and storage systems, as well as the budgets of the institutions that manage them. Even today’s fastest computer networks cannot transfer petascale datasets in a reasonable amount of time; as a result, computation is often moved to the data, rather than the other way round.
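
As a quick check of these orders of magnitude (using decimal prefixes, and deriving the per-book figure directly from the two numbers just quoted):

\[
\frac{20 \times 10^{12}\ \text{bytes}}{32 \times 10^{6}\ \text{books}} \approx 0.6\ \text{MB per book},
\qquad
\frac{20\ \text{TB}}{1\ \text{PB}} = \frac{20 \times 10^{12}}{10^{15}} = \frac{1}{50}.
\]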

  • 4 Chris Anderson, “The end of theory: The data deluge makes the scientific method obsolete,” Wired, J (...)
  • 5 Tony Hey, Stewart Tansley and Kristin Tolle (eds.), The Fourth Paradigm Data-Intensive Scientific D (...)
  • 6 David Lazer et al., “Computational social science,” Science, 323(5915), 2009: 721–723.

2Several authors emphasize the scientific opportunities presented by these larger datasets. “Big data” proponents have challenged the necessity of theory, promoting science conceived as pattern recognition.4 Others specifically emphasize the size of datasets and their disruptive effects on science. Writing for a Microsoft Research collection, prominent scientists have suggested that a “fourth paradigm”5 of “data-intensive scientific discovery” is arising due to the ready availability of fine-grained environmental and social data from cheap, ubiquitous sensors and social media. This new paradigm complements the prior paradigms of experiment, theory, and simulation as sources of scientific knowledge. Similarly, “computational social science”6 combines large sets of born-digital social data (e.g. from email, or gathered by mobile phones) with social network analysis to study how people interact, move, and communicate. In the humanities, several projects such as the digital library HathiTrust aim to provide access to large collections of digitized texts.

  • 7 danah boyd and Kate Crawford, “Critical questions for big data: Provocations for a cultural, techno (...)
  • 8 Kate Crawford, “The raw and the cooked: The mythologies of big data,” DataEDGE 2013 (Berkeley, May (...)
  • 9 Annette N. Markham, “Undermining ‘data’: A critical examination of a core term in scientific inquir (...)
  • 10 Samuel Arbesman, “Stop hyping Big Data and start paying attention to ‘Long Data,’” WIRED, January 2 (...)
  • 11 Carolin Gerlitz and Bernhard Rieder, “Mining one percent of Twitter: Collections, baselines, sampli (...)
  • 12 Farida Vis, “A critical reflection on big data: Considering APIs, researchers and tools as data mak (...)
  • 13 danah boyd and Kate Crawford, “Critical questions for big data…”, op. cit.
  • 14 Axel Bruns, “Faster than the speed of print: Reconciling “big data” social media analysis and acade (...)

3By contrast, more critical writers have highlighted the mythology,7 hidden biases, and cult of personality (or “fundamentalism,”8) associated with the hype-ridden discourses surrounding big data. Some scholars emphasize how large datasets can come to drive research questions and methods—the reverse of the usual relationship—and thus to frame intellectual approaches in ways that exclude what might be learned from smaller datasets or from methods less driven by the exigencies of scale.9 An undue focus on immediate “snapshot” analysis is one result,10 but other critics point to the difficulties in sampling large datasets, e.g. from the Twitter API,11 the shortcomings of dealing with API requests,12 the ethical considerations around accessing personal datasets,13 and the various limitations of publishing research based on large data.14

4A healthy dialogue between proponents and critics of “big data” research helps to develop a reflexive point of view on these emerging scientific practices. Yet we see drawbacks in both perspectives. First, the phrase “big data” remains remarkably vague. The use of the term across several social worlds, from industry to science to marketing, results in multiple definitions. Similarly, the proliferation of “computational” and “digital” sub-fields, such as “computational social sciences” or “digital humanities,” seems to arise from a perceived need to engage with large data sets, yet succeeds mainly in highlighting the absence of clear definitions. Second, both proponents and critics of “big data” research focus mainly on the increase in size of datasets available, often neglecting other, equally important sociotechnical transformations of scientific practices.

  • 15 Ross Atkinson, “Library functions, scholarly communication, and the foundation of the digital libra (...)

5In this article, we argue that uncertainty about the provenance of data, rather than large scale, best characterizes the real targets of many “big data” discussions. We demonstrate that this uncertainty results from the loss of what Atkinson calls the “control zone.”15 A retrospective look at the introduction of larger datasets in weather forecasting and epidemiology will show that more data can be counter-productive and can destabilize existing research methods. Based on these historical examples, we look at two implications of “big data” for scholarship: transformations of traditional disciplinary structures, and changes in the infrastructure of scholarly communication.

The fracturing of the control zone: an alternative view on “big data”

6We propose that “big data” fractures the provenance chain that traditionally formed the basis for determining data integrity and, by extension, research integrity. Traditionally, provenance chains consisted of physically containable data (e.g., written by hand or stored on magnetic disks or tapes), shared by handing them off directly to colleagues. The physicality of the data, in their containers, and the direct transfer of responsibility to colleagues with stakes in and knowledge of the data and their meaning, provided an unassailable evidence chain. These highly controlled, regimented procedures were described by Atkinson with the notion of “control zone.”

  • 16 Carl J. Lagoze, Lost Identity: The Assimilation of Digital Libraries into the Web, PhD dissertation (...)

7In a seminal 1996 article, the late Ross Atkinson, then Associate University Librarian at Cornell University, described how the notion of the control zone lies at the foundation of the Library. Within this framework, the functioning of the library depends on a clearly defined boundary separating what lies within the library from what is outside. Inside this boundary, within the control zone, the library can lay claim to those resources that have been selected as part of the Collection, and assert curation, or stewardship, of those resources to ensure their consistent availability over the long term. The boundary of the traditional library was easy to define. It was the building that contained and protected the selected physical resources over which the library asserted control and curation responsibility. Correspondingly, from patrons’ point of view, the boundary marked what could be called a “trust zone,” an area in which they could presume that the integrity guarantees of the library existed. The transition from the physically contained library (bricks and mortar) to the networked digital library has fractured this formerly well-defined control zone.16

  • 17 Jean M. Converse, Survey Research in the United States: Roots and Emergence, 1890–1960, Berkeley, U (...)

8This fracturing of the library’s control zone offers a useful metaphor for thinking about “big data” in the sciences. Consider social science research. For the past 70 years, many of the most arresting new findings in the social sciences have been derived from expensively produced, but also widely shared datasets. These included survey research and government statistics such as the Roper Poll, the General Social Survey, the American National Election Studies, and the Current Population Survey. In the US, after the military needs of World War II opened social science to large-scale federal research funding for the first time,17 the government funded an extensive data infrastructure comprised of highly curated, metadata-rich social science archives such as the ICPSR (Inter-University Consortium for Political and Social Research). Like academic libraries, these institutions established control zones permitting data quality and provenance to be preserved, and sometimes enhanced, while making them widely available to the social science community through cooperative inter-institutional arrangements, abroad as well as in the USA.

  • 18 Gary King, The Social Science Data Revolution, Horizons in Political Science talk, Government Depar (...)

9Today, these archives still play a major role in quantitative social science research. However, the emergence and maturation of ubiquitous networked computing and the ever-growing data cloud have introduced a spectacular quantity and variety of new data sources as well, labeled by some as a “social science data revolution.”18 These include massive social media data sources such as Facebook, Twitter, and other online communities, which, when combined with more traditional data sources, provide the opportunity for studies at heretofore unimaginable scales and complexities.

  • 19 John R. Sauer, Bruce G. Peterjohn and William A. Link, “Observer differences in the North American (...)
  • 20 Kathleen Raven, “23andMe’s face in the crowdsourced health research industry gets bigger,” spoonful (...)

10Another example of the fracturing of the control zone comes from the observational sciences, such as astronomy, meteorology, and field ecology. In each of these areas there is a growing interest in crowd-sourced citizen science, which engages numerous volunteers as participants in large-scale scientific endeavors. Our particular experience is with the eBird project, which originated at the Cornell Laboratory of Ornithology. For over a decade, this highly successful citizen science project has collected observations from volunteer participants worldwide. By nature, citizen science must contend with the problem of highly variable observer expertise and experience. How can we trust data collected or aggregated by individuals who lack traditional scientific credentials such as academic degree, publication record, or institutional affiliation? For this reason, crowd-sourced citizen science makes the established scientific community uneasy,19 especially in fields where people’s lives are at stake, such as medicine.20

  • 21 Jean-Christophe Plantin, “The politics of mapping plaforms: Participatory radiation mapping after t (...)
  • 22 Geoffrey Boulton et al., Science as an Open Enterprise: Open Data for Open Science, Report 02/12, L (...)

11These examples illustrate how the traditional control zone for scientific data is breaking down. The reasons for this breakdown are not difficult to discern. For the researcher, an enticing array of data is now available from non-traditional sources, such as social media platforms. Data mashups, often mixing traditional and non-traditional sources, are becoming increasingly common, sometimes with clear and substantial benefits.21 Funders, the public, and scientists themselves are demanding better access to data, including fully “open data,” in part as a hedge against fraudulent claims based on cherry-picked, illegitimately manipulated, or nonexistent data.22 This demand grates against the operating principles of some existing data archives, whose organizational raison d’être depends on their ability to guarantee data quality and provenance—i.e., on maintaining the control zone. As a result, the traditional criteria for assessing data integrity are being challenged.

12While proponents of “big data” present its introduction as fundamentally additive—just more fuel for the fire of research—the arrival of new data sources has, in the past, frequently destabilized disciplines and research practices. In the following section, we present two examples to think through the implications of new data.

How “big data” destabilizes knowledge production

  • 23 Carl J. Lagoze, Lost Identity…, op. cit.

13“Uncontrolled” data will inevitably find a place in the research process. Just as libraries cannot return to the era of control over physical resources within bricks and mortar institutions,23 it would be unrealistic for any science to deny the reality and potential benefits of a sociotechnical knowledge infrastructure that mixes the formal with the informal. At the same time, in many cases adding data from uncontrolled and potentially unreliable sources may jeopardize historically successful modes of knowledge production. Examples from weather forecasting and epidemiology will illustrate some of these risks.

The case of weather forecasting

14In the history of weather forecasting, the arrival of new data sources, from radiosondes (weather balloons) to satellite radiometers, initially created confusion and disrupted existing knowledge processes. At the same time, meteorologists eagerly anticipated them, and they ultimately proved of enormous value. By the turn of the 20th century, telegraph networks were already used routinely for regional data exchange, especially in Europe. Around 1900, meteorologists called for a réseau mondial (worldwide network) that would permit the construction of quasi-global, near-real-time weather maps. Although the telegraph-based réseau mondial never materialized, by the 1920s worldwide meteorological data exchange was in fact possible, using a combination of telegraph, teletype, shortwave radio, and several other media. Yet most forecasters never tried to acquire most of these data, and actually discarded much of what they did receive. The reason: pre-computer forecasting techniques simply could not use it within the short time (hours) available for creating a useful forecast. Even climatologists, who did not face the time pressure of forecasting, could not (before computers) make use of much of the available data directly. Instead, they developed a system of distributed calculation. Weather stations were asked to pre-compute such figures as monthly average temperatures and report only those, rather than provide all the raw data to central collectors, for whom the calculating burden would have been overwhelming.

15Computer forecast models first became available in the mid-1950s. The pioneers of this all-important technique faced a different problem. Weather models divide the atmosphere into three-dimensional grids and compute transformations of mass, energy, and momentum among grid boxes on a short time-step (every few minutes). Every grid point must be supplied with a value at each time step; they cannot simply be zeroed out. Yet most instrument observations of weather are taken every few hours (not minutes), and very few weather stations or other instruments are located exactly at the grid points used by the models. So forecasters developed techniques for interpolating grid point values, in time and in space, from observations. In other words, they went from a pre-computer situation in which the large amounts of available data were never used, to a post-computer situation in which most data used were actually generated by calculations (interpolation), rather than measured directly.
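
As an illustration of the general idea (and only the general idea: operational objective-analysis and assimilation schemes are far more sophisticated), the following minimal Python sketch fills a regular grid from a few scattered “station” observations using simple inverse-distance weighting. The station locations, values, grid, and weighting exponent are all invented for the example.

```python
import numpy as np

# Hypothetical station observations: (x, y) positions and temperatures in °C,
# invented purely for illustration.
stations = np.array([[0.3, 1.2], [2.8, 0.4], [1.9, 3.1], [4.2, 2.5]])
temps = np.array([11.4, 14.2, 9.8, 12.7])

def idw_interpolate(grid_x, grid_y, pts, values, power=2.0, eps=1e-6):
    """Inverse-distance-weighted estimate at each grid point (a crude stand-in
    for the much more elaborate schemes used in real forecasting)."""
    grid = np.zeros((len(grid_y), len(grid_x)))
    for i, gy in enumerate(grid_y):
        for j, gx in enumerate(grid_x):
            d = np.hypot(pts[:, 0] - gx, pts[:, 1] - gy) + eps
            w = 1.0 / d**power
            grid[i, j] = np.sum(w * values) / np.sum(w)
    return grid

# A coarse regular grid standing in for a model's grid points.
grid_x = np.linspace(0, 5, 6)
grid_y = np.linspace(0, 4, 5)
analysis = idw_interpolate(grid_x, grid_y, stations, temps)
print(np.round(analysis, 1))
```

Every value printed for a grid point is thus the product of a calculation, not a direct measurement, which is the shift the paragraph above describes.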

16This problem later led to a far more complicated technique known as “four-dimensional (4-D) data assimilation.” 4-D data assimilation systems ingest observations as they arrive, using them to correct or steer an essentially model-based forecast process in which the previous forecast is used as a “first guess” for the current time period. Forecasters like to say that weather models “carry information” from data-rich areas, where there are dense observing networks, to data-poor areas which lack them. When a weather system moves from a data-rich to a data-poor area, the forecast made while that system was still in the data-rich area becomes the “first guess” for its development in the data-poor area—thus transferring the information acquired in the data-rich area forward in time to its later location.

17A surprising conclusion from over five decades of experience with this process is that more “real” data (i.e. observations) are not necessarily helpful. First, uneven global coverage is more of a problem than is insufficient data volume. Second, observations inevitably contain errors due to instrument bias, weather station siting, local topography, and dozens of other factors. Third, since the error characteristics of observations are not perfectly known, the best forecast centers now generate dozens of different data sets from the observations, in order to simulate the likely range of possible true states of the atmosphere. Then they run forecasts on each of these data sets. The idea is that since we cannot know exactly what the errors in the observational data actually are, the statistical properties of a few dozen forecasts, run on a few dozen variations of those data, are most likely to approximate what will really happen.
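
The logic of the previous two paragraphs, a prior “first guess” corrected by observations whose errors are only statistically known, and an ensemble of forecasts run on perturbed versions of those observations, can be caricatured in a few lines of code. The toy “model”, the error magnitude, the nudging weight, and the numbers themselves are assumptions made purely for illustration; they do not describe any operational assimilation system.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_forecast(state, steps=24):
    """A stand-in 'model': an arbitrary nonlinear tendency applied repeatedly."""
    for _ in range(steps):
        state = state + 0.05 * np.sin(state)
    return state

first_guess = np.array([12.0, 9.5, 14.1])   # previous forecast, used as the prior
observation = np.array([11.2, 10.1, 13.6])  # new, imperfect observations
obs_error_sd = 0.8                          # assumed observation error (illustrative)
n_members = 30

outcomes = []
for _ in range(n_members):
    # Perturb the observations according to their assumed error statistics.
    perturbed_obs = observation + rng.normal(0.0, obs_error_sd, size=observation.shape)
    # Nudge the first guess toward the perturbed observations (weight chosen arbitrarily).
    analysis = first_guess + 0.6 * (perturbed_obs - first_guess)
    outcomes.append(toy_forecast(analysis))

outcomes = np.array(outcomes)
print("ensemble mean:  ", np.round(outcomes.mean(axis=0), 2))
print("ensemble spread:", np.round(outcomes.std(axis=0), 2))
```

The spread of the ensemble, rather than any single run, is what stands in for the unknown errors in the observations.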

  • 24 Lennart Bengtsson and Jagadish Shukla, “Integration of space and in situ observations to study glob (...)

18One of the most striking ways forecasters have pictured this process is to describe 4-D data assimilation as “a unique and independent observing system”24 that can generate better, more detailed images of actual planetary weather than can instruments alone. In other words, simulation models (albeit guided by real observations) give you better data than do your instruments. Or, to say it even more provocatively, simulated data—appropriately constrained—are better than real data.

  • 25 Margaret Eileen Courain, Technology Reconciliation in the Remote-Sensing Era of United States Civil (...)

19To take another example, when satellite photographs of weather systems first became available in the 1960s, many meteorologists were elated, and expected a revolution. As it happened, though, interpreting the photographs proved much more difficult than most anticipated. Taken from a great distance, at strange angles, the photographs showed weather systems clearly but were hard to relate to existing standard measurements, such as temperature, pressure, and wind speed. The same thing happened when radar first entered meteorologists’ repertoire; these data promised revolution, but it took well over a decade to work out how to use radar in daily forecasting. In their first 15-20 years, both satellite photographs and radar found their major uses as imagery for television meteorologists—much more symbol than substance. Certainly they were not yet used as direct inputs to weather forecasts.25

  • 26 Paul N. Edwards, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming, (...)

20A last and even more striking episode involves the use of data from satellite radiometers, which measure the energy Earth radiates into space. These instruments, first flown around 1979, generate very large amounts of data continuously (unlike many other weather data sources, such as surface stations, which take readings on a periodic basis). The radiation they measure comes from the entirety of a huge column of air, typically about 70 km wide and thousands of kilometers deep. Because weather forecast models required values only at regular grid points, and because they already stretched the limits of computer power, in the 1980s and 1990s forecasters converted these continuous, volumetric satellite measurements into periodic point measurements—in effect treating the satellites as if they were radiosondes. This massive data reduction went on for two decades before computer power became sufficient to incorporate satellite data directly.26

21The example of weather forecasting shows that the quest for more data sometimes leads in strange and possibly counter-productive directions. Weather forecasters sought and eventually integrated many new data sources, especially satellites, leading to vast data volumes anyone would describe as “big data.” Yet the actual practice of forecasting still incorporates only some of these observational data, discarding the rest. The example also shows how the very meaning of “data” has shifted, over time, toward a definition that accepts processed, simulated, and/or analyzed data as ultimately more useful and more accurate than anything instruments alone can provide.

The case of epidemiology

  • 27 Roger Magoulas and Ben Lorica, “Big data: Technologies and techniques for large-scale data,” O’Reil (...)

22As we have noted, many conceptions of “big data” do not clearly define what they mean by “big.” Researchers in the social sciences, for instance, seem to use the phrase “big data” when they really mean “web data” or “social media data,” and the datasets they ultimately collect may not be large by any standard. “Bigness,” when discussed at all, is imagined only in terms of procedure—as a problem of “performance requirements,” for example.27 This contrasts with other fields, where increasing scale has proven epistemologically significant.

  • 28 David L. Sackett et al., “Evidence-based medicine: What it is and what it isn’t,” British Medical J (...)
  • 29 Ibid.: 71.

23Our example here is epidemiology. In an initially controversial movement termed “evidence-based medicine,”28 doctors of the 1980s sought to join forces with epidemiologists to better integrate systematic findings from associative population studies into the clinical practice of medicine. These “observational” datasets—meaning data from studies that did not involve random assignment to treatment groups and a control group—were large by the standards of the day. They held out the promise of solving large numbers of clinical puzzles at one swoop. The biggest concern at the time wasn’t that these methods wouldn’t work, but that they were “a dangerous innovation, perpetrated by the arrogant”29 that would render medical practice impersonal by replacing the subjective judgements of doctors with cold statistics.

  • 30 Chris Anderson, “The end of theory: The data deluge makes the scientific method obsolete,” Wired, J (...)
  • 31 Examples from S. Stanley Young, “Everything is dangerous: A controversy,” American Scientist, April (...)
  • 32 Jim Borgman, “Today’s random medical news,” The New York Times, April 27, 1997: E4.

24But the cold statistics did not work as expected. Recall that pundits such as Anderson30 imagine that pattern-finding algorithms will now quasi-automatically generate new truths from large datasets. In medicine of the 1990s, however, more data led to more falsehoods. Epidemiologists produced a plethora of new medical associations such as “Vitamin E lowers the risk of heart attack” and “a low-fat diet prevents cancer in women.”31 These new findings received widespread press coverage, only to be refuted when tested via expensive, randomized, controlled clinical trials. A 1997 editorial cartoon proclaimed the New England Journal of Medicine to be the “New England Journal of Panic-Inducing Gobbledygook” and depicted the research process as a series of spinning roulette wheels.32

  • 33 Stephen T. Ziliak and Deirdre N. McCloskey, The Cult of Statistical Significance: How the Standard (...)
  • 34 John P.A. Ioannidis, “Why most published research findings are false,” PLoS Medicine, 2(8), 2005: 6 (...)

25One cause of this state of affairs was that statistical techniques then in wide use were calibrated, sometimes implicitly, to smaller sample sizes. Researchers who were used to straining to detect any effect at all were suddenly able to easily detect small statistical associations of questionable clinical value, and they did so, and these were published. Trained to worry habitually about false negatives (that is, the danger of missing genuine, but small effects), researchers had little experience in worrying about false positives, which proliferated.33 The situation where a sensational new finding appears and then is quickly proven false was termed “the Proteus Effect”34 as researchers learned that the truth, like Proteus and the sea, is mutable.
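
The calibration point can be made concrete with a small simulation: an association far too weak to matter clinically is almost never detected at the sample sizes older methods were tuned to, but becomes routinely “statistically significant” once samples are large enough. The effect size, sample sizes, number of trials, and the use of a plain two-sample t-test below are all illustrative assumptions, not a reconstruction of any particular study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def fraction_significant(n_per_group, true_effect=0.03, trials=200, alpha=0.05):
    """Share of simulated studies declaring a negligible group difference 'significant'."""
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n_per_group)
        exposed = rng.normal(true_effect, 1.0, n_per_group)  # tiny, clinically uninteresting effect
        _, p_value = stats.ttest_ind(exposed, control)
        if p_value < alpha:
            hits += 1
    return hits / trials

for n in (100, 10_000, 100_000):
    print(f"n per group = {n:>7}: {fraction_significant(n):.2f} of studies reach p < 0.05")
```

With a hundred subjects per group the “significant” fraction hovers near the nominal 5% false-positive rate; with a hundred thousand, nearly every run crosses the threshold, even though the underlying difference remains negligible.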

  • 35 Gary Taubes, “Epidemiology faces its limits,” Science, 269(5221), July 14, 1995: 164.
  • 36 Ibid.: 167.

26More and larger epidemiological studies also revealed that not all associations are created equal. After detecting all of the “conspicuous” relationships between risk factors and disease, such as the fact that smoking can increase the risk of lung cancer by 3,000%,35 researchers were left to sift the data for the remaining, inconspicuous associations that proved subtle and weak by comparison. As these remaining factors might influence disease by 300%, 30%, or perhaps 3%, they proved very difficult to isolate. In the words of one researcher in the mid-1990s, “we’re pushing the edge of what can be done with epidemiology.”36 The first victories with “big data” promised future successes that proved impossible to deliver, and this imperiled the whole evidence-based approach.

  • 37 John P.A. Ioannidis, “Why most published research findings are false,” op. cit.

27At a more fundamental level, one new truth that emerged is that many of the older truths in epidemiology were also wrong. New data destabilized old knowledge, leading to recent assessments that the bulk of all published findings in biomedicine are incorrect.37 Just as computer power allowed larger datasets to be processed with existing methods, it also enabled the creation of new methods, invigorating a 200-year-old philosophical debate about the use of Bayesian inference in place of Frequentist inference (currently the dominant technique in applied statistics). Advances in computers and networks allowed researchers to use more data, but analyzing these data highlighted some of the flaws in Frequentist methods. Advances in computers and networks also gave researchers the computational power to make alternative, Bayesian models tractable.
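
The arithmetic behind such assessments can be sketched with a simple pre-study-odds calculation of the kind used in the Ioannidis article cited above (the specific numbers below are illustrative, and bias and multiple testing, which worsen the picture, are ignored). If R is the prior odds that a tested association is real, α the significance threshold, and 1 − β the statistical power, the probability that a nominally significant finding is actually true is

\[
\mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}.
\]

With α = 0.05, power of 0.8, and only one real association per hundred candidates tested (R ≈ 0.01), PPV ≈ 0.008 / 0.058 ≈ 0.14: roughly six out of seven “discoveries” would be false even before any bias enters the process.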

  • 38 John P.A. Ioannidis, “Finding large effect sizes: Good news or bad news?” The Psychologist, 21(8), (...)

28The story of Proteus in epidemiology did not lead to a backlash against evidence, and larger sample size is still seen as a positive. Yet the “big data” hopes from early evidence-based medicine have been sharply curtailed. Some medical statisticians now advocate that the discovery of an apparently important new finding in a research study should be taken first and foremost as evidence of an error, as any associational study should properly be expected to find nothing of value.38

  • 39 John P.A. Ioannidis, “Why most discovered true associations are inflated,” Epidemiology, 19(5), 200 (...)
  • 40 Ibid.

29The earlier crises in epidemiology eventually led from cries for “more evidence” to a wide-ranging conversation about the institutions of science and scientific publishing, which were implicated as drivers of false results. Medical statistician John Ioannidis recently characterized epidemiology as potentially supporting two kinds of researchers, the “aggressive discoverer” and the “thoughtful replicator.”39 The “aggressive discoverer,” he argued, is currently rewarded by the system of scientific publication, yet he produces findings that are extremely unlikely to be true. In Ioannidis’s call to action, in addition to more data, successful epidemiology demands Bayesian statistical methods, a new system of scientific publication that rewards replication and tracks null findings, and ultimately a re-evaluation of the role of data itself. In Ioannidis’s terms, the “aggressive discoverer” in medicine conceptualizes datasets as “private goldmines not to be shared” while the successful practice of research demands that databases be public40 so that suspect findings can be checked.

30As noted previously, in meteorology new data led to an increased reliance on elaborate modes of conditioning and assimilating data, and the field moved away from the analysis of needlessly raw and needlessly large datasets. In the context of epidemiology, new data led to a proliferation of contradictory findings. Although the datasets (presumably) had a known provenance, the discipline is now arguing that the researchers themselves are not trustworthy and that data need to be widely shared in order to allow all new claims to be extensively checked.

“Big data” as transformative for scholarship

31Our argument thus far has been that there is more to “big data” than just its “bigness.” In particular, we see the term as describing data sources and practices that are disruptive on the level of knowledge infrastructures and sociotechnical systems, rather than existing as only a scaling problem that requires a technical solution (e.g., more bandwidth, larger disk drives, parallel computing techniques, etc.). In the first section we related “big data” to breakdowns in the traditional control zone, where the provenance of data sources and collections is no longer guaranteed by established knowledge institutions, generating questions of trust and integrity that remain unresolved. In the second section we saw that new data sources and practices can destabilize disciplines in unpredictable ways. In this section, we explore ways in which “big data” are disrupting established scholarly communities, as well as the scholarly infrastructure for dissemination and publication.

Unexpected scientific collaborations

32Universities, funding agencies, publication venues, and learned societies all assume, and support, institutionally-defined “disciplines” as basic organizational units. Disciplines can be characterized as path dependencies, in the sense that they represent the continuing imprint of historical choices and accidents. As administrative units and long-lasting professional organizations, they shape not only the nature of research, but also the reward systems—especially promotion and tenure decisions—that drive scholarly careers. Yet close examination immediately reveals that most disciplines encompass a wide variety of methodologies, epistemologies, publication practices, and other norms. This raises the question of whether disciplines are really the most significant levels or structures in academic research communities.

  • 41 Leah A. Lievrouw, “The invisible college reconsidered: Bibliometrics and the development of scienti (...)
  • 42 Etienne Wenger, Communities of Practice: Learning, Meaning, and Identity, New York, Cambridge Unive (...)
  • 43 Jonathon N. Cummings and Sara Kiesler, “Collaborative research across disciplinary and organization (...)
  • 44 See the extensive resource list at <http://www.scienceofteamscience.org/scits-a-team-science-resour (...)

33Research on this question has called out a number of other levels and structures, among them invisible colleges,41 communities of practice,42 grant-funded projects,43 and team science,44 which better capture the characteristics of active research communities and work groups. These differ along a variety of dimensions: size, from the solitary bench chemist to the 1000+ person teams of high-energy physics; methodology, including experimentation, simulation, observation, interpretation, and clinical intervention; primary publishing mode, including journals, archival conference proceedings, and monographs; geographical proximity; intellectual and social diversity; and others.

34We argue that “big data” simultaneously emerges from and helps to generate and strengthen inter-, trans-, and post-disciplinary structures of scientific work, because of the manner in which generating and using big data conflicts with the cultural norms of disciplines. We see this disruption firstly on the scale of a single unit of scholarly practice, and secondly in contexts that bring together multiple scholarly cultures.

  • 45 Peter Glasner, “From community to ‘collaboratory’? The Human Genome Mapping Project and the changin (...)

35As an example of disruption within a single field, consider the Human Genome Project. Conceived in the mid-1980s, the Human Genome Project has successfully mapped the entire human genome and made the results available through a fully-open, shared, network-accessible database. The resulting data arguably represent one of the great achievements of a cooperative scientific enterprise. Creating and opening this primary data source to the scientific community had profound effects on fields such as microbiology.45 Laboratory-scale work groups, each producing and guarding data for its own use, became dysfunctional. Instead, progress now depended on a broader collaborative framework and more substantial cooperation. This was manifested through data sharing in online, readily accessible data repositories, which required releasing data from the control zone of the traditional laboratory into a wider, messier arena. Thus, in this well-known example, “big data” transformed scientific practice.

  • 46 Zack Kertcher, “Gaps and bridges in interdisciplinary knowledge integration,” in M. Anandarajan and (...)

36The second type of disruption occurs when “big data” research requires collaboration between scholars previously separated by established and historical disciplinary, field, or institutional barriers. Data-driven multidisciplinary collaborations can create affinities between historically separate epistemologies and methods, providing the context for radically new ways of thinking and doing things. As Kertcher has said, “[c]ombining knowledge from different domains is the essence of innovation, as it offers individuals and organizations a potent recipe to break away from cemented, path-dependent cognitive molds.”46

  • 47 Thomas A. Finholt and Jeremy P. Birnholtz, “If we build it, will they come? The cultural challenges (...)
  • 48 Paul N. Edwards et al., “Science friction: Data, metadata, and collaboration,” Social Studies of Sc (...)
  • 49 Karina Kervin, Thomas Finholt and Margaret Hedstrom, “Macro and micro pressures in data sharing,” i (...)
  • 50 Janet Vertesi and Paul Dourish, “The value of data: Considering the context of production in data e (...)
  • 51 Steven J. Jackson et al., “Collaborative rhythm: Temporal dissonance and alignment in collaborative (...)

37At the same time, “forced marriage” collaborations around large data sets often confront profound cultural differences. For example, a study of a “big data” project involving ecologists and computer scientists showed that the two groups differed in their tolerance of uncertainty and in their relations to power—the former being highly intolerant of uncertainty and more comfortable with hierarchies, the latter highly tolerant of uncertainty and comfortable with loose organizational structures.47 Yet another potential source of friction48 is differing attitudes towards openness and sharing49 within as well as between fields, often influenced more by particular research group norms than by the subject of study.50 Finally, a substantial source of friction comes from the often delay-inducing temporal patterns of team members and work groups, stemming from a wide range of factors including research objects, time zone differences, other work commitments, personal habits, bottlenecks in peer review, and so on.51 For example, a field ecologist’s work rhythm may be determined by natural events (e.g., the annual migration of a particular bird species), while that of a climate modeler may be shaped by available supercomputer time. In a closely collaborative project, these differing temporal patterns, along with others, will generate a specific “collaborative rhythm,” and may also create friction and conflict.

The alternative dissemination of scholarship

38Besides disrupting research communities and their work patterns, big data has disruptive effects on how scholarly work is recorded and disseminated. Since the origins of modern science in the 17th century, virtually all infrastructures for scholarly communication have embodied a “scholarly value chain” characterized by the following functions:

  • Registration establishes the precedence of claims and findings.
  • Certification by other scholars validates claims.
  • Awareness mechanisms keep researchers abreast of new work.
  • Archiving preserves the scholarly record over time.
  • Rewarding creates incentives that increase the quality and quantity of scholarly contributions.52

39For many decades, the registration function has been fulfilled principally by published articles. These were packaged in journals, books, and archival conference proceedings which (along with the related citation practices) served as awareness mechanisms, as well as providing archivable records. Peer review has been the principal certification mechanism. These mechanisms, highly (though not entirely) dependent on print technology, have all been severely challenged by Internet- and web-based publication and dissemination. In the print tradition, data were rarely published in raw form. Instead, publication presented researchers’ synthesis and analysis of data, for example in graphs or tables. Raw data remained effectively the intellectual property of their producers. Though peer reviewers occasionally questioned data analysis or asked to see the raw data from which some result was derived, they almost always simply assumed the integrity of the data and of the analysis. Laboratories and research groups thus functioned as control zones, within which data were produced, managed, archived, and eventually lost. In principle, they guaranteed data integrity—though some scientists famously took advantage of this principle to publish shaky or fraudulent claims.

  • 53 Christine L. Borgman, “The conundrum of sharing research data,” Journal of the American Society for (...)
  • 54 Bryan Lawrence et al., “Citation and peer review of data: Moving towards formal data publication,” (...)

40By making it possible to circulate raw data nearly as easily as analysis and synthesis, electronic media place new demands on the scholarly communication system.53 One example is the disruption of publication and citation practices. The publication and citation systems for articles, books, and conference papers are very well established. Today, however, datasets are increasingly recognized as important, publishable scholarly work products. What “publication” means with respect to data remains very poorly defined, encompassing everything from barely documented ftp sites, to obligatory posting of datasets along with the journal articles built from them, to formal stand-alone publication. Data citation schemes remain largely experimental, with many competing versions.54

  • 55 Mark A. Parsons et al., “A conceptual framework for managing very diverse data…”, op. cit.

41Data publication, as an emerging norm, is also challenging peer review, the principal certification mechanism of the traditional scholarly communication system. But mechanisms for peer review of data still remain highly problematic.55 For example, the technological requirements for article review are simple: display text, images, and graphs on screen, or print them on paper. In contrast, evaluating data (whose forms range from simple spreadsheets to petabytes of binary information) may require elaborate technical scaffolding, access to software, and computational resources. Despite these complications, there is general consensus in the scholarly, publishing, and funding communities that in order to re-establish the currently broken scholarly value chain, data must be integrated into the full cycle of scholarly communication.

Conclusion

42In this article, we aimed to define “big data” through associated transformations in the nature and level of control over the data that underlie research, rather than as a simple reflection of scale or scope. We focused on how data resources and data publication stress traditional knowledge infrastructures, especially the disciplines, the role of methods, scientific collaborations, and the publication infrastructure. While large and potentially exhaustive datasets are often presented as disruptive, we showed that the transformation of the control zone can also destabilize modes of knowledge production: the case studies of weather forecasting and epidemiology showed how the availability of larger datasets could be counter-productive or destabilizing. This fracturing of the control zone can cause other disruptions as well, such as upsetting the traditional separations between scientific disciplines and unsettling the peer-reviewed journal as the benchmark for scholarly dissemination.

  • 56 Lisa Gitelman (ed.), “Raw Data” Is an Oxymoron, Cambridge (MA), MIT Press, 2013.
  • 57 Barbara R. Jasny et al., “Data replication & reproducibility. Again, and again, and again … Introdu (...)
  • 58 Jennifer C. Molloy, “The open knowledge foundation: Open data means better science,” PLoS Biology, (...)
  • 59 Mark Reith, Clint Carr and Gregg Gunsch, “An examination of digital forensic models,” International (...)

43If “big data” means “big uncertainty” about the provenance of data, a major challenge for scientists seeking to use new data sources remains the question of validity, integrity, and quality. Specifically, how is it possible to assess the quality of datasets coming from outside traditional control zones in science? Inevitably, questions such as these reduce to more fundamental debates in science about positivism, constructionism and the like. What is data quality and validity if “raw data is an oxymoron,”56 implying that data validity is measured by the level of community agreement rather than by its “correctness” as a transcription of some “underlying reality”? A possible amelioration of this big data problem lies in developing new tools and techniques that provide the basis for community-agreed-upon trust in new data sources. One path to achieving this is the development of mechanisms to allow reproducibility and replicability of scholarly work,57 and to acquire trust in data by making them open and reusable,58 thereby encouraging community quality determination. Another possible approach is retrospective determination of data integrity: recovering traces of origin, provenance, and the like from a digital artifact itself, perhaps drawing on the practice of digital forensics,59 a technique increasingly popular in the intelligence and legal communities. One example is work that we have done in the context of citizen science to infer observers’ expertise, and thus the quality of their contributed data, from trace artifacts embedded in the data. None of these techniques bring us back to the “good old days” when well-defined control zones provided a physical trust environment for data. Nostalgia will not move us forward and, hopefully, the more nuanced approach outlined here will provide us with the advantages of the new networked world without throwing out the fundamental supports that science depends upon.

Bibliographie

Arbesman, Samuel, “Stop hyping Big Data and start paying attention to ‘Long Data,’” WIRED, January 29, 2013, available at <https://www.wired.com/2013/01/forget-big-data-think-long-data/>.

Anderson, Chris, “The end of theory: The data deluge makes the scientific method obsolete,” Wired, June 23, 2008, available at <http://www.wired.com/science/discoveries/magazine/16-07/pb_theory>.

Atkinson, Ross, “Library functions, scholarly communication, and the foundation of the digital library: Laying claim to the control zone,” The Library Quarterly, 66(3), 1996: 239–265.

Bengtsson, Lennart and Shukla, Jagadish, “Integration of space and in situ observations to study global climate change,” Bulletin of the American Meteorological Society, 69(10), 1988: 1130–1143.

Borgman, Christine L., “The conundrum of sharing research data,” Journal of the American Society for Information Science and Technology, 63(6), 2011: 1–40.

Borgman, Jim, “Today’s random medical news,” The New York Times, April 27, 1997: E4.

Boulton, Geoffrey et al., Science as an Open Enterprise: Open Data for Open Science, Report 02/12, London, The Royal Society, June, 2012.

boyd, danah and Crawford, Kate, “Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon,” Information, Communication & Society, 15(5), 2012: 662–679.

Burke, Peter, A Social History of Knowledge II: From the Encyclopaedia to Wikipedia, Cambridge, Polity Press, 2011.

Bruns, Axel, “Faster than the speed of print: Reconciling “big data” social media analysis and academic scholarship,” First Monday, 18(10), 2013, available at <http://firstmonday.org/ojs/index.php/fm/article/view/4879>.

Converse, Jean M., Survey Research in the United States: Roots and Emergence, 1890–1960, Berkeley, University of California Press, 1987.

Courain, Margaret Eileen, Technology Reconciliation in the Remote-Sensing Era of United States Civilian Weather Forecasting: 1957–1987, PhD dissertation, Rutgers University, 1991, 964 p.

Crane, Diana, “Social structure in a group of scientists: A test of the ‘invisible college’ hypothesis,” American Sociological Review, 34(3), 1969: 335–352.

Crawford, Kate, “The raw and the cooked: The mythologies of big data,” DataEDGE 2013 (Berkeley, May 30–31, 2013), May 30, 2013, video available at <https://www.ischool.berkeley.edu/file/7386>.

Cummings, Jonathon N. and Kiesler, Sara, “Collaborative research across disciplinary and organizational boundaries,” Social Studies of Science, 35(5), 2005: 703–722.

Cummings, Jonathon N. and Pletcher, Carol, “Why project networks beat project teams,” MIT Sloan Management Review, Spring 2011, available at <http://sloanreview.mit.edu/article/why-project-networks-beat-project-teams/>.

de Solla Price, Derek J. and Beaver, Donald B., “Collaboration in an invisible college,” American Psychologist, 21(11), 1966: 1011–1018.

Edwards, Paul N., A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming, Cambridge (MA), MIT Press, 2010.

Edwards, Paul N. et al., “Science friction: Data, metadata, and collaboration,” Social Studies of Science, 41(5), 2011: 667–690.

Finholt, Thomas A. and Birnholtz, Jeremy P., “If we build it, will they come? The cultural challenges of cyberinfrastructure development,” in W.S. Bainbridge and M. Roco (eds.), Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies in Society, Dordrecht, Springer, 2006: 89–101.

Gerlitz, Carolin and Rieder, Bernhard, “Mining one percent of Twitter: Collections, baselines, sampling,” M/C Journal, 16(2), 2013, available at <http://journal.media-culture.org.au/index.php/mcjournal/article/view/620>.

Gitelman, Lisa (ed.), “Raw Data” Is an Oxymoron, Cambridge (MA), MIT Press, 2013.

Glasner, Peter, “From community to ‘collaboratory’? The Human Genome Mapping Project and the changing culture of science,” Science and Public Policy, 23(2), 1996: 109–116.

Halevi, Gali and Moed, Henk F., “The evolution of big data as a research and scientific topic: Overview of the literature,” Research Trends, 30, September, 2012, available at <https://www.researchtrends.com/issue-30-september-2012/the-evolution-of-big-data-as-a-research-and-scientific-topic-overview-of-the-literature/>.

Hey, Tony, Tansley, Stewart and Tolle, Kristin (eds.), The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond (WA), Microsoft Research, 2009.

Hilgartner, Stephen, “Access to data and intellectual property: Scientific exchange in genome research,” in Intellectual Property Rights and Research Tools in Molecular Biology (Summary of a Workshop Held at the National Academy of Sciences, February 15–16, 1996), Washington (DC), National Academy of Sciences, 1997: 28–39, available at <https://www.nap.edu/read/5758/chapter/5>.

Ioannidis, John P.A., “Why most published research findings are false,” PLoS Medicine, 2(8), 2005: 696–701, available at <http://robotics.cs.tamu.edu/RSS2015NegativeResults/pmed.0020124.pdf>.

Ioannidis, John P.A., “Finding large effect sizes: Good news or bad news?” The Psychologist, 21(8), 2008: 690–691.

Ioannidis, John P.A., “Why most discovered true associations are inflated,” Epidemiology, 19(5), 2008: 640–648.

Jackson, Steven J. et al., “Collaborative rhythm: Temporal dissonance and alignment in collaborative scientific work,” Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work—CSCW 2011 (Hangzhou, China, March 19–23, 2011), New York, ACM Press, 2011: 245-254.

Jasny, Barbara R. et al., “Data replication & reproducibility. Again, and again, and again … Introduction,” Science, 334(6060), 2011: 1225.

Kertcher, Zack, “Gaps and bridges in interdisciplinary knowledge integration,” in M. Anandarajan and A. Anandarajan (eds.), e-Research Collaboration: Theory, Techniques, and Challenges, Heidelberg/New York, Springer, 2010: 49–64.

Kervin, Karina, Finholt, Thomas and Hedstrom, Margaret, “Macro and micro pressures in data sharing,” in C. Zhang et al. (eds.), Proceedings of the 2012 IEEE 13th International Conference on Information Reuse and Integration—IRI 2012 (Las Vegas, August 8–10, 2012), Piscataway, IEEE SMC Society, 2012: 525–532.

King, Gary, The Social Science Data Revolution, Horizons in Political Science talk, Government Department, Harvard University, March 30, 2011, available at <http://gking.harvard.edu/files/gking/files/evbase-horizonsp.pdf>.

King, Gary, “Ensuring the data-rich future of the social sciences,” Science, 331(6018), 2011: 719–721.

Knorr Cetina, Karin, Epistemic Cultures: How the Sciences Make Knowledge, Cambridge (MA), Harvard University Press, 1999.

Lagoze, Carl J., Lost Identity: The Assimilation of Digital Libraries into the Web, PhD dissertation, Cornell University, Ithaca (NY), February 2010, available at <https://ecommons.cornell.edu/handle/1813/14813?show=full>.

Lave, Jean and Wenger, Etienne, Situated Learning: Legitimate Peripheral Participation, New York, Cambridge University Press, 1991.

Lawrence, Bryan et al., “Citation and peer review of data: Moving towards formal data publication,” International Journal of Digital Curation, 6(2), 2011: 4–37.

Lazer, David et al., “Computational social science,” Science, 323(5915), 2009: 721–723.

Lievrouw, Leah A., “The invisible college reconsidered: Bibliometrics and the development of scientific communication theory,” Communication Research, 16(5), 1989: 615–628.

Magoulas, Roger and Lorica, Ben, “Big data: Technologies and techniques for large-scale data,” O’Reilly, March 23, 2009, available at <http://www.oreilly.com/data/free/files/release2-issue11.pdf>.

Mahrt, Merja and Scharkow, Michael, “The value of big data in digital media research,” Journal of Broadcasting & Electronic Media, 57(1), 2012: 20–33.

Markham, Annette N., “Undermining ‘data’: A critical examination of a core term in scientific inquiry,” First Monday, 18(10), 2013, available at <http://firstmonday.org/ojs/index.php/fm/article/view/4868>.

Molloy, Jennifer C., “The open knowledge foundation: Open data means better science,” PLoS Biology, 9(12), 2011: e1001195.

National Research Council, The Future of Scientific Knowledge Discovery in Open Networked Environments: Summary of a Workshop, P.F. Uhlir (reporter), Washington (DC), National Academies Press, 2012.

Parsons, Mark A., Duerr, Ruth E. and Minster, Jean-Bernard, “Data citation and peer review,” Eos, Transactions American Geophysical Union, 91(34), 2010: 297-298.

Parsons, Mark A. et al., “A conceptual framework for managing very diverse data for complex, interdisciplinary science,” Journal of Information Science, 37(6), 2011: 555–569.

Pepe, Alberto et al., “From artifacts to aggregations: Modeling scientific life cycles on the semantic Web,” Journal of the American Society for Information Science and Technology, 61(3), 2010: 567–582.

Plantin, Jean-Christophe, “The politics of mapping platforms: Participatory radiation mapping after the Fukushima Daiichi disaster,” Media, Culture & Society, 37(6), 2015: 904–921.

Raven, Kathleen, “23andMe’s face in the crowdsourced health research industry gets bigger,” spoonful of medicine: a blog from “Nature Medicine,” July 12, 2012, available at <http://blogs.nature.com/spoonful/2012/07/23andmes-face-in-the-crowdsourced-health-research-industry-gets-bigger.html>.

Reith, Mark, Carr, Clint and Gunsch, Gregg, “An examination of digital forensic models,” International Journal of Digital Evidence, 1(3), 2002: 12 p., available at <https://utica.edu/academic/institutes/ecii/publications/articles/A04A40DC-A6F6-F2C1-98F94F16AF57232D.pdf>.

Roosendaal, Hans E. and Geurts, Peter A.Th.M., “Forces and functions in scientific communication: An analysis of their interplay,” in M. Karttunen, K. Holmlund and E.R. Hilf (eds.), Proceedings of the Conference on Cooperative Research Information Systems in Physics—CRISP 97 (Oldenburg, Germany, August 31–September 4, 1997), available at <http://www.physik.uni-oldenburg.de/conferences/crisp97/roosendaal.html>.

Sackett, David L. et al., “Evidence-based medicine: What it is and what it isn’t,” British Medical Journal, 312(7023), 1996: 71–72.

Sauer, John R., Peterjohn, Bruce G. and Link, William A., “Observer differences in the North American breeding bird survey,” The Auk, 111(1), 1994: 50–62.

Strasser, Bruno J., “Data-driven sciences: From wonder cabinets to electronic databases,” Studies in History and Philosophy of Biological and Biomedical Sciences, 43(1), 2012: 85–87.

Taubes, Gary, “Epidemiology faces its limits,” Science, 269(5221), July 14, 1995: 164-169.

Vis, Farida, “A critical reflection on big data: Considering APIs, researchers and tools as data makers,” First Monday, 18(10), 2013, available at <http://firstmonday.org/ojs/index.php/fm/article/view/4878/3755>.

Vertesi, Janet and Dourish, Paul, “The value of data: Considering the context of production in data economies,” in Proceedings of the ACM Conference on Computer-Supported Cooperative Work 2011—CSCW 2011 (Hangzhou, China, March 19–23, 2011), New York, ACM Press, 2011: 533–542.

Wagner, Caroline S., The New Invisible College: Science for Development, Washington (DC), Brookings Institution Press, 2008.

Wallis, Jillian C. and Borgman, Christine L., “Who is responsible for data? An exploratory study of data authorship, ownership, and responsibility,” in Proceedings of the American Society for Information Science and Technology, 48(1), 2011: 10 p., available at <http://onlinelibrary.wiley.com/doi/10.1002/meet.2011.14504801188/epdf>.

Wenger, Etienne, Communities of Practice: Learning, Meaning, and Identity, New York, Cambridge University Press, 1998.

Wynholds, Laura A. et al., “Data, data use, and scientific inquiry,” in Proceedings of the 12th ACM/IEEE-CS joint conference on Digital Libraries—JCDL’12 (June 10–14, 2012), New York, ACM Press, 2012: 19–22.

Young, S. Stanley, “Everything is dangerous: A controversy,” American Scientist, April 22, 2009, video available at <http://www.americanscientist.org/science/pub/everything-is-dangerous-a-controversy>.

Ziliak, Stephen T. and McCloskey, Deirdre N., The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives, Ann Arbor, University of Michigan Press, 2008.

Notes

1 Bruno J. Strasser, “Data-driven sciences: From wonder cabinets to electronic databases,” Studies in History and Philosophy of Biological and Biomedical Sciences, 43(1), 2012: 85–87; Peter Burke, A Social History of Knowledge II: From the Encyclopaedia to Wikipedia, Cambridge, Polity Press, 2011.

2 Gali Halevi and Henk F. Moed, “The evolution of big data as a research and scientific topic: Overview of the literature,” Research Trends, 30, September, 2012, available at <https://www.researchtrends.com/issue-30-september-2012/the-evolution-of-big-data-as-a-research-and-scientific-topic-overview-of-the-literature/>.

3 Karin Knorr Cetina, Epistemic Cultures: How the Sciences Make Knowledge, Cambridge (MA), Harvard University Press, 1999.

4 Chris Anderson, “The end of theory: The data deluge makes the scientific method obsolete,” Wired, June 23, 2008, available at <http://www.wired.com/science/discoveries/magazine/16-07/pb_theory>.

5 Tony Hey, Stewart Tansley and Kristin Tolle (eds.), The Fourth Paradigm: Data-Intensive Scientific Discovery, Redmond (WA), Microsoft Research, 2009.

6 David Lazer et al., “Computational social science,” Science, 323(5915), 2009: 721–723.

7 danah boyd and Kate Crawford, “Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon,” Information, Communication & Society, 15(5), 2012: 662–679.

8 Kate Crawford, “The raw and the cooked: The mythologies of big data,” DataEDGE 2013 (Berkeley, May 30–31, 2013), May 30, 2013, video available at <https://www.ischool.berkeley.edu/file/7386>.

9 Annette N. Markham, “Undermining ‘data’: A critical examination of a core term in scientific inquiry,” First Monday, 18(10), 2013, available at <http://firstmonday.org/ojs/index.php/fm/article/view/4868>; Merja Mahrt and Michael Scharkow, “The value of big data in digital media research,” Journal of Broadcasting & Electronic Media, 57(1), 2012: 20–33.

10 Samuel Arbesman, “Stop hyping Big Data and start paying attention to ‘Long Data,’” WIRED, January 29, 2013, available at <https://www.wired.com/2013/01/forget-big-data-think-long-data/>.

11 Carolin Gerlitz and Bernhard Rieder, “Mining one percent of Twitter: Collections, baselines, sampling,” M/C Journal, 16(2), 2013, available at <http://journal.media-culture.org.au/index.php/mcjournal/article/view/620>.

12 Farida Vis, “A critical reflection on big data: Considering APIs, researchers and tools as data makers,” First Monday, 18(10), 2013, available at <http://firstmonday.org/ojs/index.php/fm/article/view/4878/3755>.

13 danah boyd and Kate Crawford, “Critical questions for big data…”, op. cit.

14 Axel Bruns, “Faster than the speed of print: Reconciling ‘big data’ social media analysis and academic scholarship,” First Monday, 18(10), 2013, available at <http://firstmonday.org/ojs/index.php/fm/article/view/4879>.

15 Ross Atkinson, “Library functions, scholarly communication, and the foundation of the digital library: Laying claim to the control zone,” The Library Quarterly, 66(3), 1996: 239–265.

16 Carl J. Lagoze, Lost Identity: The Assimilation of Digital Libraries into the Web, PhD dissertation, Cornell University, Ithaca, February, 2010, available at <https://ecommons.cornell.edu/handle/1813/14813?show=full>.

17 Jean M. Converse, Survey Research in the United States: Roots and Emergence, 1890–1960, Berkeley, University of California Press, 1987: 242.

18 Gary King, The Social Science Data Revolution, Horizons in Political Science talk, Government Department, Harvard University, March 30, 2011, available at <http://gking.harvard.edu/files/gking/files/evbase-horizonsp.pdf>; Id., “Ensuring the data-rich future of the social sciences,” Science, 331(6018), 2011: 719–721.

19 John R. Sauer, Bruce G. Peterjohn and William A. Link, “Observer differences in the North American breeding bird survey,” The Auk, 111(1), 1994: 50–62.

20 Kathleen Raven, “23andMe’s face in the crowdsourced health research industry gets bigger,” spoonful of medicine: a blog from “Nature Medicine,” July 12, 2012, available at <http://blogs.nature.com/spoonful/2012/07/23andmes-face-in-the-crowdsourced-health-research-industry-gets-bigger.html>.

21 Jean-Christophe Plantin, “The politics of mapping platforms: Participatory radiation mapping after the Fukushima Daiichi disaster,” Media, Culture & Society, 37(6), 2015: 904–921.

22 Geoffrey Boulton et al., Science as an Open Enterprise: Open Data for Open Science, Report 02/12, London, The Royal Society, June, 2012; National Research Council, The Future of Scientific Knowledge Discovery in Open Networked Environments: Summary of a Workshop, P.F. Uhlir (reporter), Washington (DC), National Academies Press, 2012.

23 Carl J. Lagoze, Lost Identity…, op. cit.

24 Lennart Bengtsson and Jagadish Shukla, “Integration of space and in situ observations to study global climate change,” Bulletin of the American Meteorological Society, 69(10), 1988: 1134.

25 Margaret Eileen Courain, Technology Reconciliation in the Remote-Sensing Era of United States Civilian Weather Forecasting: 1957–1987, PhD dissertation, Rutgers University, 1991, 964 p.

26 Paul N. Edwards, A Vast Machine: Computer Models, Climate Data, and the Politics of Global Warming, Cambridge (MA), MIT Press, 2010: ch. 15.

27 Roger Magoulas and Ben Lorica, “Big data: Technologies and techniques for large-scale data,” O’Reilly, March 23, 2009: 2, available at <http://www.oreilly.com/data/free/files/release2-issue11.pdf>.

28 David L. Sackett et al., “Evidence-based medicine: What it is and what it isn’t,” British Medical Journal, 312(7023), 1996: 71–72.

29 Ibid.: 71.

30 Chris Anderson, “The end of theory: The data deluge makes the scientific method obsolete,” Wired, June 23, 2008, available at <http://www.wired.com/science/discoveries/magazine/16-07/pb_theory>.

31 Examples from S. Stanley Young, “Everything is dangerous: A controversy,” American Scientist, April 22, 2009, video available at <http://www.americanscientist.org/science/pub/everything-is-dangerous-a-controversy>.

32 Jim Borgman, “Today’s random medical news,” The New York Times, April 27, 1997: E4.

33 Stephen T. Ziliak and Deirdre N. McCloskey, The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives, Ann Arbor, University of Michigan Press, 2008.

34 John P.A. Ioannidis, “Why most published research findings are false,” PLoS Medicine, 2(8), 2005: 696–701 (in particular 698), available at <http://robotics.cs.tamu.edu/RSS2015NegativeResults/pmed.0020124.pdf>.

35 Gary Taubes, “Epidemiology faces its limits,” Science, 269(5221), July 14, 1995: 164.

36 Ibid.: 167.

37 John P.A. Ioannidis, “Why most published research findings are false,” op. cit.

38 John P.A. Ioannidis, “Finding large effect sizes: Good news or bad news?” The Psychologist, 21(8), 2008: 690–691.

39 John P.A. Ioannidis, “Why most discovered true associations are inflated,” Epidemiology, 19(5), 2008: 646.

40 Ibid.

41 Leah A. Lievrouw, “The invisible college reconsidered: Bibliometrics and the development of scientific communication theory,” Communication Research, 16(5), 1989: 615–628; Diana Crane, “Social structure in a group of scientists: A test of the ‘invisible college’ hypothesis,” American Sociological Review, 34(3), 1969: 335–352; Derek J. de Solla Price and Donald B. Beaver, “Collaboration in an invisible college,” American Psychologist, 21(11), 1966: 1011–1018; Caroline S. Wagner, The New Invisible College: Science for Development, Washington (DC), Brookings Institution Press, 2008.

42 Etienne Wenger, Communities of Practice: Learning, Meaning, and Identity, New York, Cambridge University Press, 1998; Jean Lave and Etienne Wenger, Situated Learning: Legitimate Peripheral Participation, New York, Cambridge University Press, 1991.

43 Jonathon N. Cummings and Sara Kiesler, “Collaborative research across disciplinary and organizational boundaries,” Social Studies of Science, 35(5), 2005: 703–722; Jonathon N. Cummings and Carol Pletcher, “Why project networks beat project teams,” MIT Sloan Management Review, Spring 2011, available at <http://sloanreview.mit.edu/article/why-project-networks-beat-project-teams/>.

44 See the extensive resource list at <http://www.scienceofteamscience.org/scits-a-team-science-resources>.

45 Peter Glasner, “From community to ‘collaboratory’? The Human Genome Mapping Project and the changing culture of science,” Science and Public Policy, 23(2), 1996: 109–116; Stephen Hilgartner, “Access to data and intellectual property: Scientific exchange in genome research,” in Intellectual Property Rights and Research Tools in Molecular Biology (Summary of a Workshop Held at the National Academy of Sciences, February 15–16, 1996), Washington (DC), National Academy of Sciences, 1997: 28–39, available at <https://www.nap.edu/read/5758/chapter/5>.

46 Zack Kertcher, “Gaps and bridges in interdisciplinary knowledge integration,” in M. Anandarajan and A. Anandarajan (eds.), e-Research Collaboration: Theory, Techniques, and Challenges, Heidelberg/New York, Springer, 2010: 49–64.

47 Thomas A. Finholt and Jeremy P. Birnholtz, “If we build it, will they come? The cultural challenges of cyberinfrastructure development,” in W.S. Bainbridge and M. Roco (eds.), Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies in Society, Dordrecht, Springer, 2006: 89–101.

48 Paul N. Edwards et al., “Science friction: Data, metadata, and collaboration,” Social Studies of Science, 41(5), 2011: 667–690.

49 Karina Kervin, Thomas Finholt and Margaret Hedstrom, “Macro and micro pressures in data sharing,” in C. Zhang et al. (eds.), Proceedings of the 2012 IEEE 13th International Conference on Information Reuse and Integration—IRI 2012 (Las Vegas, August 8–10, 2012), Piscataway, IEEE SMC Society, 2012: 525–532.

50 Janet Vertesi and Paul Dourish, “The value of data: Considering the context of production in data economies,” in Proceedings of the ACM Conference on Computer-Supported Cooperative Work 2011—CSCW 2011 (Hangzhou, China, March 19–23, 2011), New York, ACM Press, 2011: 533–542.

51 Steven J. Jackson et al., “Collaborative rhythm: Temporal dissonance and alignment in collaborative scientific work,” Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work—CSCW 2011 (Hangzhou, China, March 19–23, 2011), New York, ACM Press, 2011: 245-254.

52 Hans E. Roosendaal and Peter A.Th.M. Geurts, “Forces and functions in scientific communication: An analysis of their interplay,” in M. Karttunen, K. Holmlund and E.R. Hilf (eds.), Proceedings of the Conference on Cooperative Research Information Systems in Physics—CRISP 97 (Oldenburg, Germany, August 31–September 4, 1997), available at <http://www.physik.uni-oldenburg.de/conferences/crisp97/roosendaal.html>.

53 Christine L. Borgman, “The conundrum of sharing research data,” Journal of the American Society for Information Science and Technology, 63(6), 2011: 1–40; Alberto Pepe et al., “From artifacts to aggregations: Modeling scientific life cycles on the semantic Web,” Journal of the American Society for Information Science and Technology, 61(3), 2010: 567–582; Jillian C. Wallis and Christine L. Borgman, “Who is responsible for data? An exploratory study of data authorship, ownership, and responsibility,” in Proceedings of the American Society for Information Science and Technology, 48(1), 2011: 10 p., available at <http://onlinelibrary.wiley.com/doi/10.1002/meet.2011.14504801188/epdf>; Laura A. Wynholds et al., “Data, data use, and scientific inquiry,” in Proceedings of the 12th ACM/IEEE-CS joint conference on Digital Libraries—JCDL’12 (June 10–14, 2012), New York, ACM Press, 2012: 19–22.

54 Bryan Lawrence et al., “Citation and peer review of data: Moving towards formal data publication,” International Journal of Digital Curation, 6(2), 2011: 4–37; Mark A. Parsons et al., “A conceptual framework for managing very diverse data for complex, interdisciplinary science,” Journal of Information Science, 37(6), 2011: 555–569.

55 Mark A. Parsons et al., “A conceptual framework for managing very diverse data…”, op. cit.

56 Lisa Gitelman (ed.), “Raw Data” Is an Oxymoron, Cambridge (MA), MIT Press, 2013.

57 Barbara R. Jasny et al., “Data replication & reproducibility. Again, and again, and again … Introduction,” Science, 334(6060), 2011: 1225.

58 Jennifer C. Molloy, “The open knowledge foundation: Open data means better science,” PLoS Biology, 9(12), 2011: e1001195.

59 Mark Reith, Clint Carr and Gregg Gunsch, “An examination of digital forensic models,” International Journal of Digital Evidence, 1(3), 2002: 12 p., available at <https://utica.edu/academic/institutes/ecii/publications/articles/A04A40DC-A6F6-F2C1-98F94F16AF57232D.pdf>.

Authors

Jean-Christophe Plantin is Assistant Professor in the Department of Media and Communications at the London School of Economics and Political Science. He investigates the civic use of mapping platforms, the collaborative challenges of big data science, and the evolution of knowledge infrastructures. His research has been funded by the Alfred P. Sloan Foundation, the Gordon and Betty Moore Foundation, the European Regional Development Fund, and the University of Michigan MCubed Program. His work has been published in New Media & Society, Media, Culture & Society, and the International Journal of Communication.

Carl Lagoze is Associate Professor of Information at the School of Information, University of Michigan. His research interests concern knowledge infrastructure, digital libraries, eScience, and data curation.

Paul N. Edwards is Professor in the School of Information (SI) and the Department of History at the University of Michigan. His research explores the history, politics, and cultural aspects of computers, information infrastructures, and global climate science.

Christian Sandvig is Professor at the University of Michigan, where he teaches in the School of Information and the Department of Communication Studies. His research examines the consequences of algorithmic systems that curate and organize culture.

The text and other elements (illustrations, imported annex files) are under the OpenEdition Books License, unless otherwise indicated.
