
White Paper — Open Science in a Digital Republic — Strategic Guide

Scientific and Technical Information Department - CNRS

Freedom to analyse scientific results

What issues does this raise for the work of science?


TDM brings science and scientific research into a new age. TDM and Open Science have multiple advantages:

  • TDM clearly enables scientists to broaden their approach via the automated analysis of scientific literature and all kinds of data;

  • it also makes it easier to organise and structure research projects in France and to rationalise the decision-making processes for setting priorities and allocating budgets;

  • it creates new avenues for the exploitation of research output;

  • TDM can also contribute to public decision-making.

Analysing scientific publications and data

TDM enthusiasts are especially interested in scientific literature: articles from journals, monographs, theses and reports. All this information is expressed in natural language and, though rather unstructured compared with quantitative data, offers vast and varied reserves. Each scientific article is the result of a logical research process in the framework of a given project with a declared goal, and has a place in a scientific context specified and reflected by the article’s references. When TDM is used to analyse large corpora of scientific literature covering a variety of periods, methodologies and equipment, either for a given issue or limited to a specific field of study, it can yield products and services of great benefit to researchers:

  • diachronic longitudinal studies on the evolution of scientific practices, the emergence of new concepts, the merging or separation of scientific fields, new and evolving relationships between different disciplines;

  • the quantity of information produced by researchers (2.5 million articles a year) shows the importance of using knowledge extraction technologies to help researchers access the contents of their colleagues’ articles. Sequential reading is likely to be replaced, at least partly, by browsing through knowledge sets resulting from the initial corpora. Text- and data-mining activities thus have direct consequences for the way researchers access scientific knowledge;

  • improvements in the relevance of publishers’ access portals, via semantic enrichment, annotations and “recommendation” features, are vital for better sharing of the knowledge contained in the articles;

  • automatic categorisation, the classification of published work, and links between publications beyond simple citations: these all use natural language for their analyses and for building derived knowledge (a minimal sketch follows this list);

  • revamped bibliometrics and the development of new metrics and ways of measuring the impact of publications, leading to progress in the way researchers, structures, programmes and institutions are assessed;

  • changes in scientometrics: retrospective analysis of fields of research, measurements of research performance, cross-fertilisation between disciplines;

  • seeking out and managing pools of experts;

  • the development of the predictive approach by automated generation of hypotheses from mining scientific literature.
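The categorisation and classification services listed above all begin with the same operation: turning unstructured text into a representation a machine can compare. Here is a minimal sketch of that step, assuming Python with scikit-learn and a fabricated toy corpus of abstracts; none of the corpus or parameter choices come from the white paper itself:

```python
# Minimal sketch of automatic categorisation of scientific abstracts:
# unstructured text -> TF-IDF vectors -> unsupervised thematic clusters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [  # invented examples, standing in for a real corpus
    "Gene expression profiling of tumour cells under hypoxia.",
    "Deep learning models for protein structure prediction.",
    "Seismic wave propagation in heterogeneous media.",
    "Neural networks applied to genomic sequence analysis.",
    "Earthquake early-warning systems and sensor networks.",
    "Transcriptomic response of plants to drought stress.",
]

# Represent each abstract as a TF-IDF vector.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Group the articles into k thematic clusters without any manual labels.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

for cluster in range(k):
    print(f"Cluster {cluster}:")
    for text, label in zip(abstracts, labels):
        if label == cluster:
            print("  -", text)
```

On a real corpus, this vectorisation would be preceded by the semantic enrichment and annotation steps mentioned above, and the clusters would feed the categorisation and recommendation services the list describes.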

TDM therefore fosters the intensification of research. It stimulates fundamental research while also driving applied research. First of all, TDM makes possible quantitative analyses that are unprecedented in the history of science. Never before has it been possible to access, and potentially analyse, so many corpora of texts and data. The researcher’s task of scientific interpretation does not change in itself, but the range of research is expanding. Intensification also requires a transformation of the way in which researchers perceive science. Entire paradigms may need to change, as the Gargantext example shows.

TDM makes it possible to find new correlations or emerging trends. As a tool for derived research, text mining can reveal interactions that are often new, or that were previously impossible to detect at the transdisciplinary level.

Combining TDM with the dissemination of research data and the acceleration of knowledge acquisition will facilitate the reproduction of experiments and make it easier to assemble and assess evidence. In some cases, it should be easier to refute findings, leading to greater scientific relevance. Verifying research is an offshoot of the development of TDM.

Setting out the issues and designing research projects

Science progresses as the result of a series of methodological revolutions concomitant with an enrichment of the methodological arsenal available to researchers. The advent of digital science and the pervasiveness of instruments generating massive amounts of data have led to the birth of what Hey, Tolle and Tansley have called “data-intensive scientific discovery” (The Fourth Paradigm).

Scientists now entrust observation and measurement tasks either to single instruments shared with colleagues, such as particle accelerators or on-board telescopes, or to networks of many small instruments such as oceanographic buoys and weather or seismic stations. A science of data interpretation then develops downstream, of which TDM is the main pillar. But this development of algorithms for data analysis has to be backed up by less visible but equally important functions: infrastructures for computation and for the management and sharing of data.

The scientific approach has traditionally been based on deductive reasoning, starting from theories serving as premises to be tested. Hypotheses are based on theoretical knowledge, while the observations collected (data) are used to confirm these hypotheses (or refute them if they challenge the theory). A large part of Western education is based on this approach, as soon as children reach the “age of reason”. But this pre-eminence of theory over experimentation has been challenged over the last few years, notably with the arrival of Big Data and the development of TDM.

This way of reasoning has been turned on its head: correlations revealed between aspects of the observations concentrated in the datasets suggest hypotheses from which to establish theoretical models of behaviour. Of course, this inductive reasoning does not guarantee the conclusion and can lead to multiple successive iterations. Abductive reasoning, involving logical inference from observations to theory, focuses on the simplest and most probable hypotheses and has been described as “inference to the best explanation”1. Inference engines using artificial intelligence are often based on this pragmatic abductive approach, which has proved so fruitful for developing TDM software.
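To make the abductive pattern concrete, here is a minimal sketch in Python. The hypotheses, priors and likelihoods are invented purely for illustration and do not describe any actual inference engine: each candidate hypothesis is scored by how well it explains the observation, weighted by its prior plausibility, and the best explanation is selected.

```python
# Toy "inference to the best explanation": score each hypothesis H by
# P(observation | H) * P(H) and keep the most probable one. All numbers
# below are fabricated for the example.
hypotheses = {
    # hypothesis: (prior plausibility, probability it produces the observation)
    "instrument drift": (0.30, 0.70),
    "new physical effect": (0.05, 0.90),
    "data-entry error": (0.65, 0.40),
}

observation = "anomalous spike in the measurement series"

def score(prior: float, likelihood: float) -> float:
    # Unnormalised posterior, as in a naive Bayesian abduction step.
    return prior * likelihood

best = max(hypotheses, key=lambda h: score(*hypotheses[h]))
for h, (prior, lik) in hypotheses.items():
    print(f"{h:20s} prior={prior:.2f} likelihood={lik:.2f} "
          f"score={score(prior, lik):.3f}")
print(f"Best explanation for '{observation}': {best}")
```

Note how the most exciting hypothesis loses to a duller one with a higher prior: the pragmatism of abduction lies precisely in that trade-off between explanatory power and plausibility.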

The use of TDM and the organisation of knowledge sharing, together with the changes in scientific methods, will enable research communities to build new research projects and discover new topics.

Constructing a project in advance through the use of TDM offers enhanced control and productivity compared with the way decisions about research projects are currently made. Research projects structured in this way will enable a better allocation of resources.

Optimising the governance of scientific systems


TDM will thus contribute to a better “governance of scientific systems” by streamlining “the decision-making process”, and by “priority setting, the allocation of funds and the management of human resources in a way that effectively responds to the concerns of the various stakeholders involved in the system”.2

Indeed, TDM analysis of the results of research funding programmes (publications, patents, etc.) identifies more accurately the fields that have proved most fruitful, as well as showing links between disciplines. Tools for viewing or mapping cooperation across different disciplines or geographical areas, either nationally or internationally, can bring out this information, which is essential for guiding scientific research.

The use of this information by those with decisions to make about scientific policy helps organisations define their scientific strategies, both in the field of basic research and in its financial exploitation.

Exploiting scientific data

TDM and the massive analysis of data will not only make it possible to exploit the new knowledge resulting from automated processing, with the innovations and discoveries that follow; they will also enable the exploitation of the masses of unused data saved on researchers’ hard drives and never shared, despite their scientific value.

Exploiting “lost science”


A “water tower effect”3 is possible for the entire world of research, with the processing of forgotten data.


At the moment, large quantities of data collected during experiments are “lost”. It has been estimated that only about 10% of these data are published, while 90% remain on computer hard drives. In some disciplines, valid and important results remain unpublished and much of the data is underused or lost (this is particularly the case for data from negative results, which are simply forgotten). As for the output of large-scale instruments, the raw data collected are so massive that they are processed directly online without being stored, as is the case, for example, with satellite observations.4

For Cristinel Diaconu, the loss of data seems to be sadly common:

“When we want to access the data, either they can no longer be found, or we can find them, but we don’t know what to do with them because we don’t understand what they mean. Worse still, data have sometimes been destroyed by the researchers, who thought they were useless after the end of a project. You don’t realise it at the time, but ten years later, you might be working on a project with echoes of the previous project, and the potential for discovery is lost because you no longer have the funding to repeat these manipulations.”

Yet these archived data can be a real treasure trove. Cristinel Diaconu has calculated the financial value of these data for his own field: “My team and I realised that the additional cost of preserving the data is in the order of one thousandth of the total budget. Yet the publication of new articles resulting from the exploitation of archives in the following five years provides a profit of 10%. This is research that costs practically nothing! If there is no strategy for preserving data, we miss potential discoveries and research at very low cost. If they have been properly preserved, data cost almost nothing.”

Free reuse of the results of TDM

Article 38 of the Act for a Digital Republic stipulates that “files produced at the conclusion of the research activities for which they were produced … constitute research data.”

The legal framework governing “research data” is laid down in:

  • Article 30 of this same Act: “Once the data from a research activity financed at least 50% by grants allocated by the state, by regional or local authorities or public institutions, by grants from national funding agencies or by European Union funds, are no longer protected by specific rights, or special regulations, and they have been made public by the researcher, the research establishments or organisation, they can be freely reused.”

  • Act No. 2015-1779 of 28 December 2015 on free access to and the terms of reuse of public sector information (known as the Valter Act), which brings teaching establishments and institutions into the age of Open Data and imposes the obligation to make their data available “in electronic form” complying with an “open and easily reusable standard that can be exploited by an automated processing system”.

These research data therefore constitute a source of knowledge that must be made available to the scientific communities for the purposes of acquiring new knowledge and enabling new research work.

However, Open Science must not hamper the economic exploitation of research.

The provision of scientific data on open science platforms must not impede:

  • the exploitation of data, including by patents, and respect for secrecy and specific provisions, such as Restricted Regime Areas;5

  • respect for contractual rules of confidentiality.

The way research data is made available must also be organised to take into account the different practices of the various scientific communities.

Developing these innovations

TDM techniques rely on innovative tools (software, supercomputing, technologies for the massive collection and processing of data), and French public research organisations and French companies contribute to their development.

TDM is currently one of the major sectors with potential for innovation within the digital economy, and these technologies are central to it. This can be seen in the awards acknowledging work on TDM. Xavier Tannier, from the CNRS’s LIMSI laboratory at Orsay (Laboratoire d’informatique pour la mécanique et les sciences de l’ingénieur), and Ioana Manolescu of INRIA have received a “Google Award”6 for their algorithm for text mining in print media. In the field of medical informatics, Pierre Zweigenbaum7 was inducted into the American College of Medical Informatics in 2014.

TDM has potential benefits for the French economy: several start-ups in France have been created as a result of the need for research projects to develop TDM tools jointly with private partners (at the CEA in particular).

In his article “Mining external R&D”,8 Alan Porter insists on the desirability of such innovation for businesses. Companies with research and development centres can certainly expect to benefit from applied research on a broader scale, using larger corpora. The advantage is even greater for companies that do not have an R&D centre. In this respect, text mining is a factor for growth. This is the result of the positive externalities of its development, based on three points: product innovation, productivity, and increased consumer satisfaction. The extent of this economic impact is correlated with the legal freedom to carry out TDM: the broader the possibilities of TDM, the greater the economic impact.

Support for public decision-making


TDM can help improve decision-making. In the public sector, it would facilitate the development of evidence-based policies (EBP). EBP is a policy approach based on the empirical analysis of situations. This practice, which consists in analysing large factual databases for decision-making purposes, was originally developed in the medical field (evidence-based medicine) at the beginning of the 1990s.9 This approach then spread to other areas of public decision-making, such as the protection of the environment and security.


In the United Kingdom, these empirical judgements began to play an important role in public life under the government of Tony Blair. A user guide was even drawn up by the Overseas Development Institute (ODI) in order to disseminate these practices.10 The concept is divided into several branches: evidence-based decisions, evidence-informed decisions, and evidence-aware decisions. TDM can optimise economic, social and environmental policies, as it relies not only on opinions and theoretical models, but also on factual analysis.


TDM can be used for decision support in other fields (not necessarily public), for example in geography (economic geography, social geography, geomarketing, etc.). The idea is to cross-reference geo-tagged data in order to identify the characteristics of geographical areas. To do this, “the descriptive techniques used in data mining and more precisely Agglomerative Hierarchical Clustering (AHC) were used to identify the number of homogeneous groups based on Ward’s aggregation criterion and Euclidean distance, using the R Data Mining Software.”11 The data processed by TDM can thus be used to support Geographic Information Systems (GIS).
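The quoted study used R; purely as an illustrative re-expression, the following sketch performs the same steps (Ward’s aggregation criterion with Euclidean distance) in Python with SciPy, on fabricated indicators for five hypothetical geographical areas:

```python
# Agglomerative Hierarchical Clustering with Ward's criterion and Euclidean
# distance, mirroring the method quoted above. The five "areas" and their
# two indicators are invented for the example.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# rows: areas; columns: e.g. median income, population density (toy values)
areas = np.array([
    [21.0, 1200.0],
    [22.5, 1100.0],
    [35.0,  300.0],
    [34.0,  280.0],
    [50.0, 4000.0],
])

# Ward linkage implies Euclidean distances between observations.
tree = linkage(areas, method="ward")

# Cut the dendrogram into three homogeneous groups of areas.
groups = fcluster(tree, t=3, criterion="maxclust")
print("group assignment per area:", groups)
```

In a geomarketing setting, each resulting group of areas could then be mapped in a GIS, as the paragraph above suggests; in practice the indicators would also be standardised before clustering so that no single unit of measurement dominates the Euclidean distance.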


Econometrics is another data-driven science that could benefit from progress in TDM. This discipline enables observers to monitor the way the economy is really functioning, which is useful in situation analysis and public and private decision-making. However, there are currently few links between econometrics and TDM/machine learning/Big Data. A possible explanation is the difficulty in determining causal links and in quantifying the impact of a variable on an observed phenomenon.12
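A small simulation can make this difficulty tangible: when an unobserved variable drives two others, mining will report a strong correlation between them even though neither causes the other. The sketch below, assuming NumPy and a fabricated confounding structure, shows the apparent “impact” of x on y collapsing once the confounder is controlled for:

```python
# Simulated confounding: z drives both x and y, so x and y correlate
# strongly even though x has no causal effect on y at all.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)            # unobserved common cause
x = 2.0 * z + rng.normal(size=n)  # x depends on z only
y = 3.0 * z + rng.normal(size=n)  # y depends on z only (not on x)

print("corr(x, y):", round(float(np.corrcoef(x, y)[0, 1]), 3))

# OLS of y on x alone exaggerates x's "impact"; adding z reveals it is ~0.
X1 = np.column_stack([np.ones(n), x])
X2 = np.column_stack([np.ones(n), x, z])
beta1, *_ = np.linalg.lstsq(X1, y, rcond=None)
beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)
print("coefficient on x, without z:", round(float(beta1[1]), 3))  # biased
print("coefficient on x, with z:   ", round(float(beta2[1]), 3))  # near 0
```

This is exactly the gap the footnoted discussion points to: mined correlations are cheap to find, but a defensible causal coefficient requires identifying and measuring the confounders, which TDM alone does not do.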


Finally, the research activities and publications resulting from the use of TDM require a high degree of transparency and rigour as regards methods, peer reviews and external influences. This information helps researchers better understand the systemic aspects of the topic studied and also provides a means of assessing the reliability of the results. The analysis of empirical facts through TDM technologies thus offers – in some cases – the possibility of analysing the veracity of political statements and providing support for decision-making.13

Notes

1 https://en.wikipedia.org/wiki/Abductive_reasoning.

2 OECD, Governance of Public Research: Toward Better Practices, 2003.

3 The expression “water tower effect” refers to an economic concept. This is an echo of the “trickle-down” theory, extended to include innovations and not just tax policy. To summarise, TDM is a set of techniques that can be used in numerous sectors; it “flows” over them and can therefore “irrigate” different economic activities.

4 COMETS, “The ethical issues of scientific data sharing”, 2015, p. 4.

5 Article R.413-5-1 of the French Penal Code: “restricted regime areas, as referred to in Article R. 413-1, are those for which protection is imperative in order to prevent essential elements of the scientific or technical potential of the Nation from:
1° being appropriated in a manner that could lead to weakening the Nation’s defences, compromising its security or prejudicing its other fundamental interests;
2° or being diverted for purposes of terrorism, proliferation of weapons of mass destruction and their means of delivery or contribution to the increase of military arsenals.
The restricted regime areas may include, within their scope, premises for which enhanced protection is justified by the storage of products or by the execution of activities involving specific risks with regard to the imperatives mentioned in the first three sub-paragraphs.”

6 https://archives.limsi.fr/Actualites/GoogleAward.html

7 https://perso.limsi.fr/pz/

8 http://www.sciencedirect.com/science/article/pii/S0166497211000113

9 Catherine Laurent, Jacques Baudry, Marielle Berriet-Solliec, Marc Kirsch, Daniel Perraud, Bruno Tinel, Aurélie Trouvé, Nicky Allsopp, Patrick Bonnafous, Françoise Burel, Maria José Carneiro, Christophe Giraud, Pierre Labarthe, Frank Mastose and Agnès Ricroch, “Pourquoi s’intéresser à la notion d’« evidence-based policy » ?” Revue Tiers Monde, 4(200), 2009, http://www.cairn.info/revue-tiers-monde-2009-4-page-853.htm

10 Overseas Development Institute, Evidence-Based Policymaking: What is it? How does it work? What relevance for developing countries?, November 2005.

11 Marwa Chalgham, Abderrahmane Fadil and Abdelaziz Dammak, Le Data Mining pour l’aide à la décision en géomarketing (Datamining as a decision support tool in geomarketing), ROADEF – 15e congrès annuel de la Société française de recherche opérationnelle et d’aide à la décision, February 2014, Bordeaux, France, <hal-00946452>

12 Quora, “Why is econometrics isolated from the big data/machine learning revolution?”, 2013, https://www.quora.com/Why-is-econometrics-isolated-from-the-big-data-machine-learning-revolution; Hal R. Varian, “Big Data: New Tricks for Econometrics”, Journal of Economic Perspectives, 28(2), Spring 2014.

13 NESTA, Using Research Evidence: A Practice Guide, Section A: “What is evidence-informed decision-making, and why focus on research?”, 2015.

CC BY-NC-ND 4.0

The text alone may be used under the CC BY-NC-ND 4.0 licence. All other elements (illustrations, imported files) are “All rights reserved” unless otherwise stated.
