
Data Sciences: From First-Order Logic to the Web

Serge Abiteboul

Inaugural lecture given on Thursday 8 March 2012

Author's note

The Collège de France Chair of Information Technology and Digital Sciences is supported by the Institut national de recherche en informatique et en automatique (INRIA – French National Research Institute for Computer Science and Applied Mathematics).


1Mr Administrator,
Dear colleagues,
Dear friends,

2Since today is International Women’s Day, I would like to dedicate my inaugural lecture to the woman studying computer science, to the woman studying mathematics or sciences, who is so rare on our campuses. She is sitting in the front row. She may be typing a text message with both thumbs. She may be Michel Serres’ "Thumbelina", offering me a perfect transition to situate the subject of my lecture:

I do not know of any living being, cell, tissue, organ, individual and maybe even species that cannot be said to store information, process information, and emit and receive information.
Michel Serres

  • 1 Gérard Berry, Pourquoi et comment le monde devient numérique, Collège de France / Fayard, coll. « L (...)
  • 2 Gérard Berry, Penser, Modéliser et maîtriser le calcul informatique, Collège de France / Fayard, co (...)

3The information that is stored, processed, and exchanged, is at the heart of the activity of living beings, of the objects in this world, of human associations. By helping us to manage information represented in digital format, computer systems have profoundly transformed our lives. Gérard Berry already spoke about the digitisation of information in his inaugural lecture1. The subject I have the great honour of addressing as the Collège de France Chair of Information Technology and Digital Sciences is the management of digital data by computer systems. I hope that, in keeping with my brilliant predecessors in this Chair2, I will be able to share the richness and beauty of computer science, and thus participate in teaching “knowledge in the making”.

  • 3 “Natural languages” refer to languages elaborated over time by groups of speakers, like French or E (...)

4To obtain information, we can query a database management system. To do so, we express our queries in a simple computer language, perhaps using graphics, perhaps even in our natural language3. The system translates this request into a formal language. This consists of a syntax, which allows the user’s query to be specified, and a formal semantics that gives this syntax an exact meaning. Mathematical logic allows for this kind of formal language. In this lecture, I will discuss the profound ties between what I here call data sciences and mathematical logic or, to be more precise, first order logic.

5Nowadays, users mostly search for information on the Web. Whereas English is prevalent within information technology, French is sometimes more precise, more elegant. I unquestionably prefer the word informatique (referring to “information science and technology”) to computer science (too limiting), and courriel to email. I also prefer the word Toile to the more common English term Web, for with the word Toile, the etymological reference to a spider’s web is so aptly completed with the references to a painter’s canvas or a cinema screen. The word Toile also allows an overly restrictive focus on a particular medium, the World Wide Web, to be transcended, so that a world of globally interconnected content can be envisaged more generally. While the word Toile is used in the original text, we will use the word Web in this English translation.

6We will consider the Web’s information systems that serve as entry points to information of all sorts. The most widespread examples of this kind of system are search engines like Google, which provides an index to billions of documents on the Web, and in a way allows us to think of the Web as a gigantic database. As for social network systems like Facebook, they serve as entry points to hundreds of millions of users’ personal data.

7The Web’s information systems, just like centralised data management systems, are mediators between intelligent individuals with little desire to trouble themselves with programming details, and physical objects like discs or USB keys. We are interested in intelligent systems that can manage information, understand it and make it available to human users. This last sentence deliberately echoes an anthropomorphic view of computer systems. We interact with machines that are becoming ever more autonomous, more difficult to distinguish from human beings. While the intelligence of a database management system is a small step toward artificial intelligence as defined by Alan Turing, the intelligence of the Web is a recent consideration, from both a philosophical and a scientific point of view. In this lecture I will discuss the emergence of collective knowledge fuelled by the sharing of large volumes of information. We will try to imagine what tomorrow’s Web might be like with millions, perhaps even billions of interconnected machines reasoning collectively.

8This lecture is organised as follows. First, I will go over a few fundamental notions about data, information and knowledge. Second, I will discuss two of information technology’s gleaming successes in the 20th century:

  • one relates to data, with relational database management systems;

  • the other relates to information, with the Web’s search engines.

9I will then consider two great challenges for the 21st century:

  • how to make collective knowledge emerge from the Web;

  • how to move to a “Web of knowledge”.

Look Dave, I can see you’re really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.
HAL in
2001: A Space Odyssey

1. Data, information and knowledge

  • 4 Giving a precise definition of these notions is not an easy feat. See for example: Luciano Floridi, (...)

10Temperature measurements taken every day at a weather station are data. A graph showing the evolution of the mean temperature over time, in a given place, is information. The fact that the temperature on Earth increases as a result of human activity is knowledge. These three notions are very closely linked. Roughly speaking, here is how I will use them4:

  • A piece of data provides a basic description, typically numerical for our purposes, of a given reality. It can be, for example, an observation or a measurement.

  • Drawing on the collected data, information is obtained by organising and structuring data so as to derive meaning.

  • By understanding the meaning of information, we obtain knowledge, in other words, “facts” held to be true in an individual's world, and “laws” (logical rules) governing this world.

11The starting point for the representation of data is the bit, a variable that can have the value 0 or 1. Data will be represented by sequences of bits. For example, the position of a lift in a six-floor building could be represented using 3 bits: 000 for the ground floor, 001 for the first floor, etc., 110 for the sixth floor (number 6 in base 2). A character is represented by a byte, which is a sequence of 8 bits (up to 4 bytes are needed for each character in certain alphabets for encodings like UTF-16). A text can be thought of as a series of bytes. The byte is the basic measure; 10³ bytes form a kilobyte; 10⁶ a megabyte; 10⁹ a gigabyte; 10¹² a terabyte; etc.
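
As a small, concrete illustration (my own sketch, not part of the lecture), the following Python lines print the 3-bit codes of the lift positions and count the bytes taken by a short text under two common encodings:

# A small illustration of the encodings just described.

# The position of a lift in a six-floor building fits in 3 bits.
for floor in range(7):                      # 0 = ground floor, ..., 6 = sixth floor
    print(floor, format(floor, "03b"))      # 000, 001, ..., 110

# A text is a sequence of bytes; how many depends on the encoding.
text = "Données"
print(len(text.encode("utf-8")))            # 8: the accented character takes 2 bytes
print(len(text.encode("utf-16")))           # 16: 2 bytes per character plus a 2-byte marker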

12A randomly selected series of bits is unlikely to have any meaning. Let us rather look at data to which we can give meaning. Consider for example the sequence of bits that would represent the following table:

Manon     Imperial College   London
Pierre    ENS                Cachan
Jérémie   Mines de Paris     Paris
Marie     ENS                Cachan
Myriam    Paris 11           Orsay

13An alien would surely not understand anything about this sequence of bits. But a program, a text editor, was able to analyse it and present this table in a form with which we are familiar.

14From data to information. The entries in this table are chains of characters. For the moment, they are data. Now, we can specify that the first column contains the first names of PhD students at a summer school in Cargèse in Corsica, the second, their university, and the last column the town where their university is located. By acquiring a meaning, these data have become information. Note that the absence of data is also informative. For example, there is no line saying “Philippe, ENS, Cachan”. This is also information!

15From information to knowledge. This information transforms into knowledge when it is introduced into a logical world. Each line becomes a statement, for example “Manon, a student at Imperial College in London, attended this summer school”. Suppose, for instance, that we know that “this table contains the entire list of all PhD students at this summer school” and that “all computer science PhD students attended this school”. We can then infer that either Philippe is not a PhD student at Cachan, or he is not studying computer science.

16We have gone from data to information, and from information to knowledge. Obviously, the boundaries between these concepts are not clear-cut. The world we are seeking to model with knowledge is complex and partly eludes us. For example, while some people think that Manon is a student at Imperial College, others may believe that this is not true.

Storage

  • 5 Its data persist after the computer has been switched off.

17Several types of media can be used to store digital data, namely: flash memory, optical discs (which include CDs and DVDs), hard drives (or magnetic discs), and magnetic tapes. These media provide large volumes of "persistent”5 storage, unlike Random Access Memory (RAM), made with electronic components.

  • 6 The following trends have been observed until now. Regarding storage capacities, the hard drives’ m (...)

18Let me offer some numbers to make this more tangible. The computer on which I am writing this text has a 4-gigabyte Random Access Memory and, instead of a disc, to store persistent data, it uses some one hundred gigabytes of flash memory, a new technology that is faster than hard drives but also more expensive. While we are on the topic, I will point out that technology is becoming ever more complex. The numbers are moving very fast when it comes to computer equipment. Prices are decreasing, speed of access or transfer is increasing, and volumes are growing6. In a few years' time, four gigabytes of memory will seem like nothing to the reader of this text.

19Let us not forget that the data we use are less and less often stored locally on our computers, and increasingly on machines connected somewhere on the network. For example, the document in which I am drafting this text is on Google Docs, stored on the disc of an unknown machine, at an unknown location. We say of these data that they are “on the cloud”. From a functional point of view, we therefore need to distinguish between access to data on a very fast local network, which will take a few milliseconds, and access via the Internet to data that might be on the other side of the world, which may take a second or longer.

20These technical aspects enable us to understand what can be achieved, how and at what price. I have deliberately simplified them somewhat to make them easier to understand. And a few words to those who like to hide behind “I don’t understand anything about computers”: the portrayal of IT in the media suffers from excessive fascination with hardware and programming. In my opinion, understanding the highly complex functioning of a processor or graphics card is of little importance. It is however crucial to master the basics of algorithms. It is not necessary to know how to program either (even though experience of programming with a language like CAML – Categorical Abstract Machine Language – can make algorithms easier to understand). For performance purposes, it can be helpful to understand where the information we use is stored: in memory, on a disc or on the network. Most of all, it is essential to understand the meaning of this information, how it is represented, how it is organised.

21Here are some numbers to remember:

Storage medium   Access time                     Size
RAM              Microseconds                    Gigabytes (10⁹)
Hard disc        Milliseconds                    Hundreds of gigabytes or terabytes
Local network    Milliseconds or longer          Terabytes (10¹²)
The Web          Tenths of a second or seconds   Virtually ∞

Measuring zettabytes with coffee spoons

22By lining up bits, we can represent information. We can store more and more information so that we can find it on demand, like a virtually unlimited backup of our personal memory.

23We can go beyond the sizes already mentioned by lining up bits:

Kilo   Mega   Giga   Tera    Peta    Exa     Zetta   Yotta
10³    10⁶    10⁹    10¹²    10¹⁵    10¹⁸    10²¹    10²⁴

24Let us briefly discuss these units of measurement. For example, this lecture should weigh some 100,000 bytes, in other words, 100 kilobytes. The kilobyte is a “cool” measure as it is almost acceptable to use 10³ = 1000 instead of 2¹⁰ = 1024, which makes it easy to switch from the decimal system, which is the most common, to the binary system, so dear to IT specialists. About ten of Chopin’s Nocturnes on my phone take up 75 megabytes. The video of my daughter’s graduation ceremony and its handful of gigabytes are bordering on the gigantic. According to the numbers published by Michael Brodie7, all the books ever written would require only 200 terabytes of plain text, while storing the data produced by the CERN particle collider in one minute requires about a hundred petabytes. Representing all sentences ever uttered would require a few exabytes. Finally, the zettabyte is the order of magnitude of the annual traffic on the Internet these days, as well as of the available storage (counting all the discs, magnetic tapes, CDs and DVDs in the world):

1,000,000,000,000,000,000,000 bytes!

25The fever of the powers of 10! Each year we produce more information than we can store. Two issues arise with this heady profusion of information:

  1. How to find the right information in this mass?

  2. How to choose what we want to save?

26One should, of course, take into account the nature of what is being stored. The space taken up by pictures is growing very fast, chiefly due to the improving resolution of CCTV cameras. But there is also a significant increase in content with rich semantics, which can be used directly as databases and metadata. The format of information in most of the examples we have looked at is very simple. Far more complex information can also be represented digitally, such as that contained in a living cell’s DNA. In a way, determining what information is at stake in a given object, from bacteria to phenomena like share prices or the movement of planets, is a crucial step towards understanding that object. But this is up to sciences other than computer science, like biology, financial mathematics, or astronomy. Once this information has been obtained, machines can store it, exchange it, analyse it, etc. We are reaching the data sciences.

27Having briefly discussed the nature and volume of information, I will now turn to the database systems that constitute the bedrock of the domain: relational systems.

2. Relational systems and first order logic

Logic is the beginning of wisdom, not the end.
Mr. Spock,
Star Trek

28In this part, I will discuss the computer systems that help us manage data. We have, on the one hand, a data server somewhere on the Web, with discs and their tracks, complicated access structures like indexes or B-trees, memory hierarchies and their caches and, on the other, a user. Suppose that the server belongs to IMDb, which manages a database on cinema. Suppose that the user, call her Alice, wants to know which movies were directed by Alfred Hitchcock. To do so, she enters keywords or fills in the fields of a form provided by IMDb. Her query will travel from her browser to the data server. There, the query will be transformed into a program, perhaps a complex one, which will be executed to obtain the answer. This program is not something Alice wants to write; in fact, she does not have to write it.

29The basic system with which data is managed is a file system. A file is a sequence of bits that can represent a song, a picture, a video, an email, a letter, a novel, etc. Your personal computer and your phone store data in file systems. And sometimes, when you can’t remember where you have put something, you “search” these file systems. Rudimentary. Yet we will see that this is all a Web search engine does, except that it does so on a worldwide file system. In this part, we will talk about systems that also manage data, but which are far more sophisticated than file systems: database management systems. These are complex pieces of software, the fruit of decades of research and development. They allow individuals or programs to express requests to query or modify databases. We will focus here on the most widespread of these systems, relational systems, which include very widely used commercial software, like the Oracle database system, and very popular free software, like MySQL.

Relational calculus and algebra

  • 8 Serge Abiteboul, Richard Hull and Victor Vianu, Foundations of Databases, Addison-Wesley, 1995: htt (...)

30A database management system serves as a mediator between individuals and machines. To better adapt to individuals, it has to organise and present data intuitively. It also has to provide a language that can be used easily by human beings to express requests. These requirements are the starting point of the relational model8 introduced by Ted Codd, a researcher at IBM, in the 1970s. At the end of the 19th century (long before computer science and databases were invented), mathematicians developed first order logic, to formalise mathematicians’ language. Codd had the idea of adapting this logic to define a data management model, the relational model.

Figure 1. A relational database

Film
Title           Director      Actor
Casablanca      M. Curtiz     Humphrey Bogart
Casablanca      M. Curtiz     Peter Lorre
Les 400 coups   F. Truffaut   Jean-Pierre Léaud
Star Wars       G. Lucas      Harrison Ford

Screening
Title        Theatre      Time
Casablanca   Lucernaire   19:00
Casablanca   Studio       20:00
Star Wars    Sel          20:30
Star Wars    Sel          22:15

31In the relational model, the data is organised into two-dimensional tables that we call relations. Unlike mathematicians, we assume that the relations are of a finite size. To illustrate this, I will use a database consisting of a Film relation and a Screening relation (Figure 1). A row in these relations is called an n-tuple, where n is the number of columns. For example, 〈 Star Wars, Sel, 22:15 〉 is a 3-tuple, a triple, in the Screening relation. The columns have names, called attributes, like Title.

32The data is queried using relational calculus as a language. Relational calculus (very strongly inspired by first order logic) uses names that represent relations like Film or Screening, entries in these relations like “Star Wars”, variables like t, d, and logical symbols, ⋀ (and), ⋁ (or), ¬ (not), ⇒ (implies), ∃ (exists), ∀ (for all). With all this, logical formulas can be built, such as:

qHB = ∃ t, d ( Film(t, d, “Humphrey Bogart”) ⋀ Screening(t, x, y) )

33If this seems cryptic to you, in English it reads: there exists a title t and a director d such that the n-tuple 〈 t, d, “Humphrey Bogart” 〉 is found in the Film relation, and the n-tuple 〈 t, x, y 〉 in Screening. Note that x and y are not quantified in the formula above; we say of these two variables that they are free. The formula can be seen as a relational calculus query. It reads: give me the theatres x and the times y, if a director d and a title t exist such that... In other words: “Where and at what time can I see a movie with Humphrey Bogart?”. This language, relational calculus, makes it possible to state queries in a syntax that avoids the ambiguities of our natural languages. If they could love, machines would love the simplicity and the precision of relational calculus. In practice, they use the SQL language (Structured Query Language), which expresses the same queries differently. For example, the previous query is expressed:

select Theatre, Time
from Film, Screening
where Film.title = Screening.title and actor = “Humphrey Bogart”

  • 9 SQL goes further than relational calculus. For example, it allows for results to be sorted and for (...)

34It almost makes sense, no? And whether Alice expresses herself in French or uses a graphical interface, the system transforms her query into an SQL query9.
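
To make this tangible, here is a minimal sketch of my own (not part of the lecture): it loads the two relations of Figure 1 into a small relational system (SQLite, chosen purely for illustration) and runs Alice's query exactly as written above.

# Load the relations of Figure 1 and run the SQL query of the example.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE Film(title TEXT, director TEXT, actor TEXT)")
db.execute("CREATE TABLE Screening(title TEXT, theatre TEXT, time TEXT)")
db.executemany("INSERT INTO Film VALUES (?, ?, ?)", [
    ("Casablanca", "M. Curtiz", "Humphrey Bogart"),
    ("Casablanca", "M. Curtiz", "Peter Lorre"),
    ("Les 400 coups", "F. Truffaut", "Jean-Pierre Léaud"),
    ("Star Wars", "G. Lucas", "Harrison Ford"),
])
db.executemany("INSERT INTO Screening VALUES (?, ?, ?)", [
    ("Casablanca", "Lucernaire", "19:00"),
    ("Casablanca", "Studio", "20:00"),
    ("Star Wars", "Sel", "20:30"),
    ("Star Wars", "Sel", "22:15"),
])

rows = db.execute("""
    SELECT Theatre, Time
    FROM Film, Screening
    WHERE Film.title = Screening.title AND actor = 'Humphrey Bogart'
""").fetchall()
print(rows)   # e.g. [('Lucernaire', '19:00'), ('Studio', '20:00')]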

35The above query in relational calculus (or in SQL) clearly states what Alice is asking. This query has a precise meaning, a precise semantics. It defines an answer, a set of n-tuples. The exact definition of answers is beyond the scope of this lecture. What the query itself or its meaning do not explain, is how to compute the answer. This computation will use the relational algebra introduced by Codd. An important step consists in transforming the calculus query into an algebraic expression, which makes it possible to compute the answer to this query.

36Relational algebra consists of a small number of basic operations which, when applied to relations, produce new relations. These operations can be composed to build more and more complex algebraic expressions. To answer the query of the example, we will need three operations, join, selection and projection, which we will use to compose the following relational algebra expression:

EHB = ΠTheatre,time (Πtitle (σactor = “Humphrey Bogart”(Film)) ⋈ Screening)

Figure 2. The evaluation of an algebraic query


37We can follow the evaluation of this algebraic expression by looking at Figure 2. The selection operation, denoted σ, filters a relation, only keeping the n-tuples that satisfy a given condition, here actor = “Humphrey Bogart”. The projection operation, denoted Π, also allows for the information in a relation to be filtered, but this time by eliminating columns. Perhaps the most exotic operation in algebra, join, denoted ⋈, combines n-tuples from two relations. Other operations not illustrated here make it possible to perform union and set-difference operations between two relations, or to rename attributes. The power of relational algebra stems from the possibility it offers of composing these operations. This is what we did in the algebraic expression EHB, which allows us to evaluate the answer to query qHB.

38My presentation is brief but it is important for the reader to understand the value of algebra. It is relatively simple to write a program that evaluates the answer to a relational calculus query. It is more tricky to get a program to compute this answer efficiently. Relational algebra breaks down the work. A specific, very efficient program can be used for each of the algebra operations; the result is obtained by composing these programs. Efficiency stems mainly from the fact that the operations deal with sets of n-tuples rather than with the n-tuples one by one.
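
To give a feel for these operations, here is a toy Python sketch of my own (nothing like a real query engine): selection, projection and a natural join are applied to the relations of Figure 1 and composed into the expression EHB. Note that the selection is applied as early as possible, a heuristic we will meet again in the next section.

# Relations are lists of dictionaries; this mimics the algebra, not a real system.

def selection(relation, predicate):
    # sigma: keep only the tuples satisfying the predicate.
    return [t for t in relation if predicate(t)]

def projection(relation, attributes):
    # pi: keep only the listed columns, eliminating duplicates (set semantics).
    seen = {tuple(t[a] for a in attributes) for t in relation}
    return [dict(zip(attributes, values)) for values in seen]

def join(r1, r2):
    # natural join: combine tuples that agree on their common attributes.
    common = set(r1[0]) & set(r2[0]) if r1 and r2 else set()
    return [{**t1, **t2} for t1 in r1 for t2 in r2
            if all(t1[a] == t2[a] for a in common)]

film = [
    {"Title": "Casablanca", "Director": "M. Curtiz", "Actor": "Humphrey Bogart"},
    {"Title": "Casablanca", "Director": "M. Curtiz", "Actor": "Peter Lorre"},
    {"Title": "Les 400 coups", "Director": "F. Truffaut", "Actor": "Jean-Pierre Léaud"},
    {"Title": "Star Wars", "Director": "G. Lucas", "Actor": "Harrison Ford"},
]
screening = [
    {"Title": "Casablanca", "Theatre": "Lucernaire", "Time": "19:00"},
    {"Title": "Casablanca", "Theatre": "Studio", "Time": "20:00"},
    {"Title": "Star Wars", "Theatre": "Sel", "Time": "20:30"},
    {"Title": "Star Wars", "Theatre": "Sel", "Time": "22:15"},
]

# E_HB: select early, project to titles, join with Screening, project the answer.
bogart_titles = projection(selection(film, lambda t: t["Actor"] == "Humphrey Bogart"),
                           ["Title"])
answer = projection(join(bogart_titles, screening), ["Theatre", "Time"])
print(answer)   # two answers: (Lucernaire, 19:00) and (Studio, 20:00)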

39Codd demonstrated the following theorem:

A query can be expressed in relational calculus if and only if it can be evaluated with a relational algebraic expression, and it is simple to transform a calculus query into an algebraic expression that can evaluate this query.

40What have we learnt from Codd? Not much from a mathematical perspective. Relational calculus was borrowed from logicians. A (slightly different) algebraisation had even already been proposed by Tarski. But from a computer science viewpoint, Codd laid the foundations of data-centric mediation between individuals and machines. Thanks to him, we know that we can express a query using relational calculus, and that a system can translate this query into an algebraic expression and efficiently compute the answer. Yet, when Codd proposed this approach, the engineers who were then managing large volumes of data and applications showed a unanimous reaction: “Too slow! It won’t scale". They were wrong. To translate Codd’s idea into a billion-dollar industry, they were missing query optimisation. After years of effort, researchers managed to make relational systems run with acceptable response times. With these systems, the development of applications to manage data became a lot simpler, which led to a considerable increase in the productivity of programmers working on applications for managing large volumes of data.

Query optimisation

41An infinity of algebraic expressions exists for evaluating the same query. Although syntactically different, they define the same query. From a semantic point of view, they are equivalent. Optimising a query consists in transforming it into another query, that gives the same answers, but at the lowest possible cost (typically in terms of time). On a practical level, we must choose an execution plan, that is, an algebraic expression with specifications about the algorithm to use to evaluate each operation. An execution plan is basically a program to compute the answer. The first issue with this is that the search space, in other words, the space within which we are looking for the execution plan, is potentially gigantic. To avoid having to search it entirely, we use heuristics – methods, which, though they are not guaranteed to find the optimal plan, give quick and satisfactory results. These heuristics often use common sense rules like “the selections must be made as early as possible”. The other issue is that to choose the least time-consuming plan, the optimiser (the program in charge of optimisation) must be capable of estimating the cost of each potential plan, a complex task to which the system cannot afford to dedicate too many resources. The optimiser therefore “does its best”. And, typically, commercial optimisers like Oracle or DB2 do wonders with simple queries. Things get a lot messier with complex queries, for example ones that involve universal quantifiers, like the query: which actors have played only in comedies? Fortunately, in practice, most of the queries that are asked are simple.

42Underlying the discussion on query optimisation is the issue of the difficulty of obtaining certain types of information. This raises the notion of “complexity”. Since Gödel, we know that some propositions can be neither proven nor disproven, and that some problems cannot be solved. This notion of undecidability is painfully making its way into the wider public. On the other hand, the public sees the time taken by queries as a purely technical issue. Obviously, the computation time depends on the server’s power, the speed of the disc or the quality of the optimiser. But, apart from these aspects, there are tasks that intrinsically require more time than others. For example, we could display a googol, a 1 followed by 100 zeros, on the screen in a few fractions of a second, but we would not waste our time displaying all the numbers from 1 to the googol (1, 2, …, 10¹⁰⁰). That would simply take too long. Even some of the problems that have a short answer (for example, “yes” or “no”), though decidable, are intrinsically far more complex than others; there are even some that cannot be solved in a reasonable amount of time. Sometimes this difficulty actually proves useful. The RSA cryptographic system is based on the fact that we are not able to factorise (generally speaking) a very large integer into prime numbers in a reasonable amount of time, and that it is therefore very difficult to decipher a message without knowing the secret code.

43Complexity becomes particularly significant for processing large volumes of data. For each specific query, we will want to know:

  • How much time is needed to compute its answer (time complexity);

  • How much disc space, or how much memory, is needed (space complexity).

44Clearly, these quantities depend on the size of the database. If the query takes time t and we double the size n of our data, do we need to wait the same amount of time (constant time) or double the amount of time (linear time in n)? Or does the time increase polynomially (in nᵏ where n is the size of the data) or even exponentially (in kⁿ)? This is not a trivial issue: with large volumes of data, a time complexity n³ would require a huge computing power, while a complexity 2ⁿ would be prohibitive.
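
To make these growth rates concrete, here is a back-of-the-envelope computation (my own figures, not the lecture's): a billion data items processed by a machine performing a billion elementary operations per second.

# How long do n, n**2 and n**3 elementary operations take at 1e9 operations per second?
n = 10**9                       # one billion data items
ops_per_second = 10**9
for cost in (n, n**2, n**3):
    years = cost / ops_per_second / (3600 * 24 * 365)
    print(f"{cost:.0e} operations ~ {years:.1e} years")
# n     ~ 3.2e-08 years (about one second)
# n**2  ~ 3.2e+01 years
# n**3  ~ 3.2e+10 years -- and 2**n is beyond astronomical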

45Two comments will enable me to clarify this notion of complexity:

  1. Data complexity. In computer science, complexity is typically measured in the size of the problem, which corresponds here to the size of the data plus the size of the query. However, as queries are typically significantly smaller, it is far more constructive to think about complexity only in terms of the size of the data. We call this data complexity.

    • 10 For these “weak” complexities, the precise model of computation is important. This result is for co (...)

  2. Lower and upper bounds. If a program responds to a query in time n² given data of size n, this only proves that it is possible to answer the query in time n², which provides an upper bound. There may exist another program that can compute the answer faster, perhaps in constant time. If we can show that a minimum time n log(n) is necessary, this provides a lower bound. For example, to compute the number of n-tuples in the join between two relations, n log(n) is both a lower and an upper bound10.

  • 11 One example of a difficult problem in NP is that of the travelling salesman: given cities, routes b (...)

46Many complexity classes have been studied. Intuitively, a complexity class incorporates all the problems that can be solved without using more than a certain amount of available resources, typically time or space. For example, you may have heard about the class P, polynomial time. This encompasses all the problems that can be resolved in a time nᵏ, where n is the size of the problem and k a given integer. Beyond P, we reach NP (non-deterministic polynomial11) time and EXPTIME (exponential time), which are prohibitive amounts of time. Yet, it is important to put things into perspective. Computer systems regularly solve some complex NP problems. And, conversely, for 1.5 terabytes of data, n³ is still currently out of reach, even using all the computers on Earth.

47Before addressing other aspects of the relational model, let us think about the origins of relational systems’ immense success:

  1. Queries are based on relational calculus, a logical, simple and understandable language for human beings, especially in variants like SQL.

  2. A relational calculus query can easily be translated into a relational algebra expression that is simple for machines to evaluate.

  3. It is possible to optimise the evaluation of relational algebra expressions, as this algebra offers only a limited computation model.

  4. Finally, we will see that for this relatively limited language, parallelism allows scaling up to very large databases.

48To emphasise the last two points, which are crucial, we could attribute the following slogan to databases: “Here we only do simple things but we do them fast”. We will now see that, in doing these things, logic also has an important role to play.

Logic and complexity

49There are close ties between complexity classes and classes of problems that can be expressed in logic. Ronald Fagin, for example, showed that NP coincides with “existential second-order logic” (in which a variable represents a set of values). I will now mention some other ties. Although I will try to skim over the technical details as much as possible, this discussion may still seem a little challenging. I nevertheless encourage the reader to try to grasp the beauty of certain bridges between logic, which can be seen here as a language allowing human beings to communicate with machines, and the computations that these machines perform with limited resources.

50Relational queries can be evaluated in P. This brings up the following question: is it possible, with relational calculus, to express any query that a machine can compute in polynomial time? As it turns out, no! The following query is in P but cannot be expressed with relational calculus: given a graph G, and two points s (for source) and t (for target) in this graph, does a path exist from s to t? As surprising as it may seem, while we can ask with relational calculus whether a path of length 3 exists from s to t, or even of length k, where k is fixed, to ask whether a path of arbitrary length exists, an infinite disjunction would be needed: a path with length 1 or 2 or 3, etc. To remedy this problem, we can add to the language a mechanism whereby a relational calculus query is reiterated until a fixpoint is reached. For example, for the previous query, starting with the set T= { s }, we add to T all the points that can be reached from T following an edge in G, as long as T is increasing. When the fixpoint is reached, all that is left to do is to check whether t is in T.
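
In Python rather than in a query language, and on a small made-up graph, the idea of iterating until a fixpoint looks like this (a toy transcription of the computation just described):

# Starting from T = {s}, repeatedly add every node reachable by one edge,
# until T stops growing; then check whether t belongs to T.
def reachable(edges, s):
    # edges: a set of pairs (x, y) meaning there is an edge from x to y.
    T = {s}
    changed = True
    while changed:                       # iterate the step until a fixpoint is reached
        step = {y for (x, y) in edges if x in T}
        changed = not step <= T
        T |= step
    return T

G = {("s", "a"), ("a", "b"), ("b", "t"), ("c", "d")}
print("t" in reachable(G, "s"))          # True: a path s -> a -> b -> t exists
print("d" in reachable(G, "s"))          # False: d can only be reached from c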

  • 12 Since there is a finite number of possible states, it is possible to detect if the program has ente (...)
  • 13 Serge Abiteboul and Victor Vianu, “Generic Computation and its Complexity”, Proceedings of the 23rd(...)
  • 14 In our discussion, we assume that the domain is not ordered. The problem is different if we conside (...)

51The language thus obtained is called fixpoint. Since what all programs do, is add n-tuples to relations, and never invent new values, the complexity remains within P. When programs are also allowed to delete n-tuples, the language that is obtained is called while. Its complexity is pspace: it can be implemented using a polynomial space in the size of the data. This kind of program can enter a loop, i.e., never stop12, and therefore never reach a fixpoint. Both fixpoint and while allow the expression of very complex queries. Yet this is quite deceptive: some ultra-simple queries cannot be expressed in fixpoint, not even in while. This is the case for example with the following query: does graph G have an even number of nodes? I worked in this domain for a long time, with my colleague Victor Vianu (University of California, San Diego). We characterised what could be computed with these languages. We notably proved13 that fixpoint equals while if and only if P equals pspace, thereby providing bridges between different logics and complexity classes14.

52Note that, while pspace seems a lot more powerful than P, we do not actually know whether it is different from P. Nor do we know whether P ≠ NP, the most famous open problem in computer science. Although knowledge in the field of complexity theory has been improving, many challenges, both fascinating and difficult, remain. To conclude this discussion on the links between logic and complexity, I will mention another open problem: obtaining a logic that captures precisely the queries in P, that is, the queries that can be answered in a reasonable amount of time. In practice, this amounts to having a language that would allow the expression of all queries in P, but only of those in P. While in practice this language may actually be difficult to use, what a beautiful problem, that of building a bridge between logical expressiveness and efficient evaluation!

53And to conclude this part, I will now discuss two essential dimensions of data management: transactions and parallelism.

Transactions

To serve and protect data.
Anonymous

54The modernisation of production lines was initially mainly spurred by electronics and automation. Before prevailing in manufacturing as well, computers profoundly penetrated industry by modifying the way transactions, such as orders or payments, were managed automatically. A computerised transaction is the dematerialised form of a contract. It can cost far less than a real transaction involving the movement of people over much greater time scales. With functionalities considerably expanded by information technology, transactions are now at the heart of numerous applications that have largely contributed to popularising relational systems, such as banking applications.

  • 15 The applications running on the relational system contain bugs. The system itself contains its own (...)

55Relational systems meet the needs of transactions by supporting the notion of relational transaction. A relational transaction guarantees the correct execution of a sequence of operations, for example by preventing a sum of money from disappearing into the ether (with a bank account being debited without another one being credited). Even a computer failure15 should not cause an incorrect execution. But first, the notion of “correct execution” therefore needs to be formalised. It would obviously be impossible to do so precisely if one had to take into consideration the millions of things that these kinds of systems do. However, computer science, like mathematics, can use a fantastic tool: abstraction. We can consider what a relational system does from the perspective of relational transactions and the way they can modify data, while at the same time ignoring all the other tasks the system is performing. It then becomes possible to formally define the notion of correct execution.

56We can mention other tasks performed by relational systems in parallel with the evaluation of queries and the management of relational transactions. They also manage:

  • Integrity constraints (such as “all project managers must be recorded in the personnel database”),

  • Triggers (such as “if someone modifies the list of users, send a message to the security manager”),

  • User rights (to control who has the right to read or modify certain data),

  • Views (to adapt to the needs of particular users),

  • Archiving (to guarantee the survival of data),

  • Data cleansing (for example to eliminate duplicates and incoherence).

Parallelism

57To manage large volumes of data, parallelism is essential. Increasingly, machines are multiprocessor. But I would especially like to insist on the use of several machines working simultaneously on a common task. This type of approach is particularly fundamental for the Web, which involves considerable volumes of information:

    • 16 A cluster of servers consists of a group of computers, called nodes, which collaborate to solve a p (...)

  • Parallelism between the tens, hundreds, or perhaps even thousands of servers in a “cluster”16;

  • Parallelism between the millions of servers on the Web that function independently but are permanently interacting.

58To conclude this part, I would like to give two examples that give the reader a feel for the power of parallelism:

  • Rather than keeping its clients’ accounts in a single computer centre, a company can choose to let its regional centres manage them. If all the data were gathered on a single machine, comparable performance would require a highly sophisticated and therefore more expensive server. Note also that a distributed organisation is more suitable for decentralised company management.

  • Two types of organisation are possible to distribute films. With the first type, each film is kept on a single server. If the number of customers increases or if a film is too popular, the server quickly becomes saturated. With the other type of organisation, a peer-to-peer architecture, each machine is a peer, that is, both a server and a client. If a peer requests a film, it can store this film and transfer it to others later on. When a movie becomes more popular, it becomes available on a larger number of machines, and downloading becomes quicker and easier.

59In this part we have looked at data management in relational systems. We will now turn to the Web’s information systems and, to begin with, the most widespread ones: search engines.

3. Web search engines

Internet: we don’t know what we are looking for there; but we find everything we're not looking for.
Anne Roumanoff, French humourist

  • 17 Sergey Brin and Lawrence Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, Proc (...)
  • 18 Serge Abiteboul, Ioana Manolescu, Philippe Rigaux, Marie-Christine Rousset and Pierre Senellart, We (...)

60The World Wide Web, introduced by Tim Berners-Lee and Robert Cailliau around 1990, is based on hypermedia documents. This is the Web to which we so quickly became accustomed. The information is in a natural language and the texts are loosely structured with HTML tags, for example for titles or enumerations. The anchors on which Internet users can click lead not only to other HTML pages but also to pictures, music and videos. In this part, I will talk about one of the Web’s greatest success stories, the search engine. Web search engines save us from having to tediously browse through the multitude of pages, and instead plunge us into a global digital library. I will explain how this type of engine works. The reader can find further details in Sergey Brin and Lawrence Page’s landmark article17 or in our recent book18.

61The search engine sees the Web as a global library. The Internet user searches for information, and even though the Web most likely cannot answer all of his or her questions, this information may be somewhere amongst the truly extraordinary masses of information and knowledge it holds. Like children, we marvel at the tens of billions of documents on the Web. But from the youngest age a child learns to evaluate, classify, and filter the considerable mass of information it encounters. What about us? If the search engine did not help us to focus on a small number of pages, what would we do? The technical exploit lies in finding, within an instant and thanks to its index, the Web pages that contain the words of the query. The magic part is that, from among tens or even hundreds of millions of possible pages, the search engine comes up with the few pages that so often contain what the user is looking for. Let us examine each of these two dimensions of search engines.

A Web index

Google's mission: to organize the world's information and make it universally accessible and useful.
Google

62A Web index associates each word with the list of pages containing that word. For example, one entry in this index would be:

Casablanca → http://www.imdb.com/title/tt0034583/, http://films.com/Bogart/, …

which shows that the word "Casablanca” is found in these pages of the IMDb and films.com websites. If you give the search engine several words, like “Casablanca Bogart Bergman”, it will compute the list of Web pages that contain all these words.
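
The following Python sketch shows the principle (the page contents and the example.org URL are made up for illustration; the other two URLs are those of the example above): build the index in one pass over the pages, then answer a multi-word query by intersecting the corresponding lists.

# A toy inverted index: each word is mapped to the set of pages containing it.
from collections import defaultdict

pages = {   # hypothetical page contents
    "http://www.imdb.com/title/tt0034583/": "casablanca bogart bergman curtiz",
    "http://films.com/Bogart/": "bogart casablanca key largo",
    "http://example.org/starwars": "star wars lucas ford",
}

index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

def search(query):
    # Pages containing all the words of the query.
    word_sets = [index[word] for word in query.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

print(search("Casablanca Bogart Bergman"))
# {'http://www.imdb.com/title/tt0034583/'}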

63One serious difficulty stems from the size of this index: tens of terabytes of data for several billion pages. The server of such an index is faced with two scalability issues:

  1. To index more pages, the server needs more and more storage space to maintain the index, and each query becomes more and more costly to evaluate.

  2. If the number of users increases, the server receives more and more queries.

64In both cases, the server is quickly overwhelmed. To resolve this problem, we use parallelism and a fundamental technique in computer science, hashing.

65To illustrate this technique, we will use K =10 machines M1, …, M10 and a function H, which, applied to a word, returns a randomly picked integer between 1 and 10 (and which, for a given word, returns the same integer every time). This function is called the hash function. Machine H(w) is made responsible for a word w. Suppose a crawler (a program that roams the Web searching for pages) discovers the word “France” in a page with URL p. The index entry, which says that page p contains this word, is stored on machine H(“France”), say M7. The data from the index are therefore shared relatively evenly between the ten machines, which resolves the first problem. Suppose now that someone wants data corresponding to "France". It suffices to interrogate machine M7. Queries are thus also shared relatively evenly between the ten machines, which resolves the second problem. A separate index obviously needs to be created on each machine. The hashing technique can typically be used for this too, this time with a centralised approach.
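
Here is a sketch of that hashing scheme (my own illustration; the choice of hash function and the URL are made up): a hash function assigns each word to one of K = 10 “machines”, which thereby share both the index and the query load.

import hashlib

K = 10   # machines M1, ..., M10

def H(word):
    # Deterministic hash of a word onto a machine number between 1 and K.
    digest = hashlib.sha1(word.encode("utf-8")).hexdigest()
    return int(digest, 16) % K + 1

machines = {i: {} for i in range(1, K + 1)}     # each machine holds its part of the index

def add_entry(word, url):                       # the crawler found `word` in page `url`
    machines[H(word)].setdefault(word, []).append(url)

def lookup(word):                               # only machine H(word) is interrogated
    return machines[H(word)].get(word, [])

add_entry("France", "http://www.example.org/page1")   # hypothetical page
print(H("France"), lookup("France"))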

  • 19 Google calls its data centres farms. The number of farms and processors in each farm are kept secre (...)

66Now, if the size of the data we wish to index or the number of clients grows, we just need to add more machines. For example, Google uses thousands of machines in “farms”19 and has farms spread throughout the world. Parallelism has allowed for scalability. Brilliant did you say?

  • 20 This problem is part of the AC0 complexity class, which is the class of problems that can be solved (...)

67Why does it work? Thanks to parallelism. Generally speaking, can we take any algorithm and accelerate it at will by using more machines? The answer is no! Not all problems can be parallelised so easily. It just so happens that index management is a very simple problem that is very parallelisable20 (embarrassingly parallel). We can therefore easily envisage indexing more and more pages, tens of billions and more.

A fixpoint and a few algorithms

Playboy: Is your company motto really “Don’t be evil”? Brin: Yes, it’s real. Playboy: Is it a written code? Brin: Yes. We have other rules, too. Page: We allow dogs, for example.
S. Brin and L. Page, founders of Google. Interview in
Playboy Magazine, 2004

68The core of the problem remains: choosing from among the millions of pages containing the words of the query. This is essential, as a user will rarely go further than the first ten or twenty results of the answer. At first, search engines like AltaVista classified pages by using techniques based exclusively on page content, as in the case of traditional digital libraries. A page was considered more interesting if the term appeared in the title, or in bold font. These engines used TF-IDF (Term Frequency–Inverse Document Frequency) statistical measures that evaluate the relative importance of a term in a document within a collection of documents. The more the term is repeated in a document, the more it “weighs”. And, the rarer the term is in the collection of documents, the more it weighs. This kind of technique, which works quite well with small collections of documents, proved rather disappointing for the Web.
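
As an illustration of the principle (a minimal sketch with a made-up three-document collection and a common smoothed variant of the formula; real engines use many refinements), TF-IDF can be computed as follows:

# A term weighs more if it is frequent in the document and rare in the collection.
import math
from collections import Counter

documents = [                                # a tiny, made-up collection
    "humphrey bogart plays in casablanca",
    "the force awakens in star wars",
    "the summer school takes place in cargese",
]
tokenised = [doc.split() for doc in documents]

def tf_idf(term, doc_index):
    counts = Counter(tokenised[doc_index])
    tf = counts[term] / len(tokenised[doc_index])          # frequency in the document
    containing = sum(1 for doc in tokenised if term in doc)
    idf = math.log(len(documents) / (1 + containing))      # rarer in the collection => larger
    return tf * idf

print(tf_idf("casablanca", 0), tf_idf("the", 2))
# the rare term gets a positive weight; the very common term gets none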

  • 21 Jon M. Kleinberg, « Authoritative sources in a hyperlinked environment », Journal of the ACM, vol.  (...)

69The young creators of Google had the idea of basing the ranking of result pages on some collective knowledge implicitly present in the mass of pages. More specifically, they used a classic mathematical technique, the random walk. This idea, inspired by earlier work, notably by Jon Kleinberg21, laid the foundations for Google’s PageRank algorithm and the company’s industrial success, one of the most astonishing of its kind in the history of mankind.

The random walk

70Imagine a "Web surfer”. He or she starts from one page, say the page www.inria.fr. Then, he or she roams around the Web randomly choosing, for each step, one of the links on the page, and clicking on this link. If the page does not have a link, he or she randomly chooses a page anywhere on the Web. And he or she goes on and on, forever. What is, ad infinitum, the probability of this user finding herself on a specific page? This is what is defined as the popularity of this page. Intuitively, if a page is popular (like the page www.lemonde.fr), many pages will reference it and the probability of finding oneself on this page is much greater than that of finding oneself on the page of an unknown blogger (like Alice). This may seem, on the face of it, just an abstract definition, a mathematical concept that is cute but totally useless. But in practice, this popularity actually corresponds quite well to Web users’ expectations.

71This leaves us with the issue of computing this popularity. To do so, we will turn it into an equation. Suppose that we indexed ten billion pages. We number them from 1 to N = 10 billion. Following a classical mathematical approach, imagine that we already know this popularity. We therefore have a vector pop, where for each page i, pop[i] is the popularity of the page. (This is the probability of finding ourselves on this page; note that Σi=1..N pop[i] = 1.) Each page distributes say 90% of its popularity evenly between all the pages towards which it directs users, and the remaining 10% between all the pages indexed. If a page is a dead-end (it does not lead anywhere), it shares all its popularity with the pages that are indexed. Ignoring a few details, this leads us to a matrix Θ that captures these popularity “exchanges”, and to a fixpoint equation:

pop = Θ × pop,

a rather compact notation for a system of ten billion equations with ten billion unknowns. It so happens that the solution to this system is the popularity vector. And bingo! A known technique allows us to compute this solution.

The fixpoint

72In the absence of other information, let us take vector pop0 defined by pop0[i] = 1/N, in other words, all pages are assumed to be equally popular. We define:

pop1 = Θ × pop0 ; pop2 = Θ × pop1 ; pop3 = Θ × pop2

73By continuing this computation, we converge towards a fixpoint, that is the solution to our equation. We have computed the popularity vector! (Since in practice little precision is needed, 6 or 7 iterations are sufficient.)
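
At a toy scale (four pages instead of ten billion, on a made-up graph; my own sketch of the 90%/10% scheme and the dead-end rule described above), the computation looks like this:

N = 4
links = {0: [1, 2], 1: [2], 2: [0], 3: []}   # a tiny Web; page 3 is a dead end

def distribute(pop, damping=0.9):
    new = [0.0] * N
    for page, out in links.items():
        if out:
            for target in range(N):          # 10% shared between all the indexed pages
                new[target] += (1 - damping) * pop[page] / N
            for target in out:               # 90% shared between the pages it links to
                new[target] += damping * pop[page] / len(out)
        else:                                # a dead end shares all its popularity with everyone
            for target in range(N):
                new[target] += pop[page] / N
    return new

pop = [1 / N] * N                            # pop0: all pages assumed equally popular
for _ in range(7):                           # 6 or 7 iterations are enough in practice
    pop = distribute(pop)
print([round(p, 3) for p in pop])            # pages 0 and 2, the most cited, come out on top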

  • 22 The matrix is sparse if most of its coefficients are zero. For a billion pages, if each page has an (...)

74Elementary did you say? Not really. Even if the matrix is very “sparse”22, performing this computation efficiently with such volumes of data requires highly sophisticated algorithms, heavyweight engineering. It may not be mathematics anymore but it is very fine computing.

And to conclude on search engines

  • 23 Google’s current PageRank is said to use dozens of criteria combined into a formula that is kept (...)

75I have presented a very simplified version of what constitutes a search engine. Modern search engines combine TF-IDF and page popularity, as we have just defined it, with many other criteria to choose which pages to rank at the top. Search engines are becoming ever more sophisticated23 to better meet users’ expectations. They are becoming more complicated, be it just to counter attacks by “spamdexers” who cheat to appear higher up in the results. They also raise crucial questions. To mention but a few:

  • Web queries are based on lists of key words, a primitive language with virtually no grammar. Surely, there must be a way of doing better.

  • A measurement system that favours the popularity of pages has the effect of encouraging uniformity, as the popular pages become more and more popular, while the others sink into anonymity. This is certainly debatable, just like the fact that the popularity used by current search engines does not seem to take into account whether the page is cited for its quality (its correctness) or not.

  • Should pages be excluded if they are racist, rude, false (why not?), or to favour a client or avoid upsetting a government (help!)?

  • Finally, there is something incredibly unsettling about the considerable power these search engines carry through their control of information, especially in a context of virtual monopoly (at least in Europe). Should we trust them without understanding the secret behind their ranking? And why such secrecy?

76I was part of the Stanford information system research group in 1995 when two young students, Sergey Brin and Lawrence Page, were there working on the prototype of the Google search engine. I was immediately won over by their ranking based on the popularity of pages. It took me a while, however, to get used to the idea of keeping the index in memory. This technique would have been unrealistic a few years earlier, as it would have required an unthinkable number of very expensive machines. In 1995, managing such an index in memory was beginning to be feasible, given the rapidly decreasing price of computers. This does show that in IT, possibilities are permanently evolving.

  • 24 Serge Abiteboul, Mihai Preda and Grégory Cobena, “Adaptive On-Line Page Importance Computation”, Pr (...)

77Back in France, I developed an algorithm with two students, Mihai Preda and Gregory Cobena, to compute the popularity of pages24. Designing the algorithm, proving that it does compute the fixpoint of the equation, implementing it on a cluster of machines, fixing bugs, optimising the program, experimenting, reaching a billion pages: I had never dealt with such large volumes of data. This was one of my most fantastic experiences as a researcher.

78In the 1990s, several companies shared the search engine market. Users acclaimed the Google engine. This extraordinary success was spurred by exceptional engineering to get thousands of machines to run around the clock, and by revolutionary commercial models and original management techniques based on the cult of creativity. But, for my part, I prefer to remember that in the beginning there was only a fixpoint and a few algorithms.

4. Networks and collective knowledge

To have or not to have a network: that’s the question.
Bruno Latour

79Writing allowed us to partly “externalise” our memory. Printing allowed us to transmit our external memory. The Web has considerably reduced the cost of the transmission of information. Above all, it has allowed individuals to make their own personal contributions to the collective heritage (with a number of reservations, like the digital divide, which I will discuss further on). The passive consumption of information in the early days of the Web has given way to active contributions by an ever increasing number of Internet users. Alice spends her evenings on Facebook chatting with a group of friends while her son plays World of Warcraft with friends from all over the world whom he has never met “for real”. She writes her blog. He tweets all day long.

80The Web is therefore a juxtaposition of billions of individuals and all their networks. After machine networks, after content networks, we are now reaching user networks. Many of the most widespread recent systems are dedicated to intensifying information exchanges between individuals within their own networks, from online gaming to social networking software like Facebook or Google+. The youth has passionately adopted social networks. After a short hesitation, the older generations, who have tons of spare time and perhaps the same desire for social contact, are also enthusiastically jumping in.

81These new systems are no longer concerned with the universality of the Web, but focus instead on individuals and on the more or less well defined groups to which they belong. They redefine the distance between these individuals and offer new forms of proximity. Take someone we do not know. We just need a name, and if their name is too common, a few vague details, for their life to unfold in front of us. Provided this person has some visibility on the Web, they invade our life, with what they post, what is said about them, through their thousand links with others and the traces they leave all over the place. The “target” does not even need to be famous25. We are bathing in what could have been paradise for biographers in the past, or perhaps a nightmare, for there is no room for dreams.

  • 26 Gloria Origgi, “Sagesse en réseaux: la passion d’évaluer”, La Vie des idées, 30 September 2008: htt (...)

82These systems raise a large number of research topics, sometimes at the interface between information technology and sociology. I wish to emphasize a particularly fascinating aspect here: the emergence of collective knowledge26. Several approaches are being used to obtain such knowledge:

  1. Ratings by users, for example of products or companies;

  2. The evaluation of expertise by users;

  3. Recommendations, for example of products;

  4. Collaboration between Internet users to collectively perform a task that is beyond their individual reach;

  5. Crowdsourcing, which puts humans at the service of computing systems.

Ratings

83Internet users are invited to rate other users, services, products, and thereby take part in the construction of collective knowledge. For example, eBay allows buyers to give their opinion on sellers. This creates a fantastic incentive to provide excellent service, at the risk, otherwise, of being poorly rated and losing customers. There is a profusion of systems using the opinions of users, like ViaMichelin for restaurants or AlloCiné for films. Note that, in both cases the critics who previously rated restaurants or films are losing a share of the monopoly. More experimental systems are trying to extract knowledge that is finer than ratings, using text-based opinions. This is where we encounter difficulties in analysing the feelings in a text.

84These rating systems also have their place in the global Web. For example, the bookmarking service Delicious allows users to associate keywords (semantics) with pages. A measurement of popularity, like the one discussed in the previous part, can also be seen as a form of rating: a reference to a page is interpreted as a positive rating, with critiques and praise contributing equally to the popularity of a page. An interesting anecdote: it has been said that a certain service provider deliberately delivered bad service to some of its clients so that they would talk about it on the Web, thereby increasing the company’s popularity and thus its visibility. Even though this unverified information may only be one of the legends of the Web, the fact that popularity ignores the meaning of references is unsettling. By analysing Web links with a richer rating system (one that includes negative ratings), this bias could be corrected.
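
To make this last idea concrete, here is a minimal sketch of a popularity computation over a toy graph of pages in which each link carries a rating of +1 (praise) or -1 (criticism). The graph, the damping value and the clamping to zero are illustrative assumptions; this is of course not the actual PageRank formula.

# Toy signed-popularity computation: each link carries a rating, +1 for praise
# and -1 for criticism, so a page that is mostly criticised loses popularity
# instead of gaining it. Graph, damping factor and clamping are assumptions.
links = {                       # links[page] = list of (target, rating) pairs
    "a": [("b", +1), ("c", +1)],
    "b": [("c", -1)],           # b criticises c
    "c": [("a", +1)],
    "d": [("c", -1), ("a", +1)],
}
pages = sorted(links)
damping = 0.85
score = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):             # a fixed number of propagation rounds
    new = {p: (1 - damping) / len(pages) for p in pages}
    for source, out in links.items():
        share = damping * score[source] / len(out)
        for target, rating in out:
            new[target] += rating * share    # praise adds, criticism subtracts
    score = {p: max(new[p], 0.0) for p in pages}   # keep scores non-negative

for page in sorted(pages, key=score.get, reverse=True):
    print(page, round(score[page], 3))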

The evaluation of expertise

85A crucial technique for evaluating the quality of information is to determine the quality of its source, that is, the general trust we have in the information this source provides. To illustrate this technique, let me mention recent work on corroboration27. Imagine a system where Internet users introduce knowledge. They could be wrong. Yet if they only stated positive knowledge like “Alice owns a Citroën 2CV”, nothing could prevent the system from believing everything Internet users say, including their mistakes. For the system to begin to doubt, users would have to contradict one another and, to do so, post negative information like “Alice does not own a BMW”. Internet users generally do not want to waste time explicitly entering such information, mainly because the list of false statements is far too vast to enumerate. Yet users post negative information without knowing it. For example, “Alice was born in Romorantin” signals that she was not born in Sèvres, due to a “functional dependency”, that is, a law governing the data (here, a law stating that a person cannot be born in two different places).

86In the work mentioned above, we used information including negations arising from functional dependencies. We estimate the veracity of the knowledge, from which we infer each user’s error rate. This provides us with a better estimate of the veracity of the knowledge, and hence with more precise error rates for each user, and so on. We carry on this process until we reach a fixpoint (yet again). This work is a good illustration of how knowledge can be brought to light collectively.
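
The published algorithm is considerably more refined; the following is only a minimal sketch of the fixpoint idea, under simplifying assumptions: three invented users state (entity, attribute) = value facts, and the functional dependency "one value per entity and attribute" turns each claim into an implicit negation of the competing values.

# Minimal sketch of corroboration by fixpoint, with invented users and facts
# (the cited work is considerably more refined).
claims = {
    "u1": {("Alice", "birthplace"): "Romorantin", ("Alice", "car"): "2CV"},
    "u2": {("Alice", "birthplace"): "Sèvres",     ("Alice", "car"): "2CV"},
    "u3": {("Alice", "birthplace"): "Romorantin"},
}

candidates = {}                          # candidate values per (entity, attribute)
for facts in claims.values():
    for key, value in facts.items():
        candidates.setdefault(key, set()).add(value)

trust = {user: 0.8 for user in claims}   # initial trust in each user

for _ in range(20):                      # iterate towards a fixpoint
    # 1. estimate the veracity of each candidate value from the users' trust
    belief = {}
    for key, values in candidates.items():
        weights = {v: 0.0 for v in values}
        for user, facts in claims.items():
            if key in facts:
                weights[facts[key]] += trust[user]
        total = sum(weights.values()) or 1.0
        belief[key] = {v: w / total for v, w in weights.items()}
    # 2. re-estimate each user's trust from the veracity of what they claim
    for user, facts in claims.items():
        trust[user] = sum(belief[k][v] for k, v in facts.items()) / len(facts)

for key, values in belief.items():
    best = max(values, key=values.get)
    print(key, "->", best, round(values[best], 2))
print({user: round(t, 2) for user, t in trust.items()})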

87Like ratings, the evaluation of expertise has its place on the Web. This particularly applies to information posted by the press. Blogs, for example Maître Eolas’ blog for legal matters, are now considered authoritative sources. Ordinary Internet users are increasingly replacing journalists, as we saw recently in Tunisia and Syria. This makes the need to cross-check and verify information all the more crucial. We can imagine that, tomorrow, programs will help establish reputations as sources of information in the dizzying space-time of the Web.

Recommendations

88A system like Meetic uses the data provided by its clients to pair them up and to organise dates. A system like Netflix recommends films. To do so, these systems typically perform statistical analyses within the very general and classic framework of data mining. They try to identify “proximities”: between clients for Meetic, or between clients and products for Netflix. They can group people together because they share the same tastes even if they have never met, or discover unexpected relationships between products. The oft-cited example is that people who purchase diapers statistically buy much more beer than others do. Consumer and product classifications therefore enrich each other, and thereby contribute to establishing new links between individuals and products.

89These types of analysis are performed on a very large scale, for example by Amazon or Google. They often still rest on weak mathematical foundations, and their results are rarely satisfying. Performing high-quality statistical analyses on ever-increasing volumes of data is one of the challenges faced by the field of information management.
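
As an illustration of the idea of "proximity", here is a minimal sketch of a neighbourhood-based recommendation using cosine similarity between clients' rating vectors; the data are invented, and real systems rely on far more elaborate statistical models.

from math import sqrt

# Toy neighbourhood-based recommendation over invented ratings.
ratings = {
    "alice": {"Manhattan": 5, "Psycho": 4, "Vertigo": 5},
    "bob":   {"Manhattan": 4, "Vertigo": 5, "Star Wars": 2},
    "carol": {"Star Wars": 5, "Avatar": 4},
}

def cosine(r1, r2):
    """Cosine similarity between two clients' rating vectors."""
    common = set(r1) & set(r2)
    if not common:
        return 0.0
    dot = sum(r1[f] * r2[f] for f in common)
    n1 = sqrt(sum(v * v for v in r1.values()))
    n2 = sqrt(sum(v * v for v in r2.values()))
    return dot / (n1 * n2)

def recommend(user, k=1):
    """Suggest films seen by the user's k nearest neighbours but not by the user."""
    others = [u for u in ratings if u != user]
    others.sort(key=lambda u: cosine(ratings[user], ratings[u]), reverse=True)
    seen = set(ratings[user])
    suggestions = {}
    for neighbour in others[:k]:
        for film, score in ratings[neighbour].items():
            if film not in seen:
                suggestions[film] = max(suggestions.get(film, 0), score)
    return sorted(suggestions, key=suggestions.get, reverse=True)

print(recommend("alice"))   # e.g. ['Star Wars'] with this toy data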

Collaboration

90Wikipedia is a fine example of cooperative editing. A large number of Internet users collaborate to develop an encyclopedia. Anyone can participate. One can easily imagine the cacophony resulting from incompetence, disagreements, and personal interests. It seems like an impossible task. Yet, while the quality of its content is sometimes debated, it is fascinating to witness the considerable role Wikipedia has taken on in the dissemination of knowledge28. By calling on a multitude of authors, it has managed to surpass the classic notion of the encyclopedia and to give it a much broader scope. One can find anything and everything in Wikipedia, from the biography of a Koh-Lanta29 heroine, Clémence Castel, to the proof of the pumping lemma, a fundamental result in language theory. Mistakes abound... Then again, they also exist in traditional encyclopedias.

91Wikipedia is far from being the only example of such collaboration. Equally surprising is the success of the software produced by communities of developers in the world of free software, like the Linux operating system. And we are beginning to see communities form to build bodies of open data, like the linked data of the W3C (World Wide Web Consortium).

Crowdsourcing

92With crowdsourcing, the idea is to post on the Web problems that programs do not know how to solve or that humans can solve at a lower cost than machines. Web users then offer answers, typically in exchange for payment. Systems like Amazon’s Mechanical Turk30 enable this kind of contact. The crowd’s competences have been used, for example, to search for one of the most famous researchers in the field of databases, Jim Gray, who disappeared with his yacht off the coast of the Farallon Islands. Users had to examine satellite photographs to find clues. They were unsuccessful, but in another case, using the video game Foldit, Internet users managed to decode the structure of an enzyme similar to that of the Aids virus31. They accomplished what had eluded experts and computers: an understanding of how this enzyme folds itself up in three-dimensional space to build its structure. Here, gaming joined forces with the network, in the purest spirit of social networking.

93The originality of these systems lies in the fact that individuals find themselves at the service of a computer system, which uses them, for example, to complement its knowledge base or to resolve contradictions within that base.

The power of the masses of Internet users

  • 32 "The masses are the real heroes."

群众是真正的英雄32.
Mao Tse-Tung

94These approaches generally make it possible to solve complex analysis problems that require a large number of people and huge volumes of information. The evaluation of “quality” is at the heart of the issue: the quality of information, the quality of a source (an Internet user, a service). And, what is more, individuals are at the centre of the system, passively, for example through their profile, or actively, for example by stating what they know, what they believe, and what they like.

95Faced with systems seeking to build collective knowledge, Internet users are generally unaware of the data that has been used to answer their queries and do not understand how the result has been obtained. They may therefore find the information surprising, magical, worrying. The difficulty of explaining results is a serious weakness in the approaches we have just discussed, which limits their usefulness.

96Another serious problem these approaches face relates to breaches of information confidentiality. To offer better services, these systems must gather as much information about their clients as possible. A social network like Facebook, for example, builds a knowledge base for each of its customers. Internet users are increasingly asked to provide information to benefit from free services. These systems go so far as to exchange information about their respective clients: again, supposedly, to serve them better. This leads to conflicts of interest. A social network system must choose between the need to protect its clients’ data (at the risk, otherwise, of losing them) and its natural appetite for confidential data. As for Internet users, they would like the data concerning them to remain as confidential as possible, but they are also fond of highly personalised services.

97To conclude this part, let us temporarily forget about these problems and marvel at the algorithms that, from the information available on the Web, bring to light knowledge whose existence we could never have imagined. This takes us towards an older domain that, with the Web, has undergone a revival: knowledge management. This is the topic of the next part.

5. The Web of knowledge

  • 33 "But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that t (...)

וּמֵעֵץ הַדַּעַת טֹוב וָרָע לֹא תֹאכַל מִמֶּנּוּ כִּי בְּיֹום אֲכָלְךָ מִמֶּנּוּ מֹות תָּמוּת33

98The domain of knowledge bases existed long before the Web was born. But while databases were already a flourishing industry, knowledge bases were struggling to find their place in the sun. With the Web, they are now finding it.

99The Web of documents is founded on the premise that people like to write, read, speak, and listen to text in their natural language. Nowadays, users mainly communicate with each other using text. Why and how can we shift to a Web of knowledge? And, first of all, what is it?

The semantic Web

100In its most homeopathic form, its purpose is to explain the meaning of textual documents on the Web, of the elements they are composed of, or, as we will see later on, of the information services available on the Web (Web services). This can be done by publishing metadata, that is, data that explain data. For example, for the document you are currently reading, we could publish:

author = Serge Abiteboul
nature = inaugural lecture
institution = Collège de France
date = March 2012
language = English (translation from French)

101Within documents, semantic labels can also be attached to fragments of text to explain them. For example, attached to the character string Woody Allen, the label dbpedia:Woody_Allen signals that this person is referenced in dbpedia, a frequently used knowledge base. We can find out from this ontology that this is the famous film director who made Manhattan.

102Knowledge bases like dbpedia are called ontologies. Put simply, an ontology is composed of statements like the following:

  1. classes Person, Director, Film

  2. Director sub-class of Person

  3. Film-maker synonym of Director

  4. dbpedia:Woody_Allen is a Director

  5. relation has_directed

  6. dbpedia:Woody_Allen has_directed film:Manhattan

These statements specify classes of objects (1), inclusions or equivalences between classes (2, 3), an object’s membership of a class (4), relations between objects (5), and instances of these relations (6).

103Using raw text discovered on the Web without any explanation is similar to using the results of a scientific experiment without knowing the conditions in which the experiment was performed, its units of measurement, etc. Labels introduced into a text, based on ontologies, specify the meaning of that text and enrich it by adding semantics. For example, the label dbpedia:Woody_Allen, attached to a sentence, indicates that the sentence is about Woody Allen, a director, a film maker, a person, and not the musician Allen Woody. And this sentence becomes an answer to a question phrased with the keywords “film maker Woody Allen Manhattan” even if it contains neither the term film maker nor the word Manhattan. On the other hand, a sentence about Allen Woody’s visit to Manhattan (specifying that this is the musician dbpedia:Allen_Woody) would not be included as an answer. Ontologies therefore make it possible to provide more refined answers to queries.
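
The following sketch illustrates, on the ontology fragment above, how such labels can refine keyword search: the subclass and synonym statements are used to compute everything a label entails, so that a sentence labelled dbpedia:Woody_Allen answers the query even though the words “film maker” and “Manhattan” do not appear in it. The ontology here is hand-written for the example; it is not the real dbpedia, which is queried differently.

# Toy sketch: deciding whether a labelled sentence answers a keyword query,
# using the hand-written ontology fragment of the example (not real dbpedia).
subclass_of = {"Director": "Person"}        # (2) Director sub-class of Person
synonyms = {"Film-maker": "Director"}       # (3) Film-maker synonym of Director
is_a = {"dbpedia:Woody_Allen": "Director",  # (4)
        "dbpedia:Allen_Woody": "Musician"}
relations = {("dbpedia:Woody_Allen", "has_directed"): {"film:Manhattan"}}   # (6)

def classes_of(entity):
    """All classes an entity belongs to, closed under subclass and synonym."""
    result = set()
    c = is_a.get(entity)
    while c:
        result.add(c)
        result |= {s for s, t in synonyms.items() if t == c}
        c = subclass_of.get(c)
    return result

def answers(label, keywords):
    """Does a sentence carrying this label answer the keyword query?"""
    known = {c.replace("-", " ").lower() for c in classes_of(label)}
    known.add(label.split(":")[-1].replace("_", " ").lower())
    known |= {o.split(":")[-1].lower()
              for (s, r), objects in relations.items() if s == label
              for o in objects}
    return all(any(keyword in item for item in known) for keyword in keywords)

query = ["film maker", "woody allen", "manhattan"]
print(answers("dbpedia:Woody_Allen", query))   # True: the label entails all three
print(answers("dbpedia:Allen_Woody", query))   # False: the musician does not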

104On the Web, anyone can publish an ontology. Experts use specific terminologies depending on their language, their domain, their culture, etc., in the best tradition of the Tower of Babel. While this diversity is a rich asset, it also complicates the search for knowledge. The same information can be represented in multiple ways. Moreover, since we are on the Web, we are bound to find masses of erroneous facts. What is even more complicated to manage is the fact that sites can publish rules that jeopardise our own knowledge. For example, what are we going to do if someone states “Person is a synonym of Film”? Although this cannot be prohibited, we do need to make sure that it does not pollute our reasoning.

105This leads to a whole host of fascinating problems: how does one use ontologies to answer Internet users’ questions more adequately? How does one “align” ontologies, in other words, how does one establish links between the concepts and relationships of two ontologies in order to “integrate” information from two independent sources? How does one handle inconsistency? How does one evaluate the quality of knowledge?

The acquisition of knowledge

106Now that we understand the value of having knowledge as well as text, the difficult question becomes: how do we acquire knowledge? Expert chemists will, for example, “enter” knowledge about the molecules they are studying into a knowledge base (using an editor). They have an objective reason to do so: the advancement of science. And nowadays this kind of publication in databases contributes to scientific visibility in the same way as publications in scientific journals do. But these very individuals who like to publish on the Web in their natural language are averse to the constraints of a knowledge editor. Cases of Internet users who enter knowledge into a system voluntarily and free of charge remain rare and, most of the time, knowledge base building is handled by software.

107Take for example the Yago knowledge base, developed using the English version of the Wikipedia encyclopedia that I mentioned earlier. Wikipedia was initially a collection of texts. To improve its precision, its editors encourage the introduction of knowledge fragments. (Go to the Woody Allen page on Wikipedia and click on the “Edit” tab for evidence of this.) It was therefore an excellent starting point for developing a “real” knowledge base. This base, called Yago, was built using software developed at the Max Planck Institute34. In 2011, Yago already had 2 million entities and 20 million relationships between these entities.

108While the Web is still largely dominated by HTML and text, the knowledge bases of tomorrow are already being built using the enormous resource that the mass of text-based documents constitutes. The aim is essentially to understand the texts and to “extract” knowledge from them. This is a complex task, as it involves understanding language. Knowledge extractors make mistakes, and they can hardly be blamed: they work from texts that abound with imprecisions, mistakes, and facts like “Jerusalem is the capital of Israel” that may be controversial. The integration of knowledge from several sources is also tricky, as is the verification of the knowledge obtained. All this involves a range of complex techniques, including the corroboration and crowdsourcing techniques discussed earlier.
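
To give a flavour of what extraction involves, here is a toy pattern-based extractor; real extractors, such as the one behind Yago, rely on far richer linguistic and statistical machinery, and the sentences and patterns below are invented for the example.

import re

# Toy knowledge extraction from raw text using hand-written patterns.
text = """Woody Allen directed Manhattan in 1979.
Alfred Hitchcock directed Psycho.
Jerusalem is the capital of Israel."""

patterns = [
    # (regular expression, relation name)
    (re.compile(r"^(?P<s>[A-Z][\w ]+?) directed (?P<o>[A-Z][\w ]+?)(?: in \d{4})?\.$"),
     "has_directed"),
    (re.compile(r"^(?P<s>[A-Z][\w ]+?) is the capital of (?P<o>[A-Z][\w ]+?)\.$"),
     "capital_of"),
]

facts = []
for line in text.splitlines():
    for pattern, relation in patterns:
        match = pattern.match(line.strip())
        if match:
            facts.append((match.group("s"), relation, match.group("o")))

for fact in facts:
    print(fact)   # ('Woody Allen', 'has_directed', 'Manhattan'), ...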

109What about tomorrow? Apart from text-based documents, we should expect to see the proliferation of millions of data and knowledge bases, of all sizes and kinds, with different levels of quality, and links between them. The problem will perhaps have changed but the fundamental questions will remain: where to find specific information and what source is reliable?

Web services

110The publication of knowledge makes it possible to answer queries better. It especially allows machines to use the Web. Take the following very simple query: “Who directed Manhattan?” A human user would have no trouble finding the right answer on the Web, for example by using IMDb. It would be more complicated for a machine to do so. On the other hand, a computer system could communicate with other systems and understand answers like:

(dbpedia:Woody_Allen, has_directed, film:Manhattan).

111Web services are systems that are connected to the Internet and communicate with other systems by exchanging structured data according to Web protocols.

112Underlying all this, there are standards. An anecdote will help highlight their value. Some of my students and I wanted to use a document classification program developed by colleagues. To run this software, we first had to install several program libraries, some of which were incompatible with our development environment – the usual nightmare of software installation. Fortunately, someone had the idea (not so obvious in the early 1990s) of using the classification program as a Web service. Our colleagues installed their software on a machine connected to the network and, a few moments later, we could use the service. Without the Web standards, it would probably have taken us days of frustrating and unproductive work.

113But let us return to our cinema enthusiast. He or she uses a service, say FMTF, for “FindMeTheFilm”. Our cinema enthusiast states (using ontologies) what he or she wants: to see the film Manhattan. FMTF looks for this film in video-on-demand sites using their service descriptions (also based on ontologies). FMTF compares the prices and offers of each service, taking into account the person’s subscriptions, his or her preferences, etc. To perform this task, FMTF collaborates with other services and exchanges data and knowledge with them. Finally, FMTF can choose a provider and stream the film to the home TV set. The Web, which was originally developed to serve human beings, is thus now at the service of Web services, and Web services at the service of all.
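
A minimal sketch of the FMTF idea follows; the two "providers" are ordinary local functions standing in for remote Web services, and all names, catalogues, prices and subscriptions are invented. In a real deployment, each exchange would carry structured data (XML or JSON) over standard Web protocols.

# Toy "FindMeTheFilm"-style composition. The providers below are local stubs
# standing in for remote Web services; names, catalogues and prices are invented.
def provider_a(film_title):
    catalogue = {"film:Manhattan": 3.99, "film:Psycho": 2.99}
    if film_title in catalogue:
        return {"provider": "A", "film": film_title, "price": catalogue[film_title]}
    return None

def provider_b(film_title):
    catalogue = {"film:Manhattan": 2.49}
    if film_title in catalogue:
        return {"provider": "B", "film": film_title, "price": catalogue[film_title]}
    return None

def find_me_the_film(film_title, subscriptions=()):
    """Query every provider, apply the client's subscriptions, keep the cheapest."""
    offers = [offer for service in (provider_a, provider_b)
              if (offer := service(film_title)) is not None]
    for offer in offers:
        if offer["provider"] in subscriptions:
            offer["price"] = 0.0          # already covered by a subscription
    return min(offers, key=lambda o: o["price"], default=None)

print(find_me_the_film("film:Manhattan", subscriptions=("A",)))
# -> {'provider': 'A', 'film': 'film:Manhattan', 'price': 0.0}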

Inference

114Understanding the meaning of data and answering queries more precisely: these are examples of the advantages offered by knowledge bases. But, from a technical point of view, the most fascinating aspect is the possibility of drawing on logic to automatically infer new knowledge. To explain this, let us re-examine the notion of fact. Until now we have dealt with extensional facts, like Screening (Star Wars, Sel, 22:15), which correspond to n-tuples stored in the database. Databases are therefore the custodians of all the extensional facts in the world. Let us now introduce knowledge in the form of laws (rules) like:

WishToSee( Alice, t ) ← Film( t, Hitchcock, x ), not Seen( Alice, t )

which can be read as “if t is the title of a Hitchcock film, x an actor in this film, and if Alice has not seen this film, then she would like to see it”. Based on such rules, and facts like “Psycho is a Hitchcock film” and “Alice has not seen it”, we can infer a fact like “Alice would like to see the film Psycho”, a fact which is not stored in any database. This is an intensional fact. With this type of very simple rule, software is able to reason.
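
As a minimal sketch, the rule above can be evaluated bottom-up over a handful of invented extensional facts; the algorithms mentioned in the next paragraph are far more refined, since they avoid inferring facts that are useless for the query at hand.

# Naive bottom-up evaluation of the rule above, over invented extensional facts:
# Film(title, director, actor) and Seen(person, title).
film = {("Psycho", "Hitchcock", "Perkins"),
        ("Vertigo", "Hitchcock", "Stewart"),
        ("Manhattan", "Allen", "Keaton")}
seen = {("Alice", "Vertigo")}

def wish_to_see(person):
    """WishToSee(person, t) <- Film(t, Hitchcock, x), not Seen(person, t)."""
    inferred = set()
    for title, director, actor in film:
        if director == "Hitchcock" and (person, title) not in seen:
            inferred.add((person, title))
    return inferred

print(wish_to_see("Alice"))   # {('Alice', 'Psycho')}: an intensional fact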

115Observe that answering a query has become more complicated. It now requires inferring facts that allow other facts to be inferred, and so on. Obviously, one should avoid inferring all possible facts, for this would require too much time and too much storage space. Indeed, some of the finest algorithms in the field, inspired by logic programming, make it possible to avoid inferring unnecessary facts35. There will not be time to describe these algorithms in this lecture.

Think global

116Inference is essential for the Web of knowledge in the making, chiefly to answer queries more adequately or to integrate information from heterogeneous sources. We can imagine a future where millions, billions of systems exchange and infer knowledge. But we should not get carried away; this is not very complicated reasoning, unlike the reasoning used in mathematical proofs, for example. Yet it leads to immense technical challenges: how to reason with such volumes of knowledge? How not to be overwhelmed by the facts that are inferred? How to guarantee the quality of information? Its confidentiality? How to explain the facts that are obtained?

117Our environment is going to change. We are going to have to learn to live in a world where we are surrounded by systems that reason, exchange knowledge, and interact with us. How is this going to affect our very way of knowing, of thinking?

Conclusion

Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?
T.S. Eliot

118The shift from concrete goods to relatively intangible digital information highlights a fundamental aspect of computer science: computer science is a science of the intangible. This differentiates computer science from material sciences like physics, chemistry, and the Earth and life sciences, in the techniques and often the mathematics that are employed. It gives the IT industry its own set of characteristics, in the manufacturing, distribution and maintenance of products, and in its business models. We have dealt with this intangibility on virtually every page of this lecture.

119The Web is multi-faceted. It lives on an Internet that we would like to be as neutral36 as possible. It is omnipresent. It has become virtually impossible to live without: to find work, to work, to find a home, to manage one’s bank accounts, to be part of a society, even to have friends. Many of us share the same nostalgia for the romantic, idealist, anarchist, and anarchic world of the early open Web. The Web is inexorably moving towards more enclosed spaces37, primarily under the pressure of the monetisation of content. It is both the most beautiful lacework, the fabric of all human knowledge, and the fertiliser of the most horrible fantasies, of all violence. It is also the world of an arrogant growth of imprecision and incoherence that is drowning the pearls of humanity, and of an unlikely alchemy that transforms mass into quality.

120What we have learnt from the Web in recent years is that, beyond being a global collection of documents, it offers an infinite range of applications to invent. We saw the arrival of the Web of smartphones, which many of us eagerly adopted while also worrying about their anxiety-inducing effects. Even though this new Web shares IT protocols with the classic Web, the new world is often at odds with the original philosophy of the “uncontrolled, free, and universal” Web, as paid applications become the norm. I have spoken about the Web of social networks and the semantic Web. Had I had more time, I would also have discussed the Web of objects and ambient intelligence, which has transformed business with RFID (Radio Frequency IDentification) and which, we are promised, will “revolutionise” our homes. And we are witnessing the amazing success of the Web of virtual worlds, primarily with video games.

121While I have tried to avoid a blissfully optimistic presentation of data management technology, I have strongly emphasised technological successes in this text. I will now briefly mention certain pitfalls and try to highlight the research questions they open up.

Avoiding drowning in an ocean of data

122This has been one of the recurrent themes of this lecture. One of the great challenges for the years to come is to develop technologies that make it possible to find, evaluate, validate, verify, and rank information to help Internet users obtain “the right information, at the right time”. This involves continuing research in domains such as the evaluation of reputation, recommendation, or personalisation.

Access to information for all

123Digital divides exist. The generational divide, roughly speaking, between those born before and after the Internet, is starting to disappear with objects like the iPad. The urban/rural divide could easily disappear with some political will, as rural populations adopt these new technologies with at least as much appetite as their urban counterparts. The social38 and North-South divides are more worrying. Information technology can help reduce them with software that is increasingly easy to use, especially free software. But the issue is first and foremost one of education. In France, computer science education is improving, but there is still a long way to go. The free neighbourhood library also needs to give way to the free and universal digital library of the Web. The utopia is now within reach: access, for all, to all culture and knowledge.

Democracy or not

124The Web and IT can help governments police their citizens, or even oppress them. They can also be used to establish a democracy of counter-powers, with activist networks that control, monitor, denounce, and rate public authorities, thereby contributing to a better functioning of democracy. The choices are mainly political, but scientists have a role to play in the establishment of these counter-powers. This particularly relates to the development of technologies that make it possible to keep powerful forces in check: states and multinationals.

What about private life?

125We are becoming increasingly aware of the risks we run by spreading information on the Web that we would like to keep confidential. One of the most acute risks is perhaps identity theft. It is the role of science to develop tools with which we can regain control over our information, with the support of laws protecting personal data. Of course, it is up to our governments to set regulations, but it is important for us to also agree on an ethics of privacy protection.

Better or worse individuals?

126Is information technology making us happier? Smarter? More productive? Could reducing the distance between some increase the distance between others, at the risk of confining individuals to alienating communities? In contact with all this virtual reality, do we run the risk of losing all contact with “real” life? Is an encounter less real on the Web than at the pub down the street? And perhaps the mother of all questions: are we going to use these tools to avoid having to think39 or, on the contrary, to think better and be more creative?

127The answers to these questions depend significantly on the new IT tools that remain to be invented, with the concern, now more than ever, of better serving users and, why not, of making them better people. From a technical point of view, one of the challenges is to be able to offer individuals all the advantages of the Web’s most advanced systems, namely social networks or recommendation systems, without these individuals having to lose control of the information concerning them, as is currently all too often the case. Another challenge is to improve the collective production of knowledge. We also need to be able to use all this knowledge in our decision-making more effectively, by integrating it better into the software tools we use on a daily basis, like phones, email, or electronic diaries.

And tomorrow?

Prediction is very difficult, especially about the future.
Niels Bohr

128Under the pressure of very dynamic start-ups and young giants like Facebook or Google, Web technologies have developed very quickly. As is often the case in information technology, quick-and-dirty solutions were cobbled together. While the domain of data management is currently dazzlingly dynamic, it is still virgin territory when it comes to the Web: it is not easy to draw up a state of the art or to teach Web data management; it is not easy to predict which trends will be long-lasting. The logical foundations, which were the crown jewel of the relational model, are still somewhat of a mess when it comes to the Web. A global solution remains to be invented. Ties with logic, complexity theory, and language and automata theory need to be revisited. New theories most likely need to be developed. The systems we use need to be improved; new functionalities need to be invented. What a programme!

129It is neither possible nor desirable to write the Web off, just as it was not possible to reject writing or printing. And despite all the pitfalls of the Web, I choose to continue believing that it will contribute to bringing about a better future. As for more technical aspects, I tentatively predict that the next stage for the data sciences has already begun: it is the Web of knowledge. It has been announced several times already. It is moving slowly, but it really is on its way.

130From data to information, and from information to knowledge, is a natural evolution.

Acknowledgements: I would like to thank the Collège de France, INRIA as well as the European Research Council, via the Webdam project on the “Foundations of Web Data Management”. I would also like to thank Martín Abadi, Jérémie Abiteboul, Manon Abiteboul, Gilles Dowek, Emmanuelle Fleury, Laurent Fribourg, Sophie Gamerman, Bernadette Goldstein, Florence Hachez-Leroy, Tova Milo, Alkis Polyzotis, Marie-Christine Rousset, Luc Segoufin, Pierre Senellart, Julia Stoyanovich, and Victor Vianu for their comments on this text.

Appendices

More information about the professor and the inaugural lecture’s video: http://www.college-de-france.fr/site/en-serge-abiteboul/

Notes

1 Gérard Berry, Pourquoi et comment le monde devient numérique, Collège de France / Fayard, coll. « Leçons inaugurales », no 197, 2008.

2 Gérard Berry, Penser, Modéliser et maîtriser le calcul informatique, Collège de France / Fayard, coll. « Leçons inaugurales », no 208, 2010.
Martin Abadi, La Sécurité informatique, Collège de France / Fayard, coll. « Leçons inaugurales », no 219, 2011.

3 “Natural languages” refer to languages elaborated over time by groups of speakers, like French or English. This is not so much in opposition with “constructed” languages like Esperanto, as with formal languages like first order logic, SQL or Java.

4 Giving a precise definition of these notions is not an easy feat. See for example: Luciano Floridi, The Philosophy of Information, Oxford University Press, 2011.

5 Its data persist after the computer has been switched off.

6 The following trends have been observed until now. Regarding storage capacities, the hard drives’ memory density roughly doubles every year (Kryder’s law). As for circuits, the transistor density on a silicon chip roughly doubles every two years (Moore’s law).

7 http://michaelbrodie.com.

8 Serge Abiteboul, Richard Hull and Victor Vianu, Foundations of Databases, Addison-Wesley, 1995: http://webdam.inria.fr/Alice.
Michael Benedikt and Pierre Senellart, “Databases”, in E. K. Blum and A. V. Aho (eds), Computer Science. The Hardware, Software and Heart of It, Springer-Verlag, 2012, p. 169-229, doi: 10.1007/978-1-4614-1168-0_10.

9 SQL goes further than relational calculus. For example, it allows for results to be sorted and for simple functions like sums or averages to be applied.

10 For these “weak” complexities, the precise model of computation is important. This result is for computations on RAM machines.

11 One example of a difficult problem in NP is that of the travelling salesman: given cities, routes between these cities, and the lengths of these routes, how can we find the shortest route that links up all these cities?

12 Since there is a finite number of possible states, it is possible to detect if the program has entered a loop, but at the cost of additional work.

13 Serge Abiteboul and Victor Vianu, “Generic Computation and its Complexity”, Proceedings of the 23rd annual ACM symposium on theory of computing, New York, ACM, 1991, p. 209-219, doi: 10.1145/103418.103444.

14 In our discussion, we assume that the domain is not ordered. The problem is different if the domain is ordered: Vardi has shown that fixpoint then expresses exactly the queries in P, and that while expresses exactly the queries in PSPACE.

15 The applications running on the relational system contain bugs. The system itself contains its own bugs. Lastly, the hardware can fail.

16 A cluster of servers consists of a group of computers, called nodes, which collaborate to solve a particular problem.

17 Sergey Brin and Lawrence Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, Proceedings of the 7th International Conference on World Wide Web, Amsterdam, Elsevier, 1998; Computer Networks and ISDN Systems, vol. 30, no 1-7, 1998, p. 107-117, doi: 10.1016/S0169-7552(98)00110-X.

18 Serge Abiteboul, Ioana Manolescu, Philippe Rigaux, Marie-Christine Rousset and Pierre Senellart, Web Data Management, Cambridge University Press, 2011: http://webdam.inria.fr/Jorge.

19 Google calls its data centres farms. The number of farms and processors in each farm are kept secret. We are talking here about dozens of farms, and sources in the early 2000s claimed the largest farm held 6,000 processors.

20 This problem is part of the AC0 complexity class, which is the class of problems that can be solved with circuits of constant depth using a number of AND and OR gates polynomial in the size of the input. It turns out that the evaluation of relational algebra queries is entirely in AC0.

21 Jon M. Kleinberg, “Authoritative Sources in a Hyperlinked Environment”, Journal of the ACM, vol. 46, no 5, 1999, p. 604-632, doi: 10.1145/324133.324140.

22 The matrix is sparse if most of its coefficients are zero. For a billion pages, if each page has an average of 30 links, the matrix has about 30 billion non-empty entries out of a billion billion entries. It is very sparse. But even in an optimised representation, it remains gigantic.

23 Google’s current PageRank is said to use dozens of characters combined into a formula that is kept secret.

24 Serge Abiteboul, Mihai Preda and Grégory Cobena, “Adaptive On-Line Page Importance Computation”, Proceedings of the 12th International Conference on World Wide Web, New York, ACM, 2003, doi: 10.1145/775152.775192.

25 Raphaël Meltz, “Marc L., Genèse d’un buzz médiatique”, Le Tigre, no 31, March-April 2009, p. 12-16. See also: http://www.le-tigre.net/Marc-L.html.

26 Gloria Origgi, “Sagesse en réseaux: la passion d’évaluer”, La Vie des idées, 30 September 2008: http://www.laviedesidees.fr/Sagesse-en-reseaux-la-passion-d.html.

27 Alban Galland, Serge Abiteboul, Amélie Marian and Pierre Senellart, “Corroborating information from disagreeing views”, Proceedings of the 3rd ACM International Conference on Web Search and Data Mining, New York, ACM, 2010, p. 131-140, doi: 10.1145/1718487.1718504.

28 Wikipedia exists in 281 editions and its English version had over 3 million articles in June 2011 (source: Wikipedia).

29 Koh-Lanta is a French adaptation of Survivor.

30 Reference to the "Mechanical Turk", an automaton chess player from the end of the 18th century, later exposed as a hoax.

31 Seth Cooper et al., “Predicting Protein Structures with a Multiplayer Online Game”, Nature, vol. 466, 2010, p. 756-760, doi: 10.1038/nature09304.

32 "The masses are the real heroes."

33 "But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die.” Genesis 2:17.

34 Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich and Gerhard Weikum, YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia, Max-Planck-Institut für Informatik, November 2010: www.mpi-inf.mpg.de/yago-naga/yago.

35 Laurent Vieille, “Recursive Axioms in Deductive Databases. The Query/Subquery Approach”, Expert Database Conference, 1986, p. 253-267.

36 Neutrality is a principle that guarantees equal treatment of all data streams on the Internet. This principle excludes all discrimination regarding the source, destination and content of the information transmitted across the network (source: Wikipédia).

37 Chris Anderson and Michael Wolff, “The Web is dead. Long live the Internet”, Wired, September 2010: www.wired.com.

38 In France, in 2009, 40 % of the population never used information technology (source: CREDOC).

39 Nicholas Carr, “Is Google Making Us Stupid?”, The Atlantic, July/August 2008: http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/6868/

List of figures

Figure 2. The evaluation of an algebraic query
URL: http://books.openedition.org/cdf/docannexe/image/560/img-1.png

The text and other elements (illustrations, imported files) are available under the OpenEdition Books License, unless otherwise stated.
