The Informatics of Time and Events
Inaugural Lecture delivered on Thursday 28 March 2013
1Mr Administrator,
Dear colleagues,
Ladies and Gentlemen,
2It is a great honour for me to inaugurate the first Chair devoted to informatics at the Collège de France, after introducing this science within the Liliane Bettencourt Annual Chair of Technological Innovation in 2007/2008 and inaugurating the Annual Chair of Information Technology and Digital Sciences in 2009/2010. I would like to thank all those who contributed to the creation of this Chair, which is of prime importance for the entire French informatics community. I would particularly like to thank Pierre Corvol, Administrator of the Collège de France until September 2012, Pierre-Louis Lions, who promoted and supported this creation, the directors of INRIA for their constant involvement, and our Administrator Serge Haroche for his warm introduction.
Introduction
Informatics and digital technology: where is the link?
3Informatics is profoundly changing the global scene through its immense variety of applications, stretching from scientific computing and company management in the pioneering years to the computerized design of everyday objects, the simulation and operation of all means of transport and industrial processes, radical changes in communication, knowledge access and leisure activities, and now medicine and all the sciences – all coordinated via the Internet.1 We are now at the dawn of a new, equally important age: the massive computerization of objects equipped with complex electronic systems on a single chip, run by elaborate software programs and connected through the Internet. This embedded computing has been at the core of my academic and industrial work since the mid-eighties, and a large part of my lectures will be devoted to this topic.
4Informatics and its impacts remain surprisingly little known in France, be it among the general public, decision makers or in the education system. Things nevertheless seem to be changing, as the government’s recent white paper on digital technology attests.2 I would like to explain the difference between the word digital, whose use has spread recently, and the word informatics. All activities relying on the digitization of data can be called digital: digital photography, digital communication, digital economy, digital art, etc. But the core of every digital activity remains computer science and technology, which define, study, and implement the automatic computation of digitized information. The fine French word informatique is in fact winning over the rest of the world, with digital and informatics increasingly being used in the same way as numérique and informatique. I shall therefore always use these words with the abovementioned meanings. My teaching will be devoted to informatics, and essentially to its applications in an increasingly diverse digital world.
Teaching informatics
5The status of informatics in French general education remains precarious, in spite of recent progress in the field. While it was first introduced at high-school level in the early eighties as computer programming education, it was removed in 1991, and again in 1999, after having briefly been reintroduced in 1995. Its teaching was then replaced with training limited to the use of software, networks, and basic computerized systems of the digital world.3 Although this training was useful, it actually had very little to do with genuine informatics education. If we are being honest, it had the effect of teaching people how to use systems and software designed and manufactured elsewhere, mainly in the United States and Asia. Yet citizens and decision makers alike are finding themselves increasingly powerless in the face of trends in the digital world, as they lack the means to properly understand them. Education limited to usage, which is inherently always at least one step behind, will have no impact on this situation. The real challenge for any country is to be part of the creators of tomorrow’s world, and therefore of those who will design and manufacture the hardware and software that will shape the world. The education system’s goal should be to teach the world of tomorrow, not yesterday’s world – except, of course, in history classes.
6In order to understand what drives the evolution of the digital world and how to take part in it actively, it is crucial to understand what informatics is, both as a science and as a technology, with its specific ways of thinking and doing. Just as getting a driving licence does not teach one anything about mechanics and physics, learning to use a computer does not teach one anything about informatics. This question was identified and resolved for physics, which has been taught for a long time, and for biology, which has been integrated into natural science education, but not for informatics. Is it because France was not one of its inventors, was the only major country that did not build a computer after the Second World War, and seems never really to have believed in the importance of the field since then?
7Fortunately, this is not a deadlock, and significant progress was recently made. As a result of fruitful collaboration between researchers, teachers, and education inspectors, initiated in 2008 – in which I had the honour of participating –, the last year of teaching for the Scientific Baccalaureate saw the creation of a specialization called “Informatics and Digital Sciences” (Informatique et sciences numériques, ISN). This teaching was set up for some 10,000 students at the beginning of the 2012/2013 academic year, and the decision has been made to extend it to the other Baccalaureate final years. New, well-designed informatics education is also going to be introduced in the preparatory classes for the scientific grandes écoles.
8At the same time, several reports have denounced the shortcomings of informatics education in several European countries, including a report I had the honour of coordinating at the Académie des sciences4 and one co-signed by European teaching and research organizations, in which I also took part.5 All of them emphasized Europe’s weak presence in this field, the striking lack of professionals trained in informatics for all the professions of the future, and the need to consider informatics as a new autonomous discipline. They recommend teaching it to all students from primary school onwards, and training teachers who will teach it at the same level and with the same rigour as the other scientific disciplines. Following a report by the Royal Society, aptly titled Shut down or restart?6, the United Kingdom has just taken the plunge by deciding to replace practical education on computer use with real informatics education, put on an equal footing with the other scientific disciplines for A Levels.
9In France, while the government’s white paper on digital technology explicitly mentioned that reflection was needed on informatics science education, it is unfortunately nowhere to be found in the law on school reform enacted on 25 June 2013. The law now only covers digital training in terms of usage and the use of digital tools for teaching, without specifying what this encompasses. Even though our leaders now recognize the importance of informatics in the digital world, they seem to be stubbornly clinging to the belief that it does not need to be taught and that we just need to know how to use it slavishly.
10I would like to take this lecture as an opportunity to share a message from the entire French and international informatics community, which would like to see the introduction, as soon as possible, of real informatics education. This means an education that is not limited solely to use and that is well coordinated with usage and with the other taught disciplines. We could thus hope to foster both “digital literacy” and a real understanding of the underpinning scientific and technical phenomena.
The lecture series’ general approach and topics
11Figure 1 sums up informatics. At its centre are algorithms. Above are data, which come from humans, from sensors, or from the results of other computations. Below are the machines that perform the computations, currently based on the binary electronics of silicon – although other types of information machine are currently under study – and the languages used to transcribe human thought and translate it into “machine” instructions. To the right are the interfaces with the outside world of humans or other informatics systems: physical data sensors, display devices, mechanical effectors, etc. This system functions as a closed loop: since circuits now contain over a billion transistors and software programs run to millions of lines, their design and manufacturing require highly complex computer systems, which themselves use a multitude of algorithms written in diverse programming languages and process large masses of data.
12My personal work has mostly dealt with the bottom part of the diagram, hence the title of my Chair: “Algorithms, Machines and Languages”7. Without ignoring them of course, I will say little regarding the aspects linked to data and security, which have already been discussed by Serge Abiteboul8 and Martin Abadi9 respectively, within the framework of the Annual Chair of Informatics and Digital Sciences.
13The main theme of my teaching will be the treatment of time and events in informatics systems, a domain where our country has played a particularly innovative role in research and industry. In my 2009/201010 lecture series, I insisted on the need to model computations, languages, and machines mathematically in order to understand as well as master them. I will continue in this direction, through two fields of investigation that are closely linked to practice: the explicit treatment of time and events in embedded circuits and software, which are quite profoundly different from classical informatics systems, and the formal proof methods used to verify the correctness of circuits and programs, which are crucial when bugs can have costly or disastrous consequences, for instance in the case of aeroplanes, cars or medical systems implanted in the body.
14Finally, I will not subscribe to the scientific community’s traditional compartmentalization, which all too rarely brings together, within the same journals and conferences, hardware and software, Internet programming and embedded-system programming, and so on. On the contrary, I will focus on the similarities and complementarities of these domains. In particular, I will not make a strong distinction between hardware and software, as I am personally equally interested in both. Although their technical vocabulary and final objects are very different, the fundamental questions and the conceptual and technical solutions are similar. And I will show that ideas and ways of proceeding invented for embedded circuits and software are just as relevant in unexpected domains such as musical composition and interpretation, or the orchestration and coordination of web applications.
Time and events in informatics
15The role of time in informatics is often seen solely through the prism of algorithmic complexity: the minimization of a circuit’s cycle time, of a program’s computation time, or of the information propagation delay within a network; the time/space trade-offs within algorithms and their implementations, etc. These are obviously crucial questions, as an application’s final cost and benefit result from a subtle balance between computation time, memory size, and energy spent. But classical algorithms and programming languages do not themselves talk about time, which remains an external consideration that is not part of the functional specifications. This is also true of the often ad hoc management of the events to which programs respond or which they trigger.
16Yet the management of time and events should be an explicit and central parameter in the design and specification of many applications. This is where the originality of my teaching lies, through the study of five particular fields:
The design of digital circuits, with a global architecture and local micro-architecture shaped by an explicit fight against time. A new problem recently appeared: in order to validate a circuit’s design and build its application software before the circuit actually physically exists, it has become crucial to simulate it efficiently using software, which raises new problems regarding the relationship between the physical circuit’s time frame and that of its simulation.
Real-time control systems. The computerized steering of an aeroplane must be based on the time required by its aerodynamics, not the time specific to the computer, as the values and events triggered by the sensors or the pilot must be processed without delay. Informatics is here closely linked to control theory. The generalized computerization of transport and of objects of all sorts is making this field increasingly important.
The simulation of large systems (cars, aeroplanes, air traffic, computer networks, etc.), which is essential for their computer-assisted design. Its main parameters are often time and external events. One major difficulty lies in coordinating simulators specific to parts of the system that do not necessarily share the same view of time and events. The same difficulty is found in virtual reality systems and video games.
The orchestration of web applications. Instead of displaying an HTML page, many websites offer procedural services that return formatted data. Surprisingly rich applications can be built by composing these basic services. But the services can return errors or have unforeseeable response times. Orchestrating the user interface, the queries addressed to the services, and the handling of their replies while treating all possible errors is a problem of the same nature as those presented above.
Musical information, where contemporary compositions often mix human interpretation and computer-generated sounds. One important issue is harmoniously linking human interpretations and synthesized sounds, particularly for the mutual harmony of tempo and articulation. Another fascinating question concerns the design of “algorithmic scores” that could be used by composers, interpreters, and computers at the same time.
17Although these subjects are socially distinct, even impervious to one another, technically there is substantial overlap between them. As we will see, it is highly useful to apply the ideas and ways of proceeding of one to the other.
18Other fields of interest may appear over the years, particularly in relation to the neurosciences and to biology, where the study of temporal phenomena is crucial.
Discussing time, but formally
19In all the fields mentioned above, it is necessary to talk explicitly about time and events in specifications and programs. Yet neither spoken language nor classical mathematical formalizations adequately serve our purpose. I will now explain why, first by examining the large gap between the physics of time and its psychological perception.
Measuring time
20The measurement of time is based on implacable phenomena, like the movements of the sun, the moon or the stars. Astronomy has long studied their regularities and irregularities, which are complex and have given rise to imaginative local cosmogonies everywhere. Measurement by the stars was followed by a long series of inventions that were easier to handle, from the clepsydra to the pendulum clock, then from the spring pendulum to the fob watch. The first truly precise measurement was provided in 1773 by Harrison’s H5 chronometer, which drifted by only a few seconds over an Atlantic crossing and, for the first time, allowed longitude to be measured precisely. Yet the unification of the local times of cities within the same country came much later, becoming necessary with the development of the train and the telegraph.11
21Mechanical and then electronic chronometers improved before being replaced by atomic clocks, which currently define the second to within 10⁻¹⁰ and underpin the GPS positioning system. The most recent clocks reach a precision better than 10⁻¹⁷, in other words a drift of a few seconds on the scale of the age of the universe. The second has also become a virtually universal unit: it has served to define the metre since 1983 and could also serve to define the kilogramme. I will stop here, as many books have been written on the marvellous history of the measurement of time and its consequences.
How we talk about time
22The perception of time is independent from its physical measurement, as the study of spoken language about time immediately shows.12 It is very personal. For example: “I’m taking my time”, or “he made me waste my time”. It includes strangely elastic expressions such as “long years” – which curiously tends to replace “many years” –, “time flies”, and “time is going faster and faster”. Yet these expressions that everyone uses make no physical sense: with what unit could the speed and the acceleration of time be measured?
23The passing of time is often represented by an arrow, with the past on the left, the future on the right, and the present in the middle. There is only one past and only one future, ours. One instant is a point on this arrow, and the present is the lived instant. It is a fleeting instant that is constantly moving, stuck between the past and the future.
24When we talk about time, we therefore generally freeze the frame, considering a given present. The distant past is called the dawn of time, and we never really know whether it is infinite or not. Then comes the nostalgia for the good old days or for the good times, etc. When we turn towards the future, we set our resolve on tomorrow13, which has a rather loose meaning, or on when the cows come home, synonymous with never in a month of Sundays.
25A duration is a piece of time of a certain length. The names of durations are quite poetic: a lapse of time, a bit of time, quite some time. In the South of France, when one waits a long time, one has enough time to kill a donkey with figs. The names of short durations seem to be subject to a syntactic elongation inversely proportional to the duration expressed, from pretty quick to before you could say Jack Robinson. And in French a week is eight days, whereas a fortnight is fifteen!
26To identify an instant, we date it by qualifying the time that has passed before or after a given event, for example the supposed birth of Jesus Christ. But the digitization of time shows that we have not completely integrated the zero into our vocabulary: the year begins on 01/01 at 00:00! As for events, they represent something which happens at a given time, and which can therefore be dated. Events are often periodical: they occur weekly, monthly, and so on.
The mathematical arrow of time
27The mathematical language of time is less poetic. The arrow of time is represented as the set of real numbers ℝ, with −∞ for the dawn of time and +∞ for the Greek calends. A zero written down somewhere provides the origin. An instant is always noted t, t’, etc. An interval of time is noted [t,t’] with t ≤ t’ and can potentially be open at one end or the other; its duration is t’ − t. A small time difference is noted Δt, a tiny one δt. The objects considered are often functions of time; they are continuous, integrable, differentiable, etc. We also specify events that occur at certain dates; each such happening is called an occurrence of the event. An event is periodical if it has occurrences regularly spaced out in time, otherwise it is sporadic. An occurrence of an event e at instant t precedes an occurrence of e’ at t’ if t ≤ t’, and the two occurrences are simultaneous if t = t’. It is crucial to know whether an occurrence of event e causes an occurrence of event e’, which depends on each system. If it does, the occurrence of e must precede that of e’ or at most be simultaneous with it, or else serious time paradoxes will result.
28This rough representation is effective in disciplines like classical physics, signal processing or control theory. But it has serious shortcomings for our purposes. Note, first, that the distinction between events and occurrences of events often remains blurry, when it is made at all. Yet it is essential when discussing causality. For example, if each occurrence of e causes the following one, can we legitimately say that e causes e?
Linear temporal logic
29Systematically using real dates often leads to rather unpleasant formal specifications. Consider for example the saying after the rain comes the sun. It can be written mathematically as ∀t. ∃t’ ≥ t. Rain(t) ⇒ Sun(t’). But why bother with t and t’, about which we could not care less? The idea of temporal logic is to introduce modalities to remove quantifiers and temporal variables, the logical value of a modal formula becoming dependent on the instant considered. In linear temporal logic (LTL), our saying is written □(Rain ⇒ ◊Sun), where □ is always and ◊ one day. An important question remains, which we will leave to meteorologists to answer: does rain cause good weather, or does it just precede it? Neither standard mathematical notation nor LTL logic is able to settle this question. While causality is the essence of temporal systems, talking about it is never trivial.
30Temporal logic is not as simple as it seems. The French saying amour un jour, amour toujours (love one day, love forever) is not naively written ◊Amour ⇒ □Amour, for this formula means “if we fall in love one day from now, then we are actually forever in love from now”! The correct writing is ◊Amour ⇒ ◊□Amour “love one day implies one day forever love” – which is not necessarily true either.
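To make the modalities concrete, here is a minimal sketch in Python – an illustration of my own, not part of the lecture – that evaluates □ and ◊ over a finite trace of instants, each instant being the set of atomic propositions true at that instant. On a finite trace these are only approximations of the infinite-trace semantics, but they suffice to separate the naive and the intended readings of the saying about love.

def always(phi, trace, i=0):
    # □phi at position i: phi holds at every position of the (finite) trace from i onwards
    return all(phi(trace, j) for j in range(i, len(trace)))

def eventually(phi, trace, i=0):
    # ◊phi at position i: phi holds at some position of the (finite) trace from i onwards
    return any(phi(trace, j) for j in range(i, len(trace)))

def rain(tr, i):  return "Rain" in tr[i]
def sun(tr, i):   return "Sun" in tr[i]
def love(tr, i):  return "Amour" in tr[i]

def after_rain_sun(tr, i=0):          # □(Rain ⇒ ◊Sun)
    return always(lambda t, j: not rain(t, j) or eventually(sun, t, j), tr, i)

def naive_love(tr, i=0):              # ◊Amour ⇒ □Amour
    return not eventually(love, tr, i) or always(love, tr, i)

def intended_love(tr, i=0):           # ◊Amour ⇒ ◊□Amour
    return not eventually(love, tr, i) or \
           eventually(lambda t, j: always(love, t, j), tr, i)

trace = [{"Rain"}, set(), {"Sun"}, {"Amour"}, {"Amour"}]
print(after_rain_sun(trace))   # True: the rain at instant 0 is followed by sun at instant 2
print(naive_love(trace))       # False: love does not hold from instant 0 onwards
print(intended_love(trace))    # True: from instant 3 onwards, love lasts to the end of the trace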
The cone of time and tree temporal logic
31When planning our future or studying an informatics system, we often try to envisage all possible cases in light of the past and the present. A typical reasoning is “from now on, if I do A then I will obtain B after quite a while; however because I had C in the past, doing D will perhaps give me B but surely E much faster”. Time then takes on a conic form, with a filiform past and a future open to multiple executions. Tree temporal logic allows us to talk about it.
32Consider the specifications of the coffee/tea machines in Figure 2. The English machine, on the left, takes a one euro coin (it is imported) and pours bad coffee or good tea depending on whether one presses the C button or the T button. The French machine, on the right, takes one euro and then chooses itself whether it is going to pour good coffee or bad tea, only leaving one of the two buttons C or T active. Taste of the liquid supplied aside, these two machines are equivalent in linear reasoning, as they are only able to iterate the action sequences €.C.Coffee and €.T.Tea, which in informatics is written (€.C.Coffee + €.T.Tea)*. Yet they are not equivalent in use: the French machine has a frustrating side for users as it does not let them choose their drink. We can distinguish between the two machines using the following tree temporal logic formula: AG(€ ⇒ EF(Coffee) ∧ EF(Tea)), which reads “in any state, putting in one euro allows one to get coffee through a certain path and tea through another”. This property is fulfilled by the English machine, but not by the French one. The modalities AG and EF are of different types: AG(F) means “for all states, the formula F is true” whereas EF(F) means “from the current state, a path exists leading to a state where F is true”. I shall not discuss temporal logic any further, and will refer the reader to the literature on the subject.14
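The difference between the two machines can also be checked mechanically. The following Python sketch – my own modeling, not taken from the lecture – encodes each machine as a labeled transition system and verifies the branching-time property “in every state reached just after inserting €, coffee is still reachable and tea is still reachable”, a direct transcription of the AG/EF formula above.

english = {  # the user chooses the drink after paying
    "s0": [("€", "s1")],
    "s1": [("C", "coffee"), ("T", "tea")],
    "coffee": [], "tea": [],
}
french = {   # the machine chooses the drink when the coin goes in
    "s0": [("€", "s1"), ("€", "s2")],
    "s1": [("C", "coffee")],
    "s2": [("T", "tea")],
    "coffee": [], "tea": [],
}

def reachable(lts, start):
    seen, todo = set(), [start]
    while todo:
        s = todo.pop()
        if s not in seen:
            seen.add(s)
            todo.extend(t for _, t in lts[s])
    return seen

def fair_after_paying(lts):
    paid = {t for s in lts for (a, t) in lts[s] if a == "€"}   # states entered by a € transition
    return all({"coffee", "tea"} <= reachable(lts, s) for s in paid)

print(fair_after_paying(english))  # True: the user can still reach either drink
print(fair_after_paying(french))   # False: once paid, one of the two drinks is out of reach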
The double cone of time
33We can also revisit our past. A perpetually indecisive person reasons as follows: “if I had managed to get A’ instead of A in the past, I would have had B more easily, but I am not sure I would have had C”. Here, time takes on a biconic form, with a cone of the past converging towards the present and a standard cone for the future.
Expanding the notion of time
34Our needs in informatics will cause us to expand the notion of time. Let us examine them in general terms first, before moving on to more concrete examples.
Proper time, coordinate time
35Most of the time, we only talk about one time, assumed to be universal. The best approximation we currently have at hand is provided by the GPS. Its time is the key to the measurement of position, based on a simple principle: GPS satellites repeatedly broadcast their current time in electronic messages. When the receiver receives a message, it compares the time of emission with the time of reception and deduces its distance from the satellite, since the positions of the satellites and the speed of light are known. Using several satellites and calculating the intersection of the spheres they determine, one knows where one is.
36Putting this into practice is however by no means simple. First, since the GPS receiver, unlike the satellites, does not contain an atomic clock, it is forced to compute in dimension 4, with at least four satellites, to determine both position and time. But the absolute time of Newtonian mechanics does not exist: time depends on the observer, their speed, their local gravity, etc. Physicists therefore distinguish an observer’s proper time from coordinate time, generated by the coordination between several observers. A “good” coordinate time must be constructed by very finely synchronizing the GPS satellites’ clocks together, drawing on special and general relativity. The recent example of a subtle clock synchronization error, which led to an overestimation of the speed of neutrinos15, clearly shows the technical difficulty of clock synchronization. But note that the most recent clocks, like that of David Wineland, 2012 Physics Nobel Prize Laureate with Serge Haroche, are so precise that they are able to show how gravity modifies time, simply by separating two identical specimens by a height of 30 centimetres!
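As an aside, the four-unknown computation mentioned above can be sketched in a few lines. The following Python fragment (illustrative numbers and names of my own, not a real GPS solver) recovers a receiver’s position and clock bias from four pseudoranges by Newton iteration; with exactly four satellites the system is square, and more satellites would call for least squares.

import numpy as np

C = 299_792_458.0                      # speed of light, m/s
sats = np.array([                      # assumed satellite positions, metres
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos, true_bias = np.array([1_112e3, 4_556e3, 4_333e3]), 3.2e-4   # hidden truth
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + C * true_bias

x = np.zeros(4)                        # unknowns: position (3) and receiver clock bias (1)
for _ in range(10):                    # Newton iterations on the square 4x4 system
    pos, bias = x[:3], x[3]
    dists = np.linalg.norm(sats - pos, axis=1)
    residual = dists + C * bias - pseudoranges
    J = np.hstack([(pos - sats) / dists[:, None], C * np.ones((4, 1))])
    x = x - np.linalg.solve(J, residual)

print(np.round(x[:3]))                 # ≈ true_pos
print(x[3])                            # ≈ true_bias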
37Clock synchronization and coordinate time problems are both frequent and crucial in informatics. In a car, the brakes, the suspension, and the steering system need to be coordinated, to maintain the vehicle’s attitude when braking in a corner for example. This requires a precise temporal alignment of the different on-board computers and of the network connecting them. In software engineering, to organize the compilation of a program’s source files, the make utility triggers the recompilation of a source file if it is more recent than its object code. But in order to accelerate the compilation of large programs, several computers are often used in parallel. In this case, these machines’ clocks must be finely synchronized if recompilations are not to be missed. Finally, let me mention, as a matter of interest, millisecond-long financial transactions: we may well wonder whether they will really be controllable, and whether they are as much of service to society as they are to their authors.
Multiple and irregular clocks
38In the twentieth century, circuits were sufficiently small and simple to be sequenced by a single clock, distributed everywhere by a superbly geometrical clock tree. That is no longer the case. Owing to the integration of an increasingly large number of transistors per surface unit (according to Moore’s law, applied in industry so as to double the number of transistors per chip every other year), modern systems on chips (SOCs) incorporate diverse sub-circuits, clocked by multiple clocks that are not necessarily synchronized: general microprocessor, digital signal processor (DSP), graphics accelerator, video decoder, etc. These sub-circuits exchange information through first-in first-out buffers or networks on chips, some with a predictable time, others not. Moreover, in order to save autonomous devices’ energy, the computations are slowed down as much as possible by lowering the voltage and slowing down the clock: there is no point trying to jump the gun, or as we could say in French, calculating the sound faster than the music. Finally, asynchronous or elastic circuits function far more dynamically, without a clock at all.
39Embedded systems are also evolving towards multiple clocks, with a growing number of electronic control units (ECUs), connected by time-triggered networks (TTP, FlexRay). I have already mentioned braking in corners. Temporal synchronization is just as crucial to large system simulators, built by coupling local simulators using their own simulation clocks.
40For all these systems, it is very important to know whether the different clocks are synchronized or not and, if they are not, how information is transferred between distinct clock zones.
Logical time, physical time
41In another vein, we find that two notions of time often compete in daily life. Farmers, for example, often find it more relevant to situate themselves in relation to the sunrise, to the sun’s zenith or to the sunset, rather than using a watch, except of course for dealing with the administration and with people in town. These events can be seen to define a logical time, by their reproducible though irregular repetition in relation to physical time. Their physical precision is not necessarily great, as shown by the etymology of the nice French word tintamarre (racket): the workers of Parisian vineyards were highly organized in former times, and demanded a real midday break. But this hour where the sun reaches its zenith in the sky is difficult to determine. When he could really feel it was time, the union leader would take a stone and chink (tinter) it against the iron of his hoe, called a marre. When all the other vintners did likewise, they produced a splendid tintamarre!
42With its minims, crotchets and quavers, a music score also specifies only a logical time, which the interpreter will not necessarily conform to, or even especially not conform to, when transforming it into physical time. Long live the baroque inequality and jazz swing that play with time; down with the electronic boom-boom of disco’s invariable 120 beats!
43Likewise, in modern circuits, the clock is no longer necessarily regular: slowing it down as much as possible is an excellent way of saving energy. This does not prevent us from working in logical time when designing the circuit, by acting as though time were regular. One of the seminars of the 2013-2014 lecture series will present elastic circuits16, a wonderful way of flexibly coordinating logical and physical time. When programming embedded software, we also often use logical times that are finely correlated to physical time only at the implementation stage, or even during system integration.
Multiform time
44We can go further by calling logical time any quasi-time defined by the repetition of a given event, and discuss it with the same vocabulary and logic that we use for physical time. When walking towards a village, metres and steps are repeated just as regularly as seconds. There is no difference between saying that the village is 10 minutes away and saying that it is 10 kilometres, 10,000 steps (10 kilosteps) or 10,000 heartbeats (10 kilobeats) away. Together, all these logical times constitute a multiform time. The Esterel programming language, which I will briefly describe further on, was designed to program applications by playing on the idea of multiform time. It can be applied equally to real-time control applications and to the specification and synthesis of multi-clock circuits, for which each clock is seen as defining its own logical time.
45Note that musical time is also multiform, as independent logical times are introduced by the repetition of motives and themes, by that of movements, and, at a faster rhythm, by the internal ornamentations of notes. It is therefore not surprising that the musicians and musical informatics researchers at IRCAM are interested in a language like Esterel and use its main notions in the experimental formalisms of algorithmic scores.17
Continuous time, discrete time
46The hypothesis of the continuity of time is far from straightforward. Modern physics no longer necessarily sees it as continuous on the basic scales, nor even as something which should remain a primitive concept, which is of no concern to us here. More important is the fact that the notion of dating by using real numbers is not natural in informatics, as real numbers with an arbitrary infinity of decimals are not computable. We therefore also need discrete scales of time, founded on countable and computable sets of instants and discrete events. In order for our work to remain compatible with regular physics, we can carry on using real numbers to refer to discrete dates. But we should then be wary of Zeno’s paradox (or that of Achilles and the tortoise), which arises as soon as one tries to place an infinite number of discrete instants between two given real instants.18 We can also directly build a linear vision of discrete time by totally ordering all events, for example using an integer date. This is what is called timestamping, which is particularly useful in telecommunications and database synchronization. But we can also order events only partially, without necessarily defining a relation of precedence between every two instants. In this case, precedence becomes a partial order. This is what is often done in asynchronous distributed systems, where actors do not share a common time and where the transmission of messages can take any time. Timestamping is then done in logical time.
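A classical way of timestamping in logical time is Lamport’s scheme, sketched below in Python (a standard construction, added here purely as an illustration): each process keeps a counter, incremented at each local event, and every message carries the sender’s counter, which the receiver folds into its own by taking the maximum. If an occurrence a causes an occurrence b, then the stamp of a is strictly smaller than that of b – though the converse does not hold, since concurrent occurrences also receive ordered stamps.

class Process:
    def __init__(self, name):
        self.name, self.clock = name, 0

    def local_event(self, label):
        self.clock += 1
        print(f"{self.name}: {label} @ {self.clock}")

    def send(self, label):
        self.clock += 1
        print(f"{self.name}: send {label} @ {self.clock}")
        return (label, self.clock)            # the message carries the sender's timestamp

    def receive(self, message):
        label, stamp = message
        self.clock = max(self.clock, stamp) + 1
        print(f"{self.name}: receive {label} @ {self.clock}")

p, q = Process("P"), Process("Q")
p.local_event("a")        # P: a @ 1
m = p.send("m")           # P: send m @ 2
q.local_event("b")        # Q: b @ 1
q.receive(m)              # Q: receive m @ 3  (stamped after the send that caused it)
q.local_event("c")        # Q: c @ 4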
The thickness of the instant
47In everyday language, the notion of event has a very broad meaning, somewhat different from the mathematical notion of the instant without thickness. For example, we are taught that Charlemagne was crowned in 800. This sentence of everyday language can and should be seen on several scales. Broadly speaking, the year 800 is a moment in history, and Charlemagne was crowned at that moment. But the year 800 was composed of 366 days, themselves comprised of 24 hours, etc., and the crowning of Charlemagne required a very long sequence of basic operations, and therefore of distinct events on a finer scale. A fine expression in everyday French illustrates the essence of this dual view: “l’espace d’un instant” (in the space of a moment).
48In informatics, and especially for synchronous circuits and languages like Esterel, this duality between an abstract event and a sequence of concrete events plays a crucial role. Depending on needs, the same event can be seen as atomic, and therefore indecomposable, or as aggregated, then representing the abstraction of a succession of more basic events. We will juggle between these two visions of an event: in the atomic vision, we will consider that the aggregated event happens in zero time, which will be formidably efficient mathematically speaking. For implementation, we will have to detect or create this event “for real”, therefore in non-zero time. The key to the correctness of this double mechanism will be precise control of the real temporal sequencing, which will allow us to ensure that the zero-time abstraction is reasonable in practice for the domain considered.
The question of determinism
49The difference between determinism and non-determinism is important for our systems. A system is deterministic if its reactions are always identical in identical situations, and non-deterministic otherwise. The system’s inputs are of course not known beforehand; this is what we call their external non-determinism. But, if the system must always react in the same way to the same inputs, it must not contain internal non-determinism.
50For example, synchronous circuits are always deterministic, and a car’s braking must be equally so. Conversely, it is important for the transmission of packets on the Internet to be globally non-deterministic, so as to be robust to failures and reconfigurations. The two concepts are equally relevant, but in different situations. Determinism is mostly required in high-security embedded systems, which must have perfectly predictable behaviour: rockets, aeroplanes, trains, cars, etc. Conversely, it is the Internet’s intrinsic non-determinism that allowed it to overthrow the classical telephone network, which follows a quasi-deterministic logic: preserving the internal determinism of such a network requires a lot of information centralization, complicates reconfigurations, and is difficult to scale up. The dynamic or static nature of systems is also important here: the number of nodes in an informatics network is constantly evolving, whereas the number of wings on a deterministic aeroplane must remain resolutely fixed.
51The deterministic/non-deterministic distinction was very strict in the twentieth century. It is less so now, because of the generalized change of scale of informatics systems, and many hybrids are appearing. Modern electronic systems on chips asynchronously and non-deterministically combine locally synchronous and deterministic components, and the distributed control of embedded systems does the same thing by (slightly) non-deterministically combining locally deterministic software. But the formalization, implementation, and especially verification of such systems raise considerable problems that are still far from being resolved. They will be studied in the second year of the course.
Is causality transitive?
52The relationships of causality linking events or their occurrences essentially depend on the applications. But causality must obey a general rule: “A causes B” always implies that “A precedes B”, in the broad sense of the term, as A and B can also be simultaneous. The question of causality’s transitivity is far less clear. “A causes B” and “B causes C” are often thought naturally to imply that “A causes C”. This is trivial if we interpret “A causes B” as “A is part of the causes of B”, but it can have other meanings. For example, one can say “it is raining, therefore I open my umbrella”, then “my umbrella is open, therefore I remain dry”. But it would be quite strange to say “it is raining, therefore I remain dry”!19 Here, we consider that for an event A to be a cause of B in a sequence of events, the presence of A must be essential to that of B. Yet it is not essential for it to rain to remain dry with an open umbrella! This notion of causality is used, for example, to study protein building networks in the cell in structural biology. Formalizing the different kinds of relevant causality remains an open problem of prime interest.
From circuits to systems on chips
53Electronic circuits are the driver of informatics. They also perfectly illustrate certain crucial questions concerning time. Let us start with the thickness of the instant and the associated abstraction.
54The adding circuit FullAdder in Figure 3 calculates the sum s and the carry c of the 3 input bits x, y and z. It is a combinational circuit, composed of electric wires carrying potentials noted 0/1 and of logic gates calculating a Boolean composition of their inputs. This circuit can be seen in two ways. From the electrical perspective, the computation is performed through the propagation of voltages in the wires and gates, with delays determined by their technology and their physical placement. If a gate’s inputs are maintained at 0 or 1, its outputs switch to the voltages determined by the gate’s function at the latest after its delay. This is a series/parallel system: the potentials are disseminated in parallel by the connections and wires, but the delays add up when the wires and the gates are connected in series. If the circuit’s graph is acyclic, as with the adder, the delays therefore add up along each path from the inputs towards the outputs. The maximum delay between any given input and output is called the critical delay; it is typically a fraction of a nanosecond for FullAdder. Once the inputs are fixed, we are certain that all the outputs remain constant after the critical delay, and that they do have the desired Boolean values. While intermediary transitions can occur on the way, they have become invisible. The dual vision is the abstract logical one. The gates are seen as simple Boolean operators; the circuit becomes a system of Boolean equations to solve, and the equations are seen as simultaneous, resolved in an abstract instant without thickness. The logical model is far simpler to design with, to reason with, and to optimize, as it allows all Boolean optimizations and verifications. Meanwhile, the functionally equivalent electrical model precisely implements the thickness of the abstract instant. Designers talk about logical functions and electronic engineers about wires and transistors, and the key information they exchange is the critical delay. This well-organized cooperation has allowed for the extraordinary improvement of integrated circuits and the development of systems for the physical synthesis of circuits based on high-level specifications.
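In the abstract logical vision, FullAdder is just the following pair of Boolean equations, which a few lines of Python can state and check exhaustively (the gate-level structure of Figure 3 may of course differ; only the Boolean function matters here):

def full_adder(x, y, z):
    s = x ^ y ^ z                        # sum bit
    c = (x & y) | (y & z) | (x & z)      # carry bit: the majority of the three inputs
    return s, c

# exhaustive check against the integer addition of the three input bits
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            s, c = full_adder(x, y, z)
            assert 2 * c + s == x + y + z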
55But of course the 1-bit addition is not enough, and numbers must be added, represented by 32-bit words for instance. Two main methods exist to do so: working in space or working in time. In order to add in space, 32 1-bit FullAdder cells can be lined up in parallel, with wires propagating the carries as we used to do in school: the carry input of the first adder is 0, and the carry input of adder n is the carry output of adder n − 1. This is simple but inefficient, as the critical delay is multiplied by the number of bits. We can do much better by trading space for time. For example, Von Neumann’s adder proceeds by dichotomy and speculation. Each 2n-bit input is split into two parts of n bits, low-orders and high-orders. The low-orders and high-orders are added in parallel, but two additions are performed speculatively for the high-orders, with an input carry of 0 and of 1 respectively. The first n bits of the result are taken from the sum of the low-orders, and the carry of the low-orders is used to select the right sum of high-orders, which is already available since the two potential sums were calculated simultaneously. The other sum of high-orders is simply discarded. Each sub-adder is of course recursively built in the same way. In total, more space and energy are used because of the unnecessary computations, but the result comes much faster: the addition happens in physical time log2(n) instead of n, so here in 5 instead of 32 time units.
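The two schemes can be mimicked in software, as in the Python sketch below (my own transcription; in software the two speculative additions run one after the other, of course – the gain only appears in hardware, where they run in parallel and the critical path becomes logarithmic):

def ripple_add(a, b, carry=0):
    # addition in space: bits are lists written low-order first, the carry ripples along
    out = []
    for x, y in zip(a, b):
        s, carry = x ^ y ^ carry, (x & y) | (y & carry) | (x & carry)
        out.append(s)
    return out, carry

def speculative_add(a, b, carry=0):
    # dichotomy and speculation: add the high half twice, with carry 0 and carry 1,
    # and keep the version selected by the carry coming out of the low half
    n = len(a)
    if n == 1:
        return ripple_add(a, b, carry)
    h = n // 2
    low, c_low = speculative_add(a[:h], b[:h], carry)
    hi0, c0 = speculative_add(a[h:], b[h:], 0)
    hi1, c1 = speculative_add(a[h:], b[h:], 1)
    return low + (hi1 if c_low else hi0), (c1 if c_low else c0)

def to_bits(v, n):  return [(v >> i) & 1 for i in range(n)]
def to_int(bits):   return sum(b << i for i, b in enumerate(bits))

a, b = to_bits(0xDEAD, 16), to_bits(0xBEEF, 16)
s1, r1 = ripple_add(a, b)
s2, r2 = speculative_add(a, b)
assert (to_int(s1), r1) == (to_int(s2), r2) == ((0xDEAD + 0xBEEF) & 0xFFFF, (0xDEAD + 0xBEEF) >> 16)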
56The orthogonal method consists in performing the addition in time, using the synchronous circuit shown in Figure 4, called a serial adder. A single basic adder is coupled to a register, or basic memory element. The register is controlled by a clock, which emits square waves that we only consider here for their leading edge. The interval between two leading edges is called a cycle and its duration must be greater than the adder’s critical delay. The numbers are presented 1 bit per cycle, low-orders first. At each cycle, the adder calculates the sum and the current carry. Every time the clock’s leading edge arrives, the current carry is stored in the register and becomes the input carry for the following cycle. Note that there is no longer any limit on the number of bits that can be processed, which can be made infinite without any problem. Now, numbers written by starting with the low-orders, but with an infinity of digits, play a fundamental role in mathematics, where they are called p-adic numbers, with the integer p as the basis of number representation. Written in base 2, the 2-adic numbers that are relevant here allow us to unite arithmetic and Boolean logic; in Figure 4, they are characterized by the initial index 2 in front of the infinite development. Unlike real numbers, they have never been interpreted in classical physics. However Jean Vuillemin has shown that they actually constitute the natural model of arithmetic circuits.20 In the “synthetic physics” that is informatics, real numbers do not exist and 2-adic numbers become physical instead.
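The serial adder itself fits in a few lines, here sketched in Python with the carry register represented by a variable updated once per simulated cycle (bits fed low-order first, as in the figure):

def serial_adder(a_bits, b_bits):
    carry = 0                                 # the register, initially 0
    for x, y in zip(a_bits, b_bits):          # one iteration = one clock cycle
        s = x ^ y ^ carry
        carry = (x & y) | (y & carry) | (x & carry)
        yield s                               # output bit of the current cycle

# 13 + 11 = 24; 13 = 1101 in binary, written low-order first as [1, 0, 1, 1]
a = [1, 0, 1, 1, 0]   # 13 padded to 5 bits
b = [1, 1, 0, 1, 0]   # 11 padded to 5 bits
print(list(serial_adder(a, b)))   # [0, 0, 0, 1, 1], i.e. 24 = 11000 read low-order first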
57A circuit is said to be synchronous when all its registers are controlled by the same clock. The lecture series will study many other aspects of synchronous circuits which I cannot describe here: pipeline-based optimizations, speculation and many other space-time exchanges, redundant number systems with which one can multiply as fast as one adds – equally enlightening and counter-intuitive –, etc.
58By studying the behaviour of cyclic combinational circuits, we will also be studying the relationship between time models, establishing unexpected links between discrete time and continuous time, and between electricity and constructive logic. I will show that a cyclic combinational circuit stabilizes in bounded time for all its inputs if and only if its equations are solvable using only constructive logic, therefore without using the law of excluded middle x ∨ ¬x = 1. For example, the Hamlet circuit defined by “ToBe = ToBe or not ToBe” oscillates electrically for certain delays of the gates and wires. This result is something of an electrical analogue of the fundamental Curry-Howard correspondence between computation and proof.21
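The flavour of this result can be conveyed by a small exercise in three-valued evaluation – my own illustration in Python, not the formal development of the lectures: each wire starts at “not yet determined”, a gate only produces 0 or 1 when its known inputs force the result, and we iterate to a fixed point. A circuit is constructive when every wire ends up determined.

def OR(a, b):
    if a == 1 or b == 1: return 1
    if a == 0 and b == 0: return 0
    return None                       # not yet determined

def NOT(a):
    return None if a is None else 1 - a

def solve(equations, wires):
    env = {w: None for w in wires}    # every wire starts undetermined
    changed = True
    while changed:
        changed = False
        for w, f in equations.items():
            v = f(env)
            if v is not None and env[w] != v:
                env[w], changed = v, True
    return env

# Hamlet: ToBe = ToBe or not ToBe — a classical tautology, but constructively stuck
print(solve({"ToBe": lambda e: OR(e["ToBe"], NOT(e["ToBe"]))}, ["ToBe"]))
# {'ToBe': None}: no constructive solution; the physical circuit may oscillate

# a cyclic definition that *is* constructive: X = X or 1 stabilizes at 1 whatever the delays
print(solve({"X": lambda e: OR(e["X"], 1)}, ["X"]))
# {'X': 1}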
59While synchronous circuits remain the dominant paradigm by virtue of their simplicity, they no longer scale up to the billions of transistors of modern systems on chips (SoCs), like those of recent smartphones. These large multifunctional circuits are built by composing sub-circuits clocked by distinct clocks determining different frequencies. We will study the fascinating and often underestimated problems raised by communication between clock zones, due to inevitable occurrences of physical metastability when a register’s input signal changes just as its clock edge arrives. I will also present other clockless circuit paradigms that make time elastic. And, in the future, we will see the appearance of new notions of quantum circuits, probabilistic circuits, neuromimetic circuits, etc., which are worth following closely.
Synchronous languages and multiform time
60Synchronous languages for embedded software were designed in the mid-eighties by three French teams: Paul Caspi and Nicolas Halbwachs’s team for Lustre in Grenoble, Albert Benveniste and Paul Le Guernic’s team for Signal in Rennes, and my team for Esterel in Sophia-Antipolis. These three academic languages, based on a resolutely scientific approach, were developed industrially in the nineties: Esterel with Dassault Aviation and Thomson for avionics, Bell Labs for telecommunications, etc.; Lustre, under its graphic version SCADE22, with Schneider Electric for nuclear plant control and Airbus for avionics; and Signal, with SNECMA for jet engine control and with other industrial actors for signal processing. Other synchronous languages were then developed in France and abroad. The company Esterel Technologies, created in 2001, and for which I worked as Scientific Director until 2009, extended Esterel to the design, synthesis and verification of electronic circuits for various large users. It then bought out SCADE and merged it with Esterel in its new version, which is now a reference for critical embedded systems subject to certification.
Cyclical synchronous software
61The logic of synchronous languages is similar to that of synchronous circuits, but with different syntax and implementation.23 As with the abstract view of circuits, these languages consider a sequence of instants without thickness during which the computations and communications described in parallel or in sequences are performed in zero time – what is called the hypothesis of perfect synchrony. It is therefore as though an infinitely quick computer was being programmed, which is dormant except when it responds in zero time to the inputs by producing its outputs. This synchronous vision, long considered iconoclastic within the academic community, considerably simplifies programming and mathematical semantics. It allows for a deterministic parallelism, whereas the classical visions of asynchronous parallelism always involve a non-determinism that is incompatible with most embedded software programs’ requirements. It also allows for optimized implementations and formal verifications that are beyond the reach of classical methods.
62Two implementations of the thick instant are possible: a direct translation into circuits, developed especially for Esterel, and a software implementation founded on the classical pragmatic approach of cyclical execution, alternating between four phases as in a four-stroke engine: inputs, computation, outputs, wait. The cycle can be triggered arbitrarily, for example by timers or external events, and its principle ensures non-interference between inputs, computation, and outputs. If execution is subject to real-time constraints, the generated code’s worst-case execution time (WCET) must be computed and verified to make sure that it is shorter than the time granted to the cycle by the application. To implement parallelism and generate a single sequential code, the parallel branches are processed by finely interleaving their basic instructions while respecting the dependencies induced by inter-branch communication.
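Schematically, the cyclical execution scheme looks like the following Python skeleton (the names read_inputs, compute and write_outputs are placeholders of my own, not an actual API; a real implementation would typically be generated C code driven by a timer or an interrupt):

import time

PERIOD = 0.02  # an assumed 20 ms cycle

def read_inputs():              # sample sensors and pending events
    return {"tick": time.time()}

def compute(state, inputs):     # the synchronous reaction: no waiting inside
    state["count"] = state.get("count", 0) + 1
    return state, {"heartbeat": state["count"]}

def write_outputs(outputs):     # drive actuators, emit signals
    pass

state, next_deadline = {}, time.monotonic()
for _ in range(5):                                   # "while True" in a real system
    inputs = read_inputs()                           # 1. inputs
    state, outputs = compute(state, inputs)          # 2. computation
    write_outputs(outputs)                           # 3. outputs
    next_deadline += PERIOD
    time.sleep(max(0.0, next_deadline - time.monotonic()))  # 4. wait for the next cycle
print(state)   # {'count': 5}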
Signals and programs in Esterel
63In order to be more precise, let us look at an example in Esterel, by showing how this language makes the multiform vision of time explicit.
64Esterel instructions communicate with the external environment and between themselves through signals. A signal S consists of a Boolean status, present or absent, and, optionally, of a value of a certain type. In the abstract view, a program reacts to its inputs by producing its outputs instantaneously; the status and value of each signal are unique within each reaction. A program is written using temporal instructions, which can be composed in sequence or in parallel, placed within temporal preemption statements that can cancel their activity, and which communicate instantaneously through signals.
65Here is an Esterel program describing the following sports training regime: every morning, run slowly for 100 metres, jump while blowing out with each step for 15 seconds, then run at full speed until the end of the lap, all for four laps.
every Morning do
abort
loop
abort run slowly when 100 Meter;
abort {sustain Jump || sustain Blow} when 15 Second;
run full_speed
each Lap
when 4 Lap
end every
66In this manifestly temporal code, Morning, Meter, Second and Lap are input signals, Jump and Blow are output signals, whereas slowly and full_speed are modules left unspecified here, whose temporal execution is driven by the instruction run. The character “;” denotes sequencing, and the symbol “||” parallelism. The other temporal instructions are controlled by the presence of a signal or by the completion of a count of its occurrences. All signals are treated as though they defined independent times, leading to the multiform vision of time. The different times are manipulated by waiting instructions, like “await 5 Second”, or preemption instructions, like “abort … when 100 Meter”, which stops the execution of its body after 100 metres. The instructions “every … end” and “loop … each” are loop variants of “abort … when”. A trap-exit exception system, not described here, provides other modes of preemption and offers a convenient way of handling many error cases, such as a heart rhythm problem in the second part of a lap.24
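To suggest what the compiled behaviour looks like, here is a hand-written Python approximation of a single morning of this regime – my own sketch, not code generated from the Esterel text, and it does not render the strong-abortion subtleties faithfully. Its only point is the multiform-time idea: each call to react() is one instant, and the three phases are bounded by counting occurrences of three different signals, Meter, Second and Lap.

class Training:
    def __init__(self):
        self.phase, self.meters, self.seconds, self.laps = "SLOW", 0, 0, 0

    def react(self, signals):
        out = set()
        if self.phase == "DONE":
            return out
        if "Lap" in signals:                 # "each Lap": restart the inner sequence
            self.laps += 1
            if self.laps == 4:               # "when 4 Lap": abort the whole morning
                self.phase = "DONE"
                return out
            self.phase, self.meters, self.seconds = "SLOW", 0, 0
        if self.phase == "SLOW":
            out.add("run_slowly")
            if "Meter" in signals:
                self.meters += 1
                if self.meters == 100:       # "abort ... when 100 Meter"
                    self.phase = "JUMP"
        elif self.phase == "JUMP":
            out |= {"Jump", "Blow"}
            if "Second" in signals:
                self.seconds += 1
                if self.seconds == 15:       # "abort ... when 15 Second"
                    self.phase = "SPRINT"
        elif self.phase == "SPRINT":
            out.add("run_full_speed")
        return out

t = Training()
print(t.react({"Meter"}))                    # {'run_slowly'}: first metre of the slow run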
67Lustre and Signal are simpler functional languages, devoid of loop, preemption and exception instructions. They treat multiform time in a way that is syntactically different though equivalent, using abstract clocks, flows of Boolean values that play the role of Esterel signals’ presence status.
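The flow-based flavour of Lustre and Signal can be suggested with Python generators (an analogy of my own, not Lustre syntax): a flow is an infinite sequence of values, a clock is a Boolean flow, and sampling a flow on a clock keeps only the instants where the clock is true.

from itertools import count, islice

def when(flow, clock):
    # sample the flow at the instants where the Boolean clock is true
    for v, c in zip(flow, clock):
        if c:
            yield v

even_instants = (n % 2 == 0 for n in count(0))         # true, false, true, false, ...
print(list(islice(when(count(0), even_instants), 5)))  # [0, 2, 4, 6, 8]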
Formal semantics and compilation
68The programming style of synchronous languages is clearly different from that of classical languages like C or Java. While we already knew how to give meaning to the programs of classical languages, that is, to define their semantics, the question was raised in new terms for synchronous languages. We chose to be intransigent and to take mathematical semantics rather than implementation as the base. We were particularly concerned with critical embedded systems, which affect the safety of goods and people. Implementation must then follow mathematical semantics, and not the other way around, as is all too often the case.
69Several types of formal semantics were successively defined for Esterel, leading to increasingly efficient implementations. It all began with the application of finite state automaton techniques, leading to the (industrialized) Esterel v2 and v3 compilers, which also allowed us to study the causality problems raised by the synchronism hypothesis. But the compiled code could exponentially explode in size, which limited the size of the applications. Fundamental progress was made in collaboration with Jean Vuillemin’s team at the Digital Equipment Corporation research laboratory in Paris, when we understood that it was possible and far more efficient to translate the Esterel programs directly into electronic circuits, and then to simulate these circuits to generate software. This way of proceeding simplified formal semantics and definitively made the explosion in the size of the generated code disappear.
70We then understood how to link Esterel semantics with constructive logic and with the analysis of cyclic combinational circuits mentioned earlier in relation to the Hamlet circuit. This progress afforded us a much finer understanding of issues of causality in the language and led to Esterel’s constructive semantics25, which now serves as a reference.
The evolution of synchronous languages
71In 2001, I joined the company Esterel Technologies, specially created to develop Esterel and its applications to embedded circuits and software. We defined the far more complete language Esterel v7, incorporating new, semantically well-defined instructions for describing the complex datapaths that are ubiquitous in electronics. This language was used in production to create complex circuits in large electronics companies. In 2003, Esterel Technologies bought out SCADE, used especially by Airbus for the A340 and A380 flight control, and decided that SCADE would specialize in software and Esterel v7 in electronics. In 2008, the main notions of Esterel were incorporated into the new generation SCADE 6, now applied to many high-security embedded systems. Unfortunately, Esterel v7 was wiped out by the crisis of 2008, along with many other inventions by innovative start-ups, and is no longer accessible, even to its authors.26 However, this will not prevent me from teaching its design and compilation methods.
Towards new applications
72The synchronous core of these languages is now well understood. But as with circuits, there are growing problems due to the scaling up of applications. In software development, we need to break out of the framework of the compact systems usually dealt with in that domain (flight control, car braking, etc.), so as to address more distributed applications requiring the linking up of several systems. For example, as I have already mentioned, the correction of a car’s attitude when braking in a corner requires the cooperation of the locally synchronous systems managing the steering, the suspension and the braking. This can be done using networks with guaranteed transmission times (TTP, FlexRay, etc.), therefore precisely coordinating several timescales. The cooperation between cars circulating on the same road will be far more complex, as large-scale communication will no longer be synchronous. As with multi-clock circuits, globally asynchronous locally synchronous (GALS) systems are then needed, which are far more difficult to program and to verify.
73There are also interesting applications outside of the framework of embedded technology. Reactive languages, developed by Frédéric Boussinot et al., integrate synchronous concepts in classical programming languages, with a far more dynamic approach than that of synchronous languages. The HipHop language, which I will present in 2014, is a version of Esterel adapted to the orchestration of web applications and rendered dynamic by borrowing techniques from reactive languages. I suspect that the original concepts of synchronous languages will be incorporated into many other projects where the treatment of time and events proves important.
Continuous time/discrete time simulation
74Continuous control systems are most often described by systems of ordinary differential equations, augmented by threshold clauses to change equations, depending on discrete endogenous or exogenous events. Consider the system in Figure 5, with L the left ball, R the right ball, and W the wall, with the respective positions l, r and w. At the start, L is thrown to the right with velocity v, whereas R and W are immobile. At that moment, the equations for L and R’s positions l, r and velocities l’, r’ are simply written as follows:
l : init l0
l’ = v
r : init r0
r’ = 0
75But L’s uniform movement can only continue until it collides with R, which produces an event endogenous to the system. At that moment, the two balls exchange their velocities, with L stopping and R moving towards the right with velocity v. The equations become the following:
l : init l0
l’ = 0
r : init r0
r’ = v
76Rather than managing two systems of equations, it is preferable to work with a single system that mixes the discrete and the continuous, by introducing a temporal operator akin to those of Esterel to detect the collision:
l : init l0
l’ = v every [ l up r → last(r’) ]
r : init r0
r’ = v every [ l up r → last(l’) ]
77The operator “x up y” constantly compares x and y and triggers the action of the operator “every” as soon as x ≥ y, with the latter then modifying the equation to be executed. At transition instants, the temporal operator “last” returns the last known value of its argument, in other words its left limit as a continuous function of time.
78Let us now take the wall W into account. When r = w, R touches the wall, which inverts its velocity and sends it back towards the left at velocity − v. A collision with L ensues, which again triggers the exchange of velocities: R stops and L departs towards the left with velocity − v. The complete system is therefore written as follows, now with two clauses for the operator “every” in the equation of r’:
l : init l0
l’ = v every [ l up r → last(r’) ]
r : init r0
r’ = v every [ l up r → last(l’),
r up w → – last(r’) ]
Classical simulation
79The foundation of classical simulation is the integration of these equations with respect to the time variable. But neither the continuum nor real numbers exist in informatics (except for a notion of computable real that is not relevant here), and everything must be discretized. Real numbers must be approximated by floating-point numbers, which is far from simple.27 I will ignore the associated difficulties and focus on those linked to how the computation progresses through the time of the simulated phenomenon.
80The basic idea is the discretization of time. The integration is approximate, for example à la Riemann or using more sophisticated interpolation methods, moving forward in time steps at a scale suited to the desired approximation. For my example, it is trivial: at each time step ε, the position of a ball with velocity v is incremented by εv. After departing, L therefore moves in steps of εv, with an “l up r” test at each step. When this test becomes true, the equations are modified as indicated and the simulation continues, making R advance towards the wall, etc.
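For concreteness, here is a minimal sketch of this classical discretized simulation, written in Python rather than in a synchronous formalism; the step eps, the positions l0, r0, w and the velocity v are arbitrary values chosen only for illustration, and the direction guards on the tests are a small precaution I add so that an event does not re-trigger on the step that follows it.

# A minimal illustrative sketch of the classical discretized simulation
# of the two balls and the wall; all numerical values are assumptions.
def simulate(l0=0.0, r0=5.0, w=10.0, v=1.0, eps=0.01, t_max=40.0):
    l, r = l0, r0              # positions of the left and right balls
    dl, dr = v, 0.0            # velocities: L moves right, R is immobile
    t = 0.0
    while t < t_max:
        if l >= r and dl > dr:     # "l up r": L collides with R
            dl, dr = dr, dl        # the balls exchange their velocities
        if r >= w and dr > 0:      # "r up w": R hits the wall
            dr = -dr               # the wall inverts R's velocity
        l += eps * dl              # Riemann-style integration step
        r += eps * dr
        t += eps
    return l, r

print(simulate())                  # L ends up moving back towards the left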
Problems at the margins
81Let us now take a look at what happens if R is stuck to the wall at the start, as in Figure 6. When L reaches R, the latter cannot go towards the right because of the wall, and L is immediately sent back towards the left. Conceptually, we can say that L stops by giving its velocity to R, which instantaneously touches the wall without even moving, inverts its velocity, and touches L again without moving, communicating its leftward velocity and stopping. There are therefore three events, simultaneous but causally ordered: the collision of L with R, R’s rebound off the wall, and R’s collision with L. This is exactly the phenomenon I described for the conceptually instantaneous synchronous propagation of communication in circuits and in Esterel.
82Classical systems struggle to deal with this problem, as they lack the notion of synchronous propagation. Depending on the settings of the integration and computation mechanism, the behaviour can differ, and can even lead to ball R going through the wall, which is obviously absurd! No one concerned with the accuracy of behaviours and the precision of semantics can accept that the simulator’s behaviour should depend on internal settings that are difficult for the user to understand.
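The following Python sketch, under the same illustrative assumptions as above, shows one way of resolving this: within a single instant, the pending events are processed as a causally ordered cascade, in the spirit of synchronous propagation, before time is allowed to advance.

# A minimal sketch of causally ordered simultaneous events: within one
# instant, L's collision with R, R's rebound off the wall and R's
# collision with L are resolved in that order; names and values are
# illustrative assumptions.
def step_events(l, r, w, dl, dr):
    changed = True
    while changed:                  # causal cascade within one instant
        changed = False
        if l >= r and dl > dr:      # collision of L with R
            dl, dr = dr, dl
            changed = True
        if r >= w and dr > 0:       # rebound of R off the wall
            dr = -dr
            changed = True
    return dl, dr

def simulate_stuck(l0=0.0, w=5.0, v=1.0, eps=0.01, t_max=20.0):
    l, r = l0, w                    # R starts stuck against the wall
    dl, dr = v, 0.0
    t = 0.0
    while t < t_max:
        dl, dr = step_events(l, r, w, dl, dr)
        l += eps * dl
        r += eps * dr
        t += eps
        assert r <= w, "R must never cross the wall"
    return l, r

print(simulate_stuck())             # L bounces back; R never moves

With this hand-made treatment, R never crosses the wall whatever the step size; it is only an illustration of the idea, whose rigorous semantic foundation is described next.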
Non-standard semantics and superdense time
83A highly elegant solution was recently proposed28, based on the idea of drawing not on classical real analysis but on non-standard analysis, in which the ε’s are not reals but infinitesimals, smaller than any positive real number yet which can still be multiplied by rational numbers. We thus have 0 < ε < 2ε < x for any infinitesimal ε and any positive real number x. Non-standard analysis offers another way of discussing abstract synchronism: instantaneous transitions occur only in the infinitesimal realm, with time progressing not by real steps as in classical simulation but by infinitesimal ε steps. The paradoxes are resolved by the constraint that the non-standard function computed by the system must be standardizable, a simple mathematical condition that allows undesired behaviours to be rejected uniformly.
84On a practical level, new proposals for continuous/discrete synchronous languages distinguish between the continuous and the instantaneous through a type system, applying numerical integration methods to the continuous part and the methods of synchronous languages to the discrete part. These methods seem destined for a great future in the simulation of complex systems.
Formally proving programs
85One of the constant concerns of the lecture series will be the need to prove formally that the circuits and programs considered do meet their specifications. Where feasible, proof is far superior to simple testing, as it formally analyses all possible cases whereas testing only validates or invalidates what is tried out. Formal proof has recently made considerable progress, with three types of techniques:
Model checking, which earned Joseph Sifakis, Edmund Clarke and E. Allen Emerson the 2007 Turing Award;
Abstract interpretation, initially developed in France by Patrick Cousot and his team, which has been applied in several ways in industry29;
Mathematical proof assistance, with systems like B, HOL, Isabelle, and especially Coq.
86Model checking mainly checks finite models, potentially of very large size. It is particularly well suited to the verification of synchronous systems. It has benefited from the great progress in symbolic Boolean computation and in SMT (Satisfiability Modulo Theories) solving. It is systematically used to check the many transformations carried out by circuit design tools and, increasingly, to check circuits’ functional properties. It is also used for embedded software, for example for railway signalling and the management of track switching. Its applications are ever-expanding, including in bio-informatics.
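As a toy illustration of the principle rather than of an industrial tool, here is a Python sketch that exhaustively explores the finite state space of a small synchronous model, a modulo-8 counter with a reset input, and checks a safety property on every reachable state; the model and the property are assumptions made up for the example.

# A minimal sketch of explicit-state model checking on a finite model.
def step(state, reset):
    # Transition function of a tiny synchronous model: a modulo-8 counter
    # that returns to 0 whenever the reset input is present.
    return 0 if reset else (state + 1) % 8

def check(prop, init=0):
    # Worklist exploration of every reachable state under every input.
    seen, frontier = {init}, [init]
    while frontier:
        state = frontier.pop()
        if not prop(state):
            return False, state          # counterexample state found
        for reset in (False, True):
            nxt = step(state, reset)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None                    # the property holds everywhere

# Safety property: the counter value never exceeds 7.
print(check(lambda s: 0 <= s <= 7))

Industrial model checkers rest on the same exhaustive principle but represent huge state spaces symbolically instead of enumerating them one by one.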
87Abstract interpretation works by reducing infinite models to finite ones, using a formal framework rooted in the notion of Galois connection. It unifies and simplifies many previously disparate program analysis methods. It is used in the automotive and space industries, where it easily detects run-time errors such as the famous one that caused the explosion of Ariane 501, and in avionics, where it served to prove the absence of such run-time errors in the flight control software of the Airbus A380 – a considerable scientific feat.
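To give a flavour of the idea, here is a minimal Python sketch of abstract interpretation over the interval domain: concrete values are abstracted by intervals, interval arithmetic over-approximates the program’s operations, and a final range check flags a possible overflow, such as a conversion that no longer fits in 16 bits; the variable names and bounds are illustrative assumptions, not the actual Ariane or Airbus code.

# A minimal sketch of abstract interpretation with the interval domain.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def fits_int16(self):
        # Does every concrete value covered by the interval fit in 16 bits?
        return -32768 <= self.lo and self.hi <= 32767
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Abstract execution: the input is only known to lie within a range,
# yet the analysis covers all possible concrete runs at once.
velocity = Interval(0, 600)             # assumed input range
scaled = velocity * Interval(128, 128)  # fixed-point scaling
print(scaled, "fits in 16 bits:", scaled.fits_int16())

The analysis reports that the scaled value may not fit in 16 bits: an over-approximated but safe verdict, which is exactly what abstract interpretation delivers.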
88Proof assistants are used to automate the verification of logical proofs and to help users carry out these proofs. There have been some great moments in this field. In circuit applications, John Harrison (Intel) used HOL to prove the correctness of the Pentium Pro’s floating-point operators, following the Pentium’s famous division bug that cost Intel 475 million dollars. In the software domain, Xavier Leroy (INRIA) used Coq to develop a formally proven and reasonably efficient C compiler, which can have important implications for high-security systems. In the domain of operating systems, Gernot Heiser’s team (NICTA, Sydney) built an efficient system kernel, most of the crucial properties of which were proven in Isabelle; this kernel is actually used on a large scale in telephones. In mathematics, Georges Gonthier and Benjamin Werner used Coq to prove the famous four-colour theorem in 2006. In 2012, Georges Gonthier and his Microsoft/INRIA team completed the formal proof in Coq of the famous Feit-Thompson theorem on the characterization of finite groups of odd order, a monument of twentieth-century mathematics whose original proof runs to no less than 250 dense pages. To do so, Gonthier developed a new mathematical engineering devoted to managing large proofs, the mathematical counterpart of the software engineering devoted to managing large programs. The phenomenal scope of this work is literally overhauling the field, and applications to circuits and software programs are promising.
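To give the smallest possible flavour of what such a machine-checked proof looks like, here are two statements whose proofs a proof assistant verifies mechanically, written here in Lean purely for illustration; the systems mentioned above each use their own language and far larger developments.

-- A machine-checked proof of a tiny logical fact (modus ponens).
example (P Q : Prop) (hp : P) (hpq : P → Q) : Q := hpq hp
-- A machine-checked arithmetic fact, accepted by pure computation.
example : 2 + 2 = 4 := rfl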
89The aim of my research is to understand how formal verification techniques, particularly those associated with Coq, can be adapted to programs handling time and events.
Conclusion
90In this overview I have shown the wealth and wide range of problems linked to time and events in informatics, and I have presented new technical visions of programming, semantics and implementation. The results achieved with these new approaches are far superior to those that would be obtained with classical methods, which are barely able to talk about events and even less so about time. The lectures of the next few years will first delve further into the concepts, techniques, and reasoning of the synchronous approach, and will then address two subjects that currently remain wide open: first the difficulties raised by the explosion of the size of hardware and software systems, which require a better understanding of the relationship between synchronism and asynchronism, and second, the formalisms, logical techniques, and algorithms allowing these systems’ key properties to be proven. It will also be interesting to see if the concepts developed in this research will prove relevant in very different fields like biology and the neurosciences.
Footnotes
1 See my Inaugural Lecture: Pourquoi et comment le monde devient numérique, Paris, Collège de France/Fayard, 2008 as well as the videos of the 2007/2008 lecture series, available from the Collège de France website: http://www.college-de-france.fr/site/gerard-berry.
2 Government White Paper on Digital Technology, 28 February 2013: http://www.gouvernement.fr/sites/default/files/fichiers_joints/feuille_de_route_du_gouvernement_sur_le_numerique.pdf
3 Informatics and Internet Diploma (Brevet informatique et Internet, B2I) for students; Informatics and Internet Certificate (Certificat informatique et Internet, C2I) for teachers.
4 L’enseignement de l’informatique en France: il est urgent de ne plus attendre, Académie des sciences, May 2013: http://www.academie-sciences.fr/activite/rapport/rads_0513.pdf.
5 Informatics Education: Europe Cannot Afford to Miss the Boat: http://www.informatics-europe.org/news-and-events/157-joint-informatics-europe-acm-europe-report-on-informatics-education-in-schools.html.
6 Computing in Schools: Shutdown or Restart?: http://royalsociety.org/education/policy/computing-in-schools/report.
7 After I had made this choice, my friend Jean Vuillemin showed me the title of his teaching at the École polytechnique in the nineties: “Algorithms, Numbers, Machines and Languages”. This is what pataphysics calls plagiarism by anticipation!
8 Serge Abiteboul, Data Science: From First-Order Logic to the Web, Paris, Collège de France, coll. “Leçons inaugurales”, no. 226, 2013: http://books.openedition.org/cdf/506
9 Martin Abadi, La sécurité informatique, Paris, Collège de France/Fayard, coll. “Leçons inaugurales”, no. 219, 2011: http://books.openedition.org/cdf/421.
10 G. Berry, Penser, modéliser et maîtriser le calcul informatique, Paris, Collège de France/Fayard, coll. “Leçons inaugurales”, no. 208, 2009. A video of the Inaugural Lecture is available at: http://www.college-de-france.fr/site/gerard-berry.
11 With the exception of the Jaipur sultanate in India, home to the fabulous Yantra Mandir observatory (~ 1730) with a sundial with a precision of half a second, and where the sultan had ordered the unification of time in all the sultanate’s villages.
12 G. Berry, “Manifeste pour la réhabilitation du pavillon des poids et mesures”, Viridis Candela, correspondancier du Collège de ’pataphysique, no. 1, 2007, p. 33-60: http://www-sop.inria.fr/members/Gerard.Berry/Pataphysique/BerryPoidsEtMesures.pdf
13 Especially in the South American version “mañana por la mañana”.
14 See for example Z. Manna and A. Pnueli, The Temporal Logic of Reactive and Concurrent Systems: Specification, New York, Springer, 1992.
15 “OPERA Experiment Reports Anomaly in Flight Time of Neutrinos from CERN to Gran Sasso”, CERN Press Office, 23 September 2011, updated on 8 June 2012 due to the experimental error.
16 Jordi Cortadella, “Elastic Circuits, Blending Synchronous and Asynchronous Technologies”, seminar of 21 May 2013 (http://www.college-de-france.fr/site/gerard-berry).
17 See the last two seminars of the 2012-2013 lecture series delivered on 4 June 2013: Philippe Manoury, “La musique du temps-réel” and Arshia Cont, “Musique informatique: de la synchronisation interprète/électronique à la partition algorithmique” (http://www.college-de-france.fr/site/gerard-berry).
18 This paradox is actually not really one: it simply shows that it is not coherent to cut up time and space differently.
19 Special thanks to Jean Krivine for this excellent example.
20 See the video of the lecture “Circuits and 2-Adic Numbers – A New Vision of Time-Space Exchange”, 9 April 2013, available at: http://www.college-de-france.fr/site/en-gerard-berry/course-2013-04-09-10h00.htm
21 J.-Y. Girard, Y. Lafont and P. Taylor, Proofs and Types, Cambridge University Press, 1990.
22 Safety Critical Application Development Environment.
23 Synchronous methods were actually developed independently for circuits and for embedded software; their conceptual similarity was not noticed until the nineties.
24 For a more complete description of the language and its possibilities, see videos of the lectures on Esterel (16 and 23 April, 14 and 21 May 2013) available at: http://www.college-de-france.fr/site/gerard-berry.
25 D. Potop-Butucaru, S. A. Edwards and G. Berry, Compiling Esterel, Springer, 2007.
26 It now belongs to the US company Synopsys, which has decided to put its commercial exploitation on hold.
27 The floating-point versions of commonly used functions, like the trigonometric functions, have not even been standardized yet.
28 A. Benveniste, T. Bourke, B. Caillaud and M. Pouzet, “Non-standard Semantics of Hybrid Systems Modelers”, Journal of Computer and System Sciences, vol. 78, no. 3, 2012 (special issue in honour of Amir Pnueli), p. 877-910, doi: 10.1016/j.jcss.2011.08.009.
29 P. Cousot, “Interprétation abstraite”, Technique et science informatiques, vol. 19, no. 1-2-3, Paris, Hermès, 2000, p. 155-164.