Media Archeology and Data Traceology: For a New Digital Episteme
At a time of widespread automation and what Chris Anderson announced in 2008 as the end of theory1 - in other words, the end of science, formerly grounded in the principle of causality but today largely dominated by statistics and the principle of correlation - it seems worthwhile, as part of a reflection on the architecture of memories, to explore how we might rethink the current conditions of the production of knowledge. To this end, we will focus on two approaches which ground knowledge in its organological conditions of production: a spatial and systemic dimension - that of the media archeology inspired by Friedrich Kittler - and a dimension emphasizing relations within the framework of data traceology, as explored in digital studies.2 Could the shift from one dimension to the other become the vector for a new form of digital hermeneutics? More generally, in what way does interpretation presently rely on our capacity to shift back and forth between the spatial dimension and the temporal one, as well as between automation and moving beyond automation, through imagination and improvisation? Beyond digital humanities and Kittler’s cultural studies (Kulturwissenschaft), digital studies focus on the epistemological changes to knowledge, skills, and ways of life brought about by digital technology, and on the new ecology of attention and new forms of reading/writing created by this new episteme. To illustrate this focus, the present study will draw on various research projects conducted at IRI (Institute for Research and Innovation) on annotation, categorization, and contributive editorialization.3
Media archeology
The media archeology inspired by the work of Friedrich Kittler4 proceeds by “sections,” strata, and a systematic unearthing of the technical structures that underlie media, from Greek writing - defined by the distinctive grammatization5 of the voice into twenty-four letters and most importantly seven vowels - to the gramophone, cinema, the typewriter, and eventually digital technology, which incorporates all of these modes of discretization and production and renders them fully calculable. Unlike McLuhan, who sees the medium as a continuation of the senses, Kittler starts from the medium as a “media-technical” support, a socio-technical organ, we might say using Stiegler’s terms, or a psycho-political organ, as Kittler suggests, following Lacan6 and Foucault.7 But Kittler also supplements this dimension with Alan Turing’s theory of computability and Shannon’s theory of information,8 both of which he considers the equivalent “of the symbolic as a syntax purified of all semantics, meaning, degrees of figuration.”9
Kittler, a pioneer of the “maker” mentality, thinks like a manufacturer of electronic synthesizers. He sees media through an organological lens, but without perceiving their pharmacological nature. His method, investigatory and narrative, fundamentally progressive and altogether Hegelian, nevertheless reveals the ambiguous effects of media, whether analog or digital. Photography can be a threat to writing, if we are not careful. For Kittler, Microsoft’s attempt to lay claim to this new world of digital images with its Corbis service (now overtaken by Google) attests to this. VCRs have inherited CCTV’s legacy: “but where the danger is, grows the saving power also,” writes Kittler quoting Hölderlin,10 which Heidegger would later consider the Gefahr11 at the heart of Gestell: technology as a revealing and perilous force. As Bertrand Gille12 and André Leroi-Gourhan13 did before him, Kittler explores the evolution specific to different media, their technical history, their “technogenesis,” rather than their dependency on humans, as McLuhan endeavors to do.14 Conversely, and this time in keeping with McLuhan, Kittler objects to an archeology of media based solely on contents, on a sociological or even on a humanistic approach. Kittler is highly aware of the fact that the history of media is primarily a history of the norms and standards that metastabilize technical evolution, in Simondon’s sense of the term, and that found the indispensable categories of an archeology which otherwise has no frame of reference. Drawing on Canguilhem and Foucault,15 he shows how norms are primarily those we give ourselves in order to divide, to delude ourselves, to mark the point at which mimesis begins. And yet the technical standard or format, today digital, incorporates and freezes this adaptive capacity to understand media in the age of algorithms. This especially applies if the format is not humanly comprehensible, as is the case with “black boxes,” digital processes made inaccessible by design or in order to protect patent rights. Kittler allows us to see just how crucial our capacity to grasp the image’s norms of production is to perception. Understanding the algorithm is equally important in the context of data traceology and hermeneutics. For Kittler, accessing the “real” is only possible through the intervention of technical mediation, which is both the condition of its disappearance and the condition of the magical and the religious. It is thus particularly vital that this mediation be understood, especially since the more desired it is, the faster it is unmasked, and the quicker a new medium is invented.16 This narrowing of the gap between fiction and reality, typical of cinema, is identified by Kittler, following Virilio,17 with war: it was Goebbels who encouraged the development of color movies,18 and it was Colt and Marey who designed the first cameras as guns.
From media to data?
According to Kittler, television represents a rupture in the process of “disappearance” and historical erasure of optical media. We move from a preponderance of imagination to the dominance of computation. We depart once and for all from a direct relationship to perception and enter a symbolic regime no longer driven by the image but by the signal. A signal which entirely corresponds to the functions Shannon describes for the electronic as opposed to the mechanical realm: storage (provided we include the VCR/magnetic tape as part of TV’s technical apparatus), transmission (via channels), computation (by way of signal/image and image/signal converters, whose first iteration, the 1883 Nipkow disk, is described by Kittler, in a genuinely Merleau-Pontian vein, as a “mind’s eye”19). This signal linearly conveys the image’s pixels, which constitute data in the sense we will define below, data within a technical system Kittler already described as closed, entropic, and designed for worldwide surveillance.20 But while Kittler describes the beginnings of computer science as a reduction of the four dimensions of reality to zero, he also acknowledges that graphical interfaces or 3D are progressively recreating new visual mediations. He does not, however, grasp the full extent to which the mass processing of data introduces an algorithmic governmentality21 that bypasses perception and judgment, in many cases replaces the human-machine interface with hidden processes of recommendation and even decision-making, and thus calls for new means of interpretation in both the semantic and social fields.
A new ecology of attention
In his course on optical media, Kittler always links the systemic dimension of media with the political or religious dimension but does not directly focus on the psychological or social consequences of such systems. And yet media and data are today effecting a total reconfiguration of our faculties of perception and judgment, producing a new ecology of attention, in Yves Citton’s terms,22 a new libidinal economy, in Bernard Stiegler’s.23 Attentional processes are now entirely mediated by the data industry within closed systems, the famous “black boxes” that still require further analysis, as Dominique Cardon suggests.24 In this context, the archeology of the media that we perceive implies a traceology of data that we do not perceive directly. This is the meaning of the double reconfiguration called for by Yves Citton: that of our attentional ecology and that of the “mediarchies” and “datarchies” that render our democracies obsolete. Drawing on Naess and Guattari (biophysical, sociopolitical, and mental ecology), Citton advocates a qualitative rather than behavioral approach, hence one focused more on networks of actors than on the individual alone. The point is to develop a qualitative archeology of media along the lines of the work done by Kittler, Peters, or Parikka and of Stiegler’s general organology, one that emphasizes the analysis of technical facts as well as technical tendencies, counter-tendencies, and even technical imaginaries. This media archeology is based on digital inscriptions, archives, and art, just as a data traceology must be based on the traces of our browsing, on their categorization and modeling, and must distinguish, when at all possible, between those traces that are voluntarily inscribed and those that are not.
Data traceology and traceability
Traceology is originally a scientific method tied to archeology. Its purpose is to determine the function of tools by studying the traces they leave behind. In the present context, traceology is an attractive method for understanding algorithms through analyses of the data they produce. In an ideal world where open source is the norm, analyzing the codes of algorithmic systems would follow an exclusively archeological method or one inspired by Kittler’s approach to analyzing media systems. But because of the multiplication of black boxes and of the intricacy of algorithmic processes, it is often necessary, in order to access algorithmic content and produce knowledge, to proceed via “reverse engineering,”25 i.e., to recompose or at least to make hypotheses about the tool by observing its traces. Digital traceology also implies data traceability, in other words, the possibility, if necessary, to track down the source and to identify, if possible, its human author or algorithmic producer. The assumption is that such a traceology should open possibilities for developing new interpretative tools (the focal point of the French National Research Agency’s program Episteme26) as well as for rethinking the architecture of the web itself, with a view to what we at IRI Centre Pompidou call a hermeneutic and negentropic web. In short, a web open to interpretation, to exploration, to diversity, and to the production of knowledge against the general entropy of a web exclusively dedicated to calculation.
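By way of illustration, here is a minimal, hypothetical sketch of such trace-based analysis: an opaque scoring function is probed with controlled input variations, and the traces it leaves (its outputs) ground hypotheses about what it computes. The black_box function and its weights are pure stand-ins for an inaccessible algorithm, not any actual system.

```python
# A minimal sketch of trace-based "reverse engineering": probe an opaque
# function with controlled input variations and record the traces (outputs)
# to form hypotheses about what it computes, without access to its code.

def black_box(profile: dict) -> float:
    # Inaccessible in reality; defined here only so the sketch is runnable.
    return 2.0 * profile["clicks"] + 0.5 * profile["dwell_time"]

def probe(baseline: dict, feature: str, delta: float) -> float:
    """Vary one feature, hold the rest constant, and measure the response."""
    variant = dict(baseline)
    variant[feature] += delta
    return black_box(variant) - black_box(baseline)

baseline = {"clicks": 10.0, "dwell_time": 30.0}
traces = {feature: probe(baseline, feature, 1.0) for feature in baseline}
# traces == {'clicks': 2.0, 'dwell_time': 0.5}: the observed sensitivities
# support a hypothesis about the hidden weighting of each trace category.
print(traces)
```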
Towards the hermeneutic web
The need for an architecture of the web adapted to traceability and thus to general interpretation is not new. It is, in a way, a return to the architectural principles of the web, which was originally designed as a humanly interpretable decentralized structure but which, because of the massive amount of data to process, evolved toward increasing automation and, finally, toward centralization and privatization within a paradigm of platforms.
In 1989, starting with the first research conducted at CERN by Tim Berners-Lee and Robert Cailliau, and later on with the release of the World Wide Web in April 1993, the main inspiration was still the model of the human-accessible hypertext, formalized as HTTP (HyperText Transfer Protocol) and writable in XHTML (Extensible HyperText Markup Language). This model’s success led to a growing process of automation: on the one hand, the use of the XML format allowed for the tagging of complex resources (trees, enhanced text) in representation languages which could be specific (MathML, MusicXML, TourML) or more generic (SMIL, SVG, X3D, JSON); on the other hand, there was a rise in importance of the semantic web based on RDF graphs in order to automate categorization, especially between machines, to move from the tree to the graph, from the thesaurus to ontology, and eventually (definitively?) to leave behind the document-based web for a data-based web.
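To make the move “from the tree to the graph” concrete, here is a minimal sketch, using the rdflib library, of a resource described not as a nested document but as a set of RDF triples in a graph. The URIs and vocabulary choices are illustrative, not actual identifiers.

```python
# A minimal sketch of RDF description: the same resource, expressed as graph
# edges (triples) rather than as a nested XML tree. Example URIs only.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

EX = Namespace("http://example.org/")
doc = URIRef("http://example.org/lectures/42")

g = Graph()
g.bind("dcterms", DCTERMS)
g.add((doc, RDF.type, EX.Lecture))                      # typed node
g.add((doc, DCTERMS.title, Literal("Optical Media")))   # literal property
g.add((doc, DCTERMS.relation, EX.annotation_7))         # edge to another node

print(g.serialize(format="turtle"))
```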
Alongside this process of semantic automation, it is striking to see today’s increasing need for interpretive and annotative formats, such as Open Annotation and its recently updated version, Web Annotation,27 which make it possible not only to follow the circulation of resources but also to document processes of automatic categorization and to perform hermeneutic operations on the entirety of the web’s resources. The format is part of a movement toward linked open data, which makes it possible, for instance, to typologize relationships in a graph, to identify the target of the annotation, to pinpoint fragments of images, texts, or videos (W3C Media Fragment format28), to produce multiple-description annotations, and to annotate not only the resource but the relationship to the resource, in other words, the motivation or intention of the annotation, which partly connects with the work on meta-categories we will examine below.
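A minimal sketch of what such an annotation looks like may help here: the vocabulary (@context, motivation, FragmentSelector) follows the published Web Annotation and Media Fragments models, while the identifiers and URLs are placeholders.

```python
# A minimal sketch of a W3C Web Annotation targeting a video fragment.
# Ids and URLs are placeholders; only the vocabulary follows the standard.
import json

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "http://example.org/anno/1",
    "type": "Annotation",
    "motivation": "questioning",          # the relation to the resource
    "body": {
        "type": "TextualBody",
        "value": "How does this relate to Nelson's transclusion?",
    },
    "target": {
        "source": "http://example.org/videos/lecture.mp4",
        "selector": {                      # W3C Media Fragments addressing
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "t=120,145",          # seconds 120 to 145
        },
    },
}
print(json.dumps(annotation, indent=2))
```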
Fig. 1: Diagram presenting the W3C Web Annotation standard. Screenshot.

Open Annotation is not unrelated to a historical foundation that predates Berners-Lee’s web: namely, the Xanadu project29 designed by the inventor of hypertext, Ted Nelson, who has consistently decried the direction taken by XML and even by SGML. The Xanadu project, as Nelson described it in 1960, was part of a vision of transpublishing in which publications could simply and freely reuse textual elements whose link to their source remained unbreakable. Nelson’s hypertext was already an interpretive tool.
The economic logic of the contemporary web clearly no longer follows the principles of reciprocity and traceability nor the focus on granularity suggested by Nelson; above all, it is now completely detached from the document model and is instead based on traces and on the data those traces produce. The hyperlink as a new form of human writing is currently under major threat: we depend on machines to make connections in our place. Certain sites or social networks even make it impossible to link to anything outside their own web pages.30 However, the concern with the intelligibility and manual publishing of links continues to motivate the development of online forms of critical editorialization, and especially so because this critical web will take shape alongside new forms of social networking such as those suggested by Yuk Hui and Harry Halpin.31 This conception of the social network, deeply attached to a specific way of working (collaborative, because it is grounded in group work, and contributive because it takes into account individual input), must involve a complete reinvention of the web. The “web we want,”32 in this view, cannot only be a semantic web, a quantitative web; it must be a hermeneutic web,33 using calculation for social purposes and putting front and center the encounter of its users’ singular interpretations.
The redecentralization and territorialization of the web
The redecentralization of the internet and especially of the web is a political issue and a matter of empowerment.34 It is an issue of redecentralization because the internet before the hegemony of platforms presupposed not only a distributed architecture but also a decentralized and symmetrical organization. In the Nextleap project,35 the theme of network decentralization has mobilized researchers from numerous disciplines, in particular computer science, economics, sociology, and law. An epistemological approach is also called for, since the current centralization of data on platforms is increasing the opacity of categorization processes and making unintelligible and uninterpretable such vital operations as pre-categorization, predictive analysis, and profiling, as well as the evaluation of the relevance of data processing, the tangled relationship of data processing to practice, and the impact of data processing on self-perception.36 In the face of these threats, several projects have sought to form encrypted and protected spaces of communication (Tor), to design alternative architectures to cloud computing (the Wuala distributed storage system), or to develop distributed architectures such as blockchain in order to validate transactions (the Bitcoin model) as well as property titles and activities in general.
All of these decentralized architectures rely on trust37 insofar as they engage contributors in service availability and mutual dependence with regard to other contributors rather than reliance on a trusted third party, currency, or specific instrument. The blockchain itself is a trust-based infrastructure reliant on computation and, as such, its entropic effects must not be ignored in terms of both energy and environmental issues and information theory.38 Another objection to these decentralized systems is that, paradoxically, they present greater security issues than centralized systems. We constantly have to find a compromise between autonomy and security and, from a political point of view, between transparency and secrecy, a compromise that is a constitutive part of democracy.
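The principle of trust grounded in computation can be illustrated by a minimal hash-chain sketch: each block commits to its predecessor through a hash, so tampering with any past entry invalidates every later one. Real blockchains add consensus, signatures, and the costly proof-of-work whose entropic effects are at issue here; this sketch shows only the chaining itself.

```python
# A minimal sketch of computational trust: each block commits to its
# predecessor via a hash, so any edit upstream breaks every later link.
import hashlib
import json

def make_block(data: str, prev_hash: str) -> dict:
    payload = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {**payload, "hash": digest}

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob", chain[-1]["hash"]))
chain.append(make_block("bob pays carol", chain[-1]["hash"]))

# Verification recomputes each link rather than trusting a third party.
for prev, block in zip(chain, chain[1:]):
    assert block["prev"] == prev["hash"]
```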
Contributive technologies and new episteme
If we imagine ourselves in a hyperconnected world, our relationship to knowledge changes entirely. Value is less and less relative to exchange and to the use of information, in other words to what can be digitized and automated, and increasingly linked to the practice of data ecosystems that foster the production of a form of knowledge that cannot be automated because it is grounded in interpersonal relations, in sharing and publishing.
Contributive categorization and transindividuation
Our first empirical experiment with contributive categorization protocols emphasizing transindividuation39 began with a system of classroom or meeting note-taking, first developed with the software Lignes de temps40 in 2009 and later with the application Polemic-tweet,41 built on the Twitter API, in 2010. The system involved three steps: 1) a presentation of the protocol that explained to contributors that their notes, in this case their tweets, would later be published and synchronized with the video recording of the class; 2) the use of the live participation interface, which allowed contributors to tag the tweets with four color-coded categories: green/to remember, red/to discuss, yellow/references, blue/questions; 3) the posting of the video recording indexed to the tweets, which made it possible to directly consult the sequences to be remembered or discussed, the references or questions, and to use an intravideo search engine based on the tweets’ contents. The system was the object of a detailed analysis in Samuel Huron’s dissertation (IRI-INRIA), whose results, based on a sample of 27 events annotated by 1,012 contributors, showed that the protocol was used at a rate of 40%.42
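The indexing step can be sketched as follows, assuming an illustrative marker syntax rather than Polemic-tweet's actual one: each tweet carries a category marker and a timestamp relative to the recording, so the stream of notes becomes an index into the video.

```python
# A minimal sketch of tweet-to-video indexing with the four color-coded
# categories. The "#color" markers are illustrative assumptions.
CATEGORIES = {
    "#green": "to remember",
    "#red": "to discuss",
    "#yellow": "reference",
    "#blue": "question",
}

def index_tweets(tweets: list[tuple[float, str]]) -> dict[str, list[float]]:
    """Map each category to the video timecodes where it was used."""
    index: dict[str, list[float]] = {label: [] for label in CATEGORIES.values()}
    for seconds, text in tweets:
        for marker, label in CATEGORIES.items():
            if marker in text:
                index[label].append(seconds)
    return index

tweets = [(83.0, "#red is correlation really the end of causality?"),
          (410.5, "#yellow see Kittler, Optical Media")]
print(index_tweets(tweets))  # timecodes to jump to in the recording
```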
In 2013, to take the individuation process one step further, we implemented a protocol for contributive annotation of Bernard Stiegler’s online philosophy course.43 Following the workshops conducted by Paul-Émile Geoffroy, a protocol with four meta-categories was first applied to the written notes and then to their synchronization with the video: one annotation indicating paraphrasing and understanding (code green); another for questions and confusion regarding an explanation (code red); another for commentary and free interpretation (code blue); and the last for descriptive indexation of the content (keywords in yellow). The term “meta-category” is used because the category does not directly refer to the annotated content but to the meaning of one’s contribution. Subsequently, the mere observation of co-occurrences of these meta-categories at a given moment of the course (Fig. 2) revealed possibilities for discussion between students and, based on this, a recommendation engine that we planned to add would be able to suggest interactions between participants.
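The underlying observation can be sketched as follows; the recommendation engine itself remained a plan, and the window size and pairing rule below are assumptions for illustration only.

```python
# A minimal sketch of meta-category co-occurrence: annotations are grouped
# into time windows, and a window where one student signals confusion (red)
# while another signals understanding (green) suggests a discussion pairing.
from collections import defaultdict

Annotation = tuple[float, str, str]  # (time in seconds, student, meta-category)

def candidate_pairs(annotations: list[Annotation], window: float = 60.0):
    bins: dict[int, list[Annotation]] = defaultdict(list)
    for t, student, category in annotations:
        bins[int(t // window)].append((t, student, category))
    for group in bins.values():
        confused = {s for _, s, c in group if c == "red"}
        fluent = {s for _, s, c in group if c == "green"}
        for a in confused:
            for b in fluent - {a}:
                yield (a, b)  # suggest that b discuss this passage with a

annotations = [(120.0, "lea", "red"), (135.0, "marc", "green")]
print(list(candidate_pairs(annotations)))  # [('lea', 'marc')]
```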
The categorization process is not limited to audiovisual material. It is currently being implemented, within the framework of the National Research Agency’s project Episteme, for the annotation system hypothes.is44 and for the annotation of photographs as part of the project IconoLab.45 The goal is to measure the quality of content by way of peer review, and especially its degree of reliability and relevance. The meta-categories used for IconoLab are: interesting, questionable, needs a reference, needs more contributors, needs an expert.
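As a rough illustration, such peer tags can be aggregated into an interpretable profile; the aggregation rule below is an assumption for the sketch, not IconoLab's actual scoring.

```python
# A minimal sketch of peer-review aggregation over the IconoLab
# meta-categories. The "reliable" rule is an illustrative assumption.
from collections import Counter

TAGS = {"interesting", "questionable", "needs a reference",
        "needs more contributors", "needs an expert"}

def review_profile(tags: list[str]) -> dict:
    counts = Counter(t for t in tags if t in TAGS)
    flagged = counts["questionable"] + counts["needs a reference"]
    return {"counts": dict(counts),
            "reliable": counts["interesting"] > flagged}

print(review_profile(["interesting", "interesting", "questionable"]))
```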
The general aim of the Episteme project is to test various categorization protocols in contributive systems used by researchers and to carry out a traceology of one’s own contribution (bookmarking) as well as of the contributions of other participants based on a discussion thread. Discussions might focus on the categories used, shared, or debated in a contributive process whose changes are tracked over time, currently with the GitHub system.46
Fig. 2: Display of interpretative convergences and divergences during a course on the pharmakon. Screenshot.

Fig. 3: Reliability and relevance tags on IconoLab. Screenshot.

Contributive editorialization
The editorialization formats developed at IRI have all inherited a generalized online addressing function from the video annotation tool Lignes de temps: each phase of the software corresponds to a URL that ensures traceability and enables traditional hypertext editing functions such as the insertion of a URL for a video within a text or footnote. This online addressing function, which is linked to the various points of the software’s history and to the traces left by users, gives access to new formats: no longer to hypertext alone but also to hypervideo, through the combination of URLs in formats such as HashCut,47 which continues the amateur practice of mashups48 by providing video clips composed of sequences indexed to the same category (or hashtag) and retaining the links to their original sources. This format helps develop an audiovisual narrative based on which one can return to the films from which the segments are taken. Library users can also create mashups when consulting an audiovisual collection. Their creations become entryways to the collection.
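The principle can be sketched in a few lines: segments indexed to the same hashtag are ordered into a playlist of media-fragment URLs, so each clip keeps its link back to the source film. The annotation structure below is illustrative, not HashCut's actual format.

```python
# A minimal sketch of the HashCut principle: a hypervideo is a playlist of
# media-fragment URLs, so the link to each source is never broken.
def hashcut(annotations: list[dict], tag: str) -> list[str]:
    """Return ordered media-fragment URLs for every segment carrying `tag`."""
    clips = [a for a in annotations if tag in a["tags"]]
    clips.sort(key=lambda a: (a["source"], a["start"]))
    return [f'{a["source"]}#t={a["start"]},{a["end"]}' for a in clips]

annotations = [
    {"source": "http://example.org/film1.mp4", "start": 12, "end": 34,
     "tags": ["#pharmakon"]},
    {"source": "http://example.org/film2.mp4", "start": 5, "end": 18,
     "tags": ["#pharmakon"]},
]
print(hashcut(annotations, "#pharmakon"))
```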
As part of a collaboration with the University of Tokyo, which was looking to develop new forms of hybrid reading, we designed a synthetic format based on mental mapping, reinforced with a function for categorizing the links between nodes, which may represent concepts or documents. With a tool called Renkan49 (“link” in Japanese), one can directly drag and drop any web resource (page, text fragment, video segment, photo, sound, etc.).50 This type of synthetic, schematic format always runs the risk of oversimplifying thought and emphasizing its computability as a form of formal ontology. We must be careful to preserve interpretation through diagrams; interpretation is at the core of a noetic loop constantly shifting between automation and de-automation.
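The data model described here can be sketched minimally: nodes stand for any web resource or concept, and the links themselves are categorized, so the map records interpretations rather than bare adjacency. The names and fields are assumptions for illustration.

```python
# A minimal sketch of a Renkan-like map: heterogeneous nodes, typed links.
from dataclasses import dataclass, field

@dataclass
class Map:
    nodes: dict[str, str] = field(default_factory=dict)  # id -> URL or label
    edges: list[tuple[str, str, str]] = field(default_factory=list)

    def add_node(self, node_id: str, resource: str) -> None:
        self.nodes[node_id] = resource

    def link(self, src: str, dst: str, category: str) -> None:
        self.edges.append((src, dst, category))  # the categorized link

m = Map()
m.add_node("kittler", "http://example.org/kittler-lecture.mp4#t=120,145")
m.add_node("gestell", "Gestell (Heidegger)")
m.link("kittler", "gestell", "illustrates")  # the interpretive gesture
```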
To illustrate this back and forth concretely, we should mention a final project, an ongoing experiment carried out with the French Ministry of Culture on the online portal Art History (Fig. 4).51 The starting point is a multifaceted search engine that makes it possible, from the same interface, to search a geographical map, a timeline, a list of topics chosen for teachers, or a tag cloud. Each search result (list of resources), every resource or tag, can be displayed in the knowledge map with its semantic links. Based on this automatically produced graph, teachers or pupils can add or delete links and enhance their map with resources from the portal or the web. In this case, we are indeed dealing with semantic graph editorialization, referring here to a new medium based, to borrow from Kittler’s vocabulary, on digital traces.
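One way to preserve this back and forth in the data itself, sketched below under assumptions that are not the portal's actual implementation, is to keep user edits as an overlay on the automatically produced graph, so that machine links and human interpretive corrections remain distinguishable.

```python
# A minimal sketch of automation and de-automation on a semantic graph:
# human additions and deletions are applied as an overlay on machine output.
Edge = tuple[str, str]

def edited_graph(automatic: set[Edge], added: set[Edge],
                 deleted: set[Edge]) -> set[Edge]:
    """Apply human edits on top of the automatically produced graph."""
    return (automatic - deleted) | added

automatic = {("Manet", "Impressionism"), ("Manet", "Cubism")}
deleted = {("Manet", "Cubism")}   # a pupil removes a spurious machine link
added = {("Manet", "Olympia")}    # and adds a resource from the portal
print(edited_graph(automatic, added, deleted))
```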
Fig. 4: Multifaceted search engine. Screenshot.

Fig. 5: Production of an editable graph. Screenshot.

Footnotes
1 https://www.wired.com/2008/06/pb-theory.
2 Bernard Stiegler, Digital studies: Organologie des savoirs et technologies de la connaissance (Limoges: FYP, 2014).
3 On the concept of editorialization as taken up in digital culture, see for instance Marcello Vitali-Rosati, On Editorialization: Structuring Space and Authority in the Digital Age (Amsterdam: The Institute of Network Cultures, 2018). (Translator’s note.)
4 Friedrich Kittler, Optical Media: Berlin Lectures 1999, trans. Anthony Enns (Malden, Mass.: Polity Press, 2010).
5 Sylvain Auroux, La Révolution technologique de la grammatisation (Liège: Mardaga, 1994).
6 Jacques Lacan, “Psychoanalysis and cybernetics or on the nature of language,” The Seminar of Jacques Lacan, Book II (New York and London: W.W. Norton & Co., 1991).
7 Michel Foucault, The Archaeology of Knowledge, trans. Alan Sheridan (New York: Routledge, 2002).
8 Peter Berz argues that for Kittler, the gramophone, cinema, and the typewriter are, in keeping with Lacan and Foucault’s point of view, the historical realization of the real, imaginary, and symbolic registers while, from Shannon’s technological point of view, the functions of storage, transmission, and computation allow us to grasp the medium as, respectively, image, writing, and number.
9 Friedrich Kittler, Optical Media: Berlin Lectures 1999, op. cit., 41.
10 Ibid., 27.
11 Martin Heidegger, The Question Concerning Technology and Other Essays, trans. William Lovitt (New York: Garland Publishing, 1977), 28.
12 Bertrand Gille, ed., The History of Techniques, Volume 1 & 2, trans. P. Southgate and T. Williamson (New York: Gordon and Breach, 1986).
13 André Leroi-Gourhan, Milieu et Techniques (Paris: Albin Michel, 1945), 30.
14 Marshall McLuhan, The Medium is the Massage (London: Penguin Books, 1967), 56.
15 Friedrich Kittler, Optical Media: Berlin Lectures 1999, op. cit., 37.
16 Ibid., 101.
17 Paul Virilio, War and Cinema, trans. Patrick Camiller (London & New York: Verso, 1989).
18 Friedrich Kittler, Optical Media: Berlin Lectures 1999, op. cit., 202.
19 Ibid., 210.
20 Ibid., 225.
21 Antoinette Rouvroy and Thomas Berns, “Algorithmic Governmentality and Prospects of Emancipation,” Réseaux 177: 1 (2013): 163-196.
22 Yves Citton, The Ecology of Attention, trans. Barnaby Norman (Malden, Mass.: Polity Press, 2016).
23 Bernard Stiegler, Symbolic Misery, trans. Barnaby Norman (Malden, Mass.: Polity Press, 2014).
24 Dominique Cardon, À quoi rêvent les algorithmes. Nos vies à l’heure des big data (Paris: Seuil, 2015).
25 Antoine Mazières, “Unifying Properties of Reverse Engineering across Disciplines”: http://cri-paris.org/team/antoine-mazieres-2.
26 https://projet-episteme.org.
27 https://www.w3.org/annotation/ (participants include: INRIA, Hachette, Hypothes.is, Institut Télécom, Library of Congress, Pearson, Sony, Stanford).
28 https://www.w3.org/2008/WebVideo/Fragments/.
30 http://www.linuxjournal.com/content/whats-our-next-fight.
31 Yuk Hui and Harry Halpin, “Collective Individuation: the Future of the Social Web,” in Unlike Us Reader: Social Media Monopolies and Their Alternatives, eds. Geert Lovink and Miriam Rasch (Amsterdam: Institute of Network Cultures, 2013). https://www.iri.centrepompidou.fr/wp-content/uploads/2011/02/Hui_Halpin_Collective-Individuation.pdf.
32 An initiative of the World Wide Web Foundation: https://webwewant.org.
33 The discussion and proposition of this hermeneutic web was at the heart of Les Entretiens du nouveau monde industriel [The New Industrial World Forum] in December 2015: http://enmi-conf.org/wp/enmi15.
34 Cécile Méadel and Francesca Musiani, Abécédaire des architectures distribuées (Paris: Presses des Mines, 2015).
36 Tarleton Gillespie, “The Relevance of Algorithms,” Media Technologies, eds. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot (Cambridge, Mass.: MIT Press, 2014).
37 Bernard Stiegler, ed., Confiance, croyance, crédit dans les mondes industriels (Limoges: FYP, 2012).
38 On the concept of negentropy in digital environments, see Bernard Stiegler, Dans la disruption (Paris: Fayard, 2016).
39 Going beyond Simondon, Stiegler sees technology as the condition for the transmission of knowledge between the individual and the group and thus as a condition for the production of knowledge (transindividuation process).
40 http://ldt.iri.centrepompidou.fr.
42 Samuel Huron, Petra Isenberg, and Jean-Daniel Fekete, “PolemicTweet: Video Annotation and Analysis through Tagged Tweets,” 14th International Conference on Human-Computer Interaction (INTERACT) (Heidelberg: Springer, 2013), 135-152.
43 http://pharmakon.fr and Gwenaela Caprani’s article at: http://www.riotice.com/?p=2673.
45 https://iconolab.iri-research.org.
46 http://labs.iri-research.org/catedit.
47 http://ldt.iri.centrepompidou.fr/ldtplatform/hashcut/iri.
48 Similar practices and productions have been presented for the past three years by the Forum des images in Paris: http://www.mashupfilmfestivala.fr.
49 http://www.iri.centrepompidou.fr/outils/renkan.
50 http://www.musiquecontemporaine.fr/fr/cartesDeConnaissances.
Author
Vincent Puig
Director of cultural, research, and industrial relations at the Centre Pompidou since 1993 (head of the scientific committee at IRCAM, founder of IRCAM Forum, Vice-President Europe of the International Computer Music Community, Assistant Director of the Department of Cultural Development at the Centre Pompidou). In 2008, he cofounded with Bernard Stiegler the IRI in its current form as a research group dedicated to digital studies for an organological approach to skills and technologies of knowledge, which receives support from the Centre Pompidou, the CCCB, Microsoft, University of Tokyo, Goldsmiths College, Institut Mines-Telecom, France Télévisions, Orange, and Dassault Systèmes. He is Vice-President of the Services Evaluation Committee at the Cap Digital Competitive Cluster. He is a member of the board of Ars Industrialis (International Association for an Industrial Politics on Technologies of the Mind) and of the Scientific Council of the Mediterranean Institute for Advanced Research at Aix-Marseille (IMéRA), a foundation promoting interdisciplinarity in science and arts.