
The Digital Public Domain

Melanie Dulong de Rosnay and Juan Carlos De Martin

III. Developments and Case Studies

7. Open Knowledge: Promises and Challenges

Rufus Pollock and Jo Walsh



“Open knowledge” is material that others are free to access, reuse and redistribute. We are just beginning to witness its great potential. Increasing the visibility and discoverability of open resources is crucial if we are to encourage innovative recombination and reuse—hence the importance of open metadata for open knowledge. Componentization—or the atomization of a given resource into “packages”—has greatly contributed towards the ease with which software developers are able to reuse and build upon each other’s work. In this chapter, we argue that this kind of approach is becoming significantly more important in knowledge development. We will discuss some of the Open Knowledge Foundation’s work in these areas, with an emphasis on Public Domain Works and the Comprehensive Knowledge Archive Network (CKAN).1

The Open Knowledge Foundation is a not-for-profit organisation founded in 2004 with the aim of protecting and promoting open knowledge in all its forms.2 By “knowledge” we mean any text, data, image, multimedia and so on. By “open” we mean free for anyone to access, reuse and redistribute.3 Our work broadly breaks down into promoting the idea of open knowledge, doing research and policy work, and developing open knowledge projects and tools.

We have created the Open Knowledge Definition to provide a clear set of conditions for openness in relation to knowledge. This provides a common thread between material made available under different liberal licenses (such as Creative Commons Attribution and Attribution-ShareAlike, the GNU Free Documentation License, etc.), material in which rights have been waived (CC Zero, the Public Domain Declaration License, etc.), material that is in the public domain, and so on. Our “open knowledge” and “open data” web buttons are intended to publicize “openness” regardless of its legal basis. We have also drafted an Open Service Definition to fulfil the same function in relation to Software as a Service (SaaS). We aim to act as a hub and partner for the community of users and producers of open knowledge—facilitating discussion through our mailing lists, forums and annual conferences.

We produce material on legal, economic, and domain-specific issues relevant to open knowledge in the UK, EU and internationally.4 We help to initiate and maintain specific open knowledge projects:

  • Open Shakespeare is a complete collection of Shakespeare’s works with ancillary information, a concordance and an annotation tool;5
  • Open Economics is a data store for economic data, plus a visualisation tool;6
  • Open Text Book is a registry of textbooks that are fully open;7
  • Public Domain Works is a registry of artistic works that are in the public domain—it has since merged with the Open Library.8

Our KForge project is an open source system for managing software and knowledge projects—integrating tools such as a versioned storage system, a wiki, a tracker and a blog with the system’s own facilities for projects, users and permissions.9 We also run a free service called KnowledgeForge, which runs on the KForge software and currently houses a variety of open knowledge projects, from British parliamentary data to the works of Ivo of Chartres.10 The Comprehensive Knowledge Archive Network (CKAN) is a registry of open knowledge packages—we shall return to this later.

1. Databases of metadata and metadata for databases

Openness means cheaper and better access to knowledge; it also encourages richer ecologies of sharing and participation. For example, the DBpedia project extracts structured information from Wikipedia articles to allow complex querying. The W3C community project Linking Open Data is working hard towards inter-linking various open datasets. An increasing number of projects, such as Gapminder, Swivel and Manyeyes, seek to socialise the process of visualising and analysing (open) datasets. The “principle of many minds”, to which we often allude, states that “the most interesting thing to be done with your material will be thought of by someone else”.

In order to encourage this kind of collaboration it is essential that open knowledge resources are as visible and as easily discoverable as possible. Having more and better metadata is one way to facilitate this. Much metadata is of the kind we find in library card catalogues. For our Public Domain Works project we wanted to build up a large registry of metadata for artistic works, and then to use this metadata to determine which of these works are in the public domain, and hence open. Unfortunately we found that a lot of the material we were interested in was closed and prohibitively expensive. We were lucky that several databases from the BBC and from private enthusiasts were donated to us.

In 2007, the project merged into the Open Library project—the brainchild of Brewster Kahle, who also founded the Internet Archive—which aims to provide a very large versioned database of bibliographic data, and has had some donations of data from libraries in the US. We are keen to work with them, and any other interested parties, to create a series of “public domain calculators”11 which could be used to determine whether a given work is in or out of copyright in a given jurisdiction. While this constitutes a significant development in this area, unfortunately most bibliographic data is proprietary and cannot be reused or built upon by the technical community. As well as metadata for specific works, we can also have metadata for large collections of knowledge resources. We believe that this is integral to the greater reuse and recombination of knowledge resources.
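To make the idea concrete, here is a minimal sketch (in Python) of the calculation such a tool might perform, assuming a simplified “life plus N years” rule. The jurisdictions, terms and function names below are our own illustrative choices; a real calculator would need to handle many further cases (anonymous works, joint authorship, wartime extensions and so on).

    # Illustrative sketch of a "public domain calculator": given an author's
    # year of death and a jurisdiction, decide whether a work's copyright
    # term has expired. Terms are simplified "life plus N years" baselines
    # only; the table and function names are invented for this example.
    COPYRIGHT_TERMS = {
        "uk": 70,  # life + 70 years
        "de": 70,  # life + 70 years
        "ca": 50,  # life + 50 years (the rule at the time of writing)
    }

    def is_public_domain(death_year, jurisdiction, current_year=2010):
        """True if, under this simplified rule, the work has entered the
        public domain in the given jurisdiction."""
        term = COPYRIGHT_TERMS[jurisdiction]
        return current_year > death_year + term

    # An author who died in 1930, checked against the UK rule:
    print(is_public_domain(1930, "uk"))  # True: protection ran out in 2000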

2. Componentization and Open Knowledge

The collaborative production and distribution of data is gradually progressing towards the level of sophistication displayed in software. Data licensing is important to this progression, but is often over-examined. Instead we believe the crucial development is componentization, which can be defined as the process of atomizing (breaking down) resources into separate reusable packages that can be easily recombined. By focusing on the packaging and distribution of data in a shared context, one can resolve issues of rights, report-back, attribution and competition. Looking across different domains for “spike solutions”, we see componentization of data at the core of common concern.

For those familiar with the Debian distribution system for Linux, the initial ideal is of a “debian of data”. Through the “apt” package management engine, when one installs a piece of software, all the libraries and other programs it needs to run are resolved and downloaded with it. The packaging system helps one “divide and conquer” the problems of organising and conceptualising highly complex systems. The effort of a few makes reuse easier for many; sets of related packages are managed in social synchrony between existing software producers.
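The mechanism can be sketched in a few lines of Python: a recursive walk over a dependency index, so that asking for one package pulls in everything it transitively depends on. The index and package names below are invented for illustration.

    # Minimal sketch of the recursive dependency resolution at the heart
    # of a package manager such as apt. The index and names are made up.
    PACKAGE_INDEX = {
        "webapp": ["http-lib", "template-lib"],
        "http-lib": ["socket-lib"],
        "template-lib": [],
        "socket-lib": [],
    }

    def resolve(package, installed=None):
        """Return an install order in which every package appears after
        its dependencies (a depth-first walk of the dependency graph)."""
        if installed is None:
            installed = []
        for dep in PACKAGE_INDEX[package]:
            if dep not in installed:
                resolve(dep, installed)
        if package not in installed:
            installed.append(package)
        return installed

    print(resolve("webapp"))
    # ['socket-lib', 'http-lib', 'template-lib', 'webapp']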

3. Code got there first

In the early days of software, there was little arms-length reuse of code because there was little packaging. Hardware was so expensive, and so limited, that it made sense for all software to be bespoke and for little effort to be put into building libraries or packages. Only gradually did the modern complex, though still crude, system develop. These days, to package is to propagate, and to be discoverable in a package repository is critical to utility.

The size of the data set with which one is dealing changes the terms of the debate. Genome analysis or Earth Observation data stretches to petabytes. Updates to massive banks of vectors or of imagery scatter many tiny changes across those petabytes. At this volume of data, it helps to establish a sphere of concern—distributing the analysis and processing across many sets of users, in small slices.

Cross-maintenance across different data sets—rebuilding aggregated updates—becomes more important. With cleanly defined edges, something like a “knowledge API”, or many APIs, can be envisaged. Each domain has a set of small, concrete common information models. To distribute a data package is to distribute a reusable information model with it—to offer as much automated assistance in reusing and recombining information as possible.
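In practice such a “knowledge API” might amount to a small, machine-readable descriptor distributed with the data. The sketch below (in Python, with field and dataset names of our own invention, not any fixed standard) shows the idea: the descriptor declares the package’s fields, units and dependencies, so that consumers can check and recombine the data without reading its internals.

    # Hypothetical descriptor for a data package: the "knowledge API" is
    # the stable, documented edge of the package, i.e. its fields, types
    # and units. All names here are invented for illustration.
    DESCRIPTOR = {
        "id": "uk-gdp-quarterly",
        "version": "1.2.0",
        "license": "open",
        "schema": [
            {"name": "quarter", "type": "string"},   # e.g. "2006-Q3"
            {"name": "gdp", "type": "number", "unit": "GBP millions"},
        ],
        "depends": ["uk-price-deflator"],  # packages this one builds on
    }

    def conforms(row, descriptor):
        """Automated assistance in reuse: check a data row against the
        schema the package declares."""
        return all(field["name"] in row for field in descriptor["schema"])

    print(conforms({"quarter": "2006-Q3", "gdp": 340000}, DESCRIPTOR))  # True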

Licensing clarity is important because without it one is not allowed to recombine data sources (though there is still a large gap between being allowed and being able). Code has come a long way with the legal issues, and differently flavoured Free Software Definitions have gained a good consensus. The state of open data is more uncertain, especially looking at the different ways of asserting the right to access and to reuse data in different legislative regions. Open data practice should demonstrate value and utility, so that openness becomes a natural choice, not an imposition. The Open Knowledge Definition is an effort to describe the properties of truly open data.

4. Knowledge and data APIs

Open knowledge research projects are carried out in an atmosphere of “fierce collaborative competition”. The Human Genome Analysis project was a shining example: slices of source data were partitioned out to a network of institutions. Near-to-realtime information about the analysis results led to the redirection of resources and support to centres that were performing better. In the context of open media, people are also “competing to aggregate”, to compile not mere volume but more cross-connectedness into indexes and repositories of common knowledge.

Progress on the parts is easier to perceive than on the whole. In the parts, the provenance is clear—who updated data when and why, and how it was improved. The touchstones are to improve the reusability, accuracy and currency of data. Working with subsets of datasets, in the absence of significant hardware or bandwidth barriers, anyone can start to carry out and contribute analysis from home. Knowledge is given back into a publicly available research space, making it easier to build on the work of others. The more people who access and analyse data, the more value it has to everybody.

Open source software has shown that openness is complementary to commercial concerns, not counter to them. Just as the GPL encourages commercial reuse of code, open knowledge is of benefit to commercial activity. Given a reference system and a common interface, more “added value” applications can be built on a base layer. The ability to monitor and report in near-to-realtime on the basis of package development can be useful to more than the “funded community”; it provides real validation of a working (or non-working) business model.

5. What do we mean by componentization?

Componentization is the process of atomizing (breaking down) resources into separate reusable packages that can be easily recombined. It is the most important feature of (open) knowledge development, as well as the one which is, at present, least advanced. If you look at the way software has evolved, it is now highly componentized into packages/libraries. Doing this allows one to “divide and conquer” the organisational and conceptual problems of highly complex systems. Even more importantly, it allows for greatly increased levels of reuse.

The power and significance of componentization becomes very apparent when using a package manager (for example, apt-get for Debian) on a modern operating system. A request to install a single given package can result in the automatic discovery and installation of all packages on which that one depends. The result may be a list of tens—or even hundreds—of packages: a graphic demonstration of the way in which computer programs have been broken down into interdependent components.

6. Atomization

Atomization denotes the breaking down of a resource, such as a piece of software or a collection of data, into smaller parts (though the word atomic connotes irreducibility, it is never clear what the exact irreducible, or optimal, size for a given part is). For example, a given software application may be divided up into several components or libraries. Atomization can happen on many levels.

At a very low level, when writing software, we break things down into functions and classes, into different files (modules), and even group together different files. Similarly, when creating a dataset in a database, we divide things into columns, tables, and groups of inter-related tables.
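A toy illustration of this low-level atomization, with an invented bibliographic dataset split into two inter-related tables joined on a shared key:

    # Low-level atomization of a dataset: one flat list of facts becomes
    # two inter-related tables linked by a key. All names are invented.
    authors = {
        1: {"name": "William Shakespeare", "died": 1616},
    }
    works = [
        {"title": "Hamlet", "author_id": 1},
        {"title": "Macbeth", "author_id": 1},
    ]

    # Recombining the parts is a join over the shared key.
    for work in works:
        author = authors[work["author_id"]]
        print(work["title"], "by", author["name"], "(d. %d)" % author["died"])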

But such divisions are only visible to the members of that specific project. Anyone else has to get the entire application or the entire database in order to use one particular part of it. Furthermore, anyone working on any given part of an application or database needs to be aware of, and interact with, anyone else working on it—decentralization is impossible or extremely limited. Atomization at such a low level is therefore not what we are really concerned with; our concern is atomization into packages.

7. Packaging

By packaging we mean the process by which a resource is made reusable by the addition of an external interface. The package is therefore the logical unit of distribution and reuse, and it is only with packaging that the full power of atomization’s “divide and conquer” purpose comes into play—without it there is still tight coupling between different parts of a resource.

Developing packages is a non-trivial exercise, precisely because developing good stable interfaces (usually in the form of a code or knowledge API) is difficult. One way to provide stability while remaining flexible in terms of future development is to employ versioning. By versioning the package and providing “releases”, those who reuse the packaged resource can keep using a specific (and stable) release while development and changes are made in the “trunk” and become available in later releases. This practice of versioning and releasing is already ubiquitous in software development—so ubiquitous it is practically taken for granted—but is almost unknown in the area of open knowledge.
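A sketch of how the same idiom might look for a knowledge package (the release table and names below are invented for illustration): consumers pin a stable release, so downstream analysis stays reproducible even while the dataset keeps evolving in the trunk.

    # Sketch: version pinning for knowledge packages, borrowing the idiom
    # of software releases. All package names and numbers are invented.
    RELEASES = {
        "uk-weather-observations": {
            "1.0.0": {"rows": 120000, "stable": True},
            "1.1.0": {"rows": 150000, "stable": True},
            "2.0.0-dev": {"rows": 180000, "stable": False},  # the "trunk"
        }
    }

    def fetch(package, pin):
        """Return the pinned release of a package, leaving development
        free to continue in the trunk."""
        return RELEASES[package][pin]

    print(fetch("uk-weather-observations", "1.1.0"))
    # {'rows': 150000, 'stable': True}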

8. Componentization for knowledge

At present, knowledge development displays very little componentization but, as the underlying pool of raw, “unpackaged” information continues to increase, there will be increasing emphasis on componentization and the reuse it supports. One can conceptualize this as a question of interface versus content. Currently 90% of effort goes into the content and 10% into the interface. With components this will invert: 90% of effort on the interface and 10% on the content. The change to a componentized architecture will be complex but, once achieved, will revolutionise the production and development of open knowledge.

9. The Comprehensive Knowledge Archive Network (CKAN)

Our CKAN project aims to encourage and support the emergence of a culture where knowledge packages can be easily discovered and plugged together, as is currently possible with software. Named after software archives such as CPAN for Perl, CTAN for TeX, CRAN for R and so on, it is a registry for knowledge resources.

It is currently in beta and consists of a versioned database of metadata for large datasets and substantial collections of knowledge resources: “from genes to geodata, sonnets to statistics”. It records the “lowest common denominator” of metadata for its packages: author, id, license, user-generated tags and links. We plan to add support for domain-specific metadata, and to make provision for the automated installation of knowledge packages.
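As an illustration, a registry entry built from just those lowest-common-denominator fields might look like the sketch below; the layout is ours, based only on the fields named above, and is not CKAN’s actual schema.

    # Rough sketch of a CKAN-style registry entry, using only the fields
    # named in the text: author, id, license, tags and links. The layout
    # is illustrative, not CKAN's actual schema.
    package = {
        "id": "open-shakespeare",
        "author": "Open Knowledge Foundation",
        "license": "open",
        "tags": ["literature", "public-domain", "text"],
        "download_url": "http://www.openshakespeare.org/",
    }

    # A registry is then a searchable collection of such records.
    registry = [package]
    print([p["id"] for p in registry if "literature" in p["tags"]])
    # ['open-shakespeare']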

Notes

1 Substantial parts of this chapter are based on “Componentization and Open Data”, a paper delivered by Rufus Pollock and Jo Walsh at XTech (2007).

2 See the Open Knowledge Foundation’s website http://okfn.org.

3 For more details see http://www.opendefinition.org.

4 See http://wiki.okfn.org/Research.

5 See http://www.openshakespeare.org.

6 See http://www.openeconomics.net.

7 See http://www.opentextbook.org.

8 See http://www.publicdomainworks.net and http://www.openlibrary.org.

9 See http://www.kforgeproject.com.

10 See http://www.knowledgeforge.net.

11 See http://wiki.okfn.org/PublicDomainCalculators.

Authors

Rufus Pollock and Jo Walsh, Open Knowledge Foundation

CC BY 4.0

The text alone may be used under the CC BY 4.0 license. All other elements (illustrations, imported supplementary files) are “All rights reserved” unless otherwise stated.
