Presenting the variables of the cost-benefit equation
Introduction
1As noted in Chapter 1, this book presents a methodology for evaluating regulatory measures relating to content policies. The methodology involves several variables, which I present briefly in this chapter. Subsequent chapters will take a deeper dive into certain variables and balancing tests.
2The methodology starts with a given content policy, for example a policy to reduce or eliminate images of child pornography online, or a policy to apply the “right to be forgotten.”
3For any given content policy, there should exist an ideal combination of internet intermediary action (e.g. website blocking at the DNS server level) and institutional framework (e.g. self-regulation) that maximizes the net benefits of the measure. The net benefits are equal to:
- The benefits for society flowing from application of the measure;
- less the direct and indirect costs resulting from the measure.
4The direct and indirect costs, which are combined in the equation sketched after this list, include:
- The direct costs of implementation of the measure by the internet intermediaries;
- the direct costs of the institutional framework, including enforcement costs;
- the costs associated with interference with fundamental rights, including freedom of expression, privacy, procedural fairness, the right to property and the right to conduct a business;
- the costs associated with interference with the open internet, including harm to innovation;
- the costs associated with other unintended effects, including changes in user behavior.
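These components can be collected into a single expression. The following is a minimal formalization, using symbols of my own for convenience (they are not standard notation):

```latex
\text{Net benefits} \;=\; B \;-\; \big( C_{impl} + C_{inst} + C_{rights} + C_{open} + C_{other} \big)
```

where B denotes the social benefits of the measure, C_{impl} the intermediaries’ implementation costs, C_{inst} the costs of the institutional framework (including enforcement), C_{rights} the costs of interference with fundamental rights, C_{open} the costs of interference with the open internet, and C_{other} the costs of other unintended effects.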
5This chapter will provide a brief overview of these benefits and costs. Many benefits and costs will be difficult to measure, making a standard cost-benefit analysis difficult. Chapter 6 explores ways of quantifying certain benefits and costs, or using qualitative labels where quantification is not possible.
Figure 3: The ideal level of enforcement of a content policy is the one that yields the highest net social benefits (x axis = level of enforcement, y axis = total benefits less total costs)

6Figure 3 shows on the y axis the level of net benefits and on the x axis the level of enforcement of the relevant content policy using different regulatory options. Q1 represents the ideal level of enforcement, i.e. a measure that maximizes net social benefits. The measure Q2 provides a more complete level of enforcement of the relevant content policy, but does not maximize net social benefits when all costs (including effects on fundamental rights and the internet ecosystem) are taken into account.
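The logic of Figure 3 can also be stated as a first-order condition. In the sketch below (my notation), q is the level of enforcement, B(q) total benefits and C(q) total costs:

```latex
\max_q \; NB(q) = B(q) - C(q) \qquad\Longrightarrow\qquad B'(q^*) = C'(q^*)
```

At the optimum Q1 (denoted q* here), the marginal benefit of additional enforcement equals its marginal cost; beyond that point, as at Q2, each additional unit of enforcement costs society more than it yields.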
7This chapter will provide a brief overview of five main variables in the “net social benefits” equation, i.e.:
- the content policy that requires an enforcement measure (content policies are intended to address market failures);
- the kinds of internet intermediaries that can potentially take action to enforce the content policy;
- the institutional options that are available;
- the potential harms to fundamental rights;
- the potential harms to the internet ecosystem.
Variable 1: Content policies
8There is a range of content policies that internet intermediaries could potentially help enforce. For some content policies, the involvement of internet intermediaries may be highly effective without significant side effects. For other policies, the involvement of internet intermediaries may be ineffective, or worse: the harmful side effects may outweigh the benefits derived from the measure.
9Below is a list of fifteen categories of content policies that may prompt proposals for internet intermediary action. The content policies described below reflect government responses to market failures. In some cases, the content policies are designed to protect property rights, which are necessary to ensure an efficient allocation of scarce resources. In other cases, the content policy is designed to compensate for negative externalities, as in the case of policies designed to limit spam and the use of cookies. The content policy may also be designed to ensure that a market for certain criminal activities, e.g. the sexual exploitation of children or the sale of illegal drugs, does not exist at all, because of the intolerably high negative externalities that those transactions create.
10The existence of a given content policy, such as a law prohibiting copyright infringement or child pornography, constitutes a first level response by government to a market failure. The second level of analysis, and the one most relevant for our discussion, is whether existing government mechanisms for enforcement, combined with existing market-based enforcement mechanisms, provide an optimal level of enforcement. If the answer is no, then some form of regulatory intervention may be necessary to bring enforcement mechanisms closer to a socially optimal level. This regulatory intervention may implicate internet intermediaries.
11As will become evident in Chapter 6, the existence of a given content policy will usually be taken as a given in any regulatory impact assessment. The key question will not be whether the content policy should exist, but whether the level of enforcement is optimal. In some cases, internet intermediaries may have a role in building a socially optimal enforcement strategy. The actions of internet intermediaries may flow from self-regulatory measures or from government constraint. In other cases, the socially optimal enforcement strategy may rely exclusively on traditional enforcement tools used by the police and/or litigation before civil courts.
12Below is a non-exhaustive list of content policies for which internet intermediaries may have a role to play.
Cybersecurity threats
13Internet intermediaries have long contributed to the fight against malicious software code and other forms of cyber-attacks. Cybersecurity threats may consist of malware, spyware, denial of service attacks, or other techniques designed to disrupt computer systems and/or steal confidential information from them. Internet intermediary action here has long been considered effective and essential to protecting the internet and its users. Most internet intermediaries already take action to protect their services and customers against cyber-attacks. Cyber-attacks create costs for internet intermediaries, and the risk of these costs is to a large extent internalized by them. The market therefore provides a reasonably high level of cybersecurity enforcement, without the need for government regulation.
14Like national defense, certain aspects of cybersecurity have the characteristics of public goods, and may be under-produced without regulatory intervention. For example, to build an effective national cyber-defense strategy, it may be necessary for corporations to disclose to government authorities information about cyber-attacks. Yet such information-sharing will generally not occur without a regulatory obligation, because the information-sharing creates costs for corporations, and they will have a tendency to free-ride in the absence of regulatory constraint.
Spam and phishing
15Spam and phishing do not aim directly to disrupt computer systems, but instead seek to take advantage of vulnerable internet users. Some forms of spam may be legitimate advertisements, but they are sent to millions of e-mail addresses simultaneously at almost no cost to the sender. The massive sending of e-mails creates negative externalities: annoyance for internet users and potential risk for the network. Phishing is a form of internet fraud in which the sender of the spam pretends to be a bank or other legitimate vendor, and thereby attempts to fraudulently obtain confidential information from the internet user.
Cookies and other forms of tracking software
16Unlike spyware, cookies generally serve legitimate business purposes. They are used to track internet users’ browsing habits in order to make browsing easier, to permit the web publisher to gather statistics, and to permit web publishers and advertising networks to place targeted advertising. Certain uses of cookies pose privacy threats, particularly when cookies share browsing habits with advertising networks to facilitate behavioral advertising. The market failures here are information asymmetries: consumers do not know what cookies are doing and therefore cannot make rational decisions. Data privacy laws may therefore require that users be informed of the presence of cookies and that for certain uses of cookies (e.g. advertising) users give their prior consent. Targeted advertising permits web publishers to provide free services and information to users, contributing to the diversity of content available on the internet. Therefore content policies that limit the use of advertising cookies must be balanced against the benefits that cookies bring to web publishers, and ultimately to users.
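As a toy illustration of the prior-consent rule described above, the sketch below (in Python, with invented names; it does not reflect any real consent-management API) withholds an advertising cookie until consent for that purpose has been recorded, while a purely functional cookie is exempt:

```python
# Illustrative sketch of a prior-consent rule for advertising cookies.
# All names are hypothetical; real consent-management platforms differ.

FUNCTIONAL = "functional"    # e.g. session cookies: no prior consent needed
ADVERTISING = "advertising"  # behavioral-advertising cookies: consent required

def cookie_headers(purpose: str, name: str, value: str,
                   consented: set[str]) -> list[str]:
    """Return Set-Cookie headers, withholding ad cookies absent consent."""
    if purpose == ADVERTISING and ADVERTISING not in consented:
        return []  # no consent recorded: the tracking cookie is never set
    return [f"Set-Cookie: {name}={value}; Path=/"]

# A user who consented only to functional cookies gets no advertising cookie:
print(cookie_headers(ADVERTISING, "ad_id", "abc123", consented={FUNCTIONAL}))  # []
print(cookie_headers(FUNCTIONAL, "session", "xyz", consented={FUNCTIONAL}))
```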
“Right to be forgotten” content
17Certain content may be perfectly legal, but its availability on the internet may cause damage to individuals. For example, an old newspaper article describing the financial difficulties of an individual may continue to appear prominently in search results, even though the article is outdated and not relevant to the individual’s current life and financial situation. Individuals may also be harmed by articles or photos describing unflattering events that occurred in the person’s youth. To address this kind of harm, the European Court of Justice held that individuals have the right to ask search engines to eliminate from search results content of this kind, as long as the elimination does not unduly interfere with freedom of expression.1 The United States also has laws that recognize a “right to be forgotten” in limited situations. California’s so-called “Eraser Law”2 allows minors to require the deletion of photos or other information that they previously posted on social media; the Fair Credit Reporting Act (FCRA)3 prohibits organizations that provide credit scoring from taking into account negative events that occurred more than seven years earlier. The application of the European version of the “right to be forgotten” poses complex problems for governments and internet intermediaries because each delisting requires a fact-specific balancing of fundamental rights. The territorial reach of the right to be forgotten is also highly contentious.
18From a law and economics perspective, the harm caused to a person as a result of his or her unflattering (but true) information being readily available to anyone searching the internet using the person’s name could be considered a negative externality, i.e. the cost imposed on the person is not taken into account in the transaction between the supplier of the search service and the user of the search service. Posner (1978) argues that the availability of true but unflattering information leads to efficient transactions, both in professional and private contexts. The availability of information reduces inefficient search costs and leads to fewer transactional errors due to information asymmetries.
19The user of a search engine purchases a search service precisely in order to find information about the object of the search. When the search term is a person, the information sought may include unflattering information. The searcher therefore derives a private benefit from finding such information, and that benefit may offset in whole or in part the cost suffered by the person whose embarrassing information was revealed. The “right to be forgotten” therefore does not resemble classic situations of negative externalities, because the thing that is being purchased, and which has value for the purchaser, is precisely the thing that creates the harm to the third party. This is similar to a person offering to pay a dollar to a factory to emit a certain amount of pollution because the purchaser derives benefit from the pollution emitted (even though the pollution bothers a neighbor). If there were a market in the pollution, the neighbor would pay the factory, or the prospective purchaser of pollution, in order to prevent the transaction from taking place. If the victim’s offer exceeded the purchaser’s, the transaction would not take place. Conversely, if the purchaser of the pollution offered more than the victim, the transaction would take place, and this would be efficient.
20In the case of unflattering information, the same reasoning would apply. If the value of the information to the searcher is higher than the value of concealment to the victim, the information should be revealed. However, as Posner (1978) points out, the market in true information is different from other markets. The concealment of true information about goods or persons would yield inefficient transactions (e.g. hiring decisions) based on incorrect information (information asymmetries). The concealment would also generate inefficient search costs, such as hiring a private detective instead of using a search engine, and lead to potential adverse selection effects, in that a searcher may assume that all persons are hiding unflattering information, even those who are not.
21Unfiltered search results also provide significant positive externalities to web publishers. Web publishers are not party to the transaction between the provider of the search service and the user, or to the transaction between the provider of the search service and advertisers. They are third parties, and derive significant benefit from search transactions because their content is visible to those looking for it, sparing them from having to spend large sums on advertising. Search neutrality, like net neutrality, creates significant positive externalities that must also be considered when imposing “right to be forgotten” policies on search engines.
22The “right to be forgotten” is relatively new, and has not yet been studied by economists. Determining under what conditions the right to be forgotten yields a welfare-maximizing outcome is beyond the scope of this book. What is important here is how regulators should evaluate different technical and institutional alternatives for achieving the desired “right to be forgotten” outcome.
Illegal online gambling
23Gambling is highly regulated in most countries. The purpose of regulation is both to protect individuals against gambling addiction and to protect the public against the development of organized crime and money laundering in connection with gambling operations. In some jurisdictions, gambling of all forms is illegal. In other jurisdictions, gambling operations are legal but require a license. In both cases, jurisdictions will seek to enforce their policies by preventing citizens from gambling on illegal gambling websites, most of which will be situated outside of the relevant country. Internet intermediaries are often asked to help limit access to these offshore websites.
24Laws designed to fight illegal online gambling include France’s so-called “ARJEL” law, which created an independent regulatory authority in charge of licensing legitimate online gambling platforms, as well as drawing up a list of illegal platforms to which access should be blocked.4 The ARJEL regulatory authority must apply to a court for an order requiring all internet access providers in France to block access to the illegal gambling sites.
25Like cigarette smoking, online gambling is a demerit good that generally creates more harm than good for the gambler. Yet because of myopia and addiction effects, the individual will not be able to weigh the harms and benefits in a rational manner, leading to decisions that are not in the gambler’s own interest. Gambling also creates negative externalities by leading to personal bankruptcies and markets for illegal loans, which hurt third parties. Finally, online gambling can create problems of information asymmetries, because it is very hard for gamblers to distinguish honest online gambling sites from ones that are dishonest or linked to organized crime.
Sale of cigarettes and alcohol
26The sale of cigarettes and alcohol is regulated both to protect public health, particularly that of minors, and to raise tax revenues for the state. Cigarettes and alcohol are demerit goods, creating harm to those who consume the products but also harm to third parties, thereby creating negative externalities. Jurisdictions applying these policies will seek to prevent consumers from purchasing cigarettes and alcohol via the internet from vendors who are not subject to the regulations of the relevant jurisdiction.
Counterfeit drugs and illegal drugs
28Drugs are also highly regulated. Some drugs are illegal in all circumstances (e.g. heroin), while others require a prescription. A growing problem is the sale via the internet of counterfeit drugs, i.e. drugs that are made to appear identical to legitimate drugs but are in fact illegal copies. The copies are at best ineffective, at worst highly dangerous. A buyer may not know the difference between a legitimate drug and a fake. Internet intermediaries may be asked to help shut down these dangerous websites.
Intellectual property infringement
29The internet has made it easy to violate copyright and trademark laws. Individuals can create digital copies of protected works (music, films, books, photos, sheet music) with relative ease and make the works freely available on the internet. Copyright violations on the internet can originate from organized criminal operations, or from the actions of ordinary citizens who wish to share or consume content without monetary gain. The sale of counterfeit goods is also facilitated by the internet, harming purchasers who think they are buying a product with a particular level of quality, and legitimate sellers who invest in product development and advertising and are taken advantage of by free-riding counterfeiters. Copyright and trademark infringement laws protect a property right, thereby nudging people to acquire protected works and products through the legitimate market as opposed to through black market mechanisms. Internet intermediaries will often implement policies to limit online infringement, either pursuant to their own internal policies, or as a result of government regulation.
30There are numerous examples of laws and self-regulatory initiatives designed to fight online copyright infringement. One of the best known examples internationally is the French HADOPI law, described in Chapter 1.
31Other examples include the UK’s Digital Economy Act, which calls for an industry agreement involving ISPs and other stakeholders, failing which additional regulatory measures would be imposed on internet intermediaries. The United States relies heavily on the Digital Millennium Copyright Act to impose notice and takedown obligations on internet intermediaries for copyright infringement. The United States also has a system for seizing the domain names of websites that infringe intellectual property. Finally, the United States has implemented a self-regulatory framework, called the Center for Copyright Information, that resembles in many respects the French HADOPI regime.
32Many question whether the property right created by copyright laws is overly restrictive in the digital era. Proponents of copyright reform argue that certain digital uses of protected works should be permitted under an extended “fair use” exception to copyright. The debate over the proper scope of copyright is beyond the reach of this book, but it sometimes spills into the debate on the role of internet intermediaries. The way digital intermediaries enforce copyright can affect how copyright exceptions are applied in practice, thus affecting the scope and existence of the right itself.
Defamation and the protection of privacy
33Certain content published on the internet is illegal because it is defamatory or wrongfully invades an individual’s privacy. Photos taken of a person in a private setting may be illegal, as may be the publication of information relating to an individual’s private life (e.g. her confidential health information). Under a law and economics analysis, the existence of a privacy right is considered efficient because it spares people from investing needlessly in high walls and security measures, and because it encourages people to communicate more freely. Publishing false and harmful information about an individual is defamatory. False information about people, like false information about goods, leads to inefficient transactions.
34The publishers of the defamatory or privacy-invading content may be impossible to locate, making enforcement on the internet difficult. Internet intermediaries may be called on to help. Content that violates defamation or privacy law will be removed at the source wherever possible, because the individual victim’s rights will outweigh the publisher’s freedom of expression and the public’s right to access the information. In other words, the objective will be to remove the content entirely from the internet, or to block access to it if removal is not possible. This contrasts with the “right to be forgotten,” where the underlying content is not defamatory, nor a serious enough violation of privacy rights to merit removal. Instead, the content should only be made more difficult to find when using certain search terms.
Websites promoting racial, ethnic or religious hatred
35A number of countries prohibit websites that promote racial, ethnic or religious hatred. The First Amendment of the United States Constitution prohibits the state from adopting laws that regulate speech based on the ideas conveyed, which limits the United States government’s ability to prohibit so-called “hate speech”. In Chapter 3 I explain that “chilling effects” are largely behind this broad interpretation of freedom of speech principles. In European countries, constitutional freedom of expression principles give more flexibility to the state to regulate content that promotes racial hatred or anti-Semitism. Internet intermediaries frequently assist in eliminating hate speech on the internet, either through voluntary acceptable use policies, or through government-imposed measures such as blocking. Chapter 3 examines the law and economics justification for prohibiting (or not) hate speech.
Regulations designed to protect local culture and language
36Certain countries have regulations to promote local film production, local culture and language. These rules are generally part of the country’s broadcasting regulations. However, as the difference between traditional broadcasting and internet content becomes blurred, a number of countries are considering how regulations designed to protect and promote culture on television can be applied to the internet. Some proposals would involve internet intermediaries as vectors for cultural policy. For example, the so-called “Lescure Report” in France5 proposed that ISPs allocate more bandwidth to online video platforms that voluntarily agree to promote French culture and audiovisual production. The objective of these cultural policies is generally to redistribute wealth toward producers of local content, so that they remain viable in the face of lower-priced popular content from the United States. The mechanism is similar to regulations designed to protect local agriculture. The rationale for these regulations is generally that a rich cultural ecosystem is a public good that will be under-produced by the market without public intervention.
Advertising laws
37Consumer protection laws govern all forms of advertising. Broadcasting laws generally contain additional rules for television advertising. Advertising laws may seek to address information asymmetries and/or seek to discourage consumption of demerit goods. Countries may seek to apply these advertising policies to content on the internet, even content that originates from outside the country. As is the case for other content policies, it is not inconceivable that internet intermediaries might be asked to help police these measures, by blocking certain forms of advertising that violate local laws.
Protection of minors against adult content
38Some content is legal for adults, but prohibited for minors. A major objective of broadcasting laws is to protect minors from sexually explicit and violent content. Providers of adult content are generally required to make sure that only adults may access the content. For some content, it may be sufficient that the program be scheduled late at night and/or have a warning label advising that the content may be inappropriate for young viewers. Lawmakers will try to find ways to make sure that these measures are applied on the internet, potentially calling on internet intermediaries to implement age verification mechanisms. Parental control software already addresses this issue in part.
39So-called “adult” content is linked to a broad category of content considered “inappropriate,” including nudity, violence or degrading content. Many social media platforms prohibit this sort of inappropriate content under their acceptable use policies. I will examine these acceptable use policies in Chapter 4, under the heading “unilateral self-regulation.”
40Regulation of adult content must take into account the harm caused to the viewer of the content, but also potential harm to third parties (externalities). These harms must be weighed against freedom of expression. The harms to the viewer and to third parties are potentially much greater when the viewer is a child, because of the effects on the child’s personality and future development.
Child pornography
41Images depicting the sexual exploitation of children are illegal in virtually all countries. Police attempt to identify and arrest persons who possess and share such images. Governments may also take measures to ensure that websites proposing such images are not accessible to the public. These government policies are supplemented by self-regulatory programs implemented by internet intermediaries in cooperation with law enforcement authorities.
42Examples of initiatives designed to limit online child pornography include France’s so-called LOPPSI 2 law,6 which authorizes the Ministry of Interior to create a list of URLs corresponding to websites with images of child pornography. The Ministry of Interior may then require internet access providers to block access to the relevant URLs. On the self-regulatory front, internet intermediaries participate in the INHOPE network, which permits centralized reporting of child pornography sites. Internet intermediaries then voluntarily take steps to impede access to the relevant sites. For example, British Telecom implements a so-called “Clean Feed” system to impede access to child pornography sites.7 This is done on a purely voluntary basis.
Content promoting terrorism
43In an effort to limit access to websites that recruit young and vulnerable people to join the so-called Islamic State and other terrorist organizations, France recently enacted a law permitting the police to require ISP blocking of websites that promote terrorism.8 The blocking can be ordered by the government without a court order. As I will explain in Chapter 3, blocking websites without a court order raises concerns about fundamental rights. One of the important safeguards of fundamental rights is to make sure that a judge is involved when a fundamental right is curtailed.
Assisting law enforcement
44In addition to asking internet intermediaries to help enforce pre-defined content policies, governments also rely on internet intermediaries to help locate criminals and gather evidence for criminal investigations of all kinds. This is generally done by asking certain intermediaries to provide metadata that can help track a suspected criminal’s online activities. In the context of criminal investigations, the metadata is provided only after the police have obtained an authorization from a prosecutor or judge. In the context of intelligence gathering activities, government agencies are subject to fewer constraints, and may in some cases collect large amounts of metadata directly from internet intermediaries. The internet intermediaries’ role in assisting law enforcement or intelligence agencies is not linked to a particular content policy, and therefore falls outside the scope of this book.
45Nevertheless, the role of internet intermediaries in assisting law enforcement involves many of the same trade-offs as those examined here. For example, how far may governments go in asking internet intermediaries to gather data and make the data available to the government? What surveillance techniques are proportionate? An EU-funded study called “SURVEILLE” attempts to build a methodology for examining these questions, a methodology that shares some characteristics with the system presented in this book. I examine the SURVEILLE project in more detail in Chapter 7.
Valuing content policies and their enforcement
46For the purposes of building our methodology, we should assume that all content policies have legitimacy in the relevant country where they were adopted, i.e. they have all been adopted by a democratically-elected legislature after debate, and are subject to appropriate constitutional guarantees. I therefore exclude from discussion content policies with doubtful legitimacy, such as content policies adopted by totalitarian governments to suppress political dissent.
47All of the content policies listed above are presumed legitimate, but they do not all have equal importance to society. The fight against terrorism, child pornography and counterfeit drugs will likely be more important to society than content policies designed to curb online copyright infringement. This does not make rules designed to curb online copyright infringement any less legitimate than rules designed to fight child pornography. It simply means that where enforcement resources are limited, there may be a compelling argument for devoting more resources to the fight against child pornography than to the enforcement of copyright online. This would be the case, for example, if for a given 1,000€ of enforcement resources, society could either save one child from sexual assault or prevent 1,000 illegal movie downloads. (This assumes that society values saving a child more than preventing 1,000 downloads, which is a reasonable assumption.)
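The allocation logic of this example can be made explicit. Writing V_{child} for the value society places on sparing one child from sexual assault and V_{dl} for the value of preventing one illegal download (both hypothetical placeholders), the 1,000€ should go to child protection whenever:

```latex
V_{child} > 1000 \times V_{dl}
```

Since any plausible estimate of V_{child} dwarfs a thousand times V_{dl}, the marginal enforcement euro goes to child protection, without this implying that copyright enforcement receives nothing.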
48Content policies generally carry different values in terms of society’s “willingness to pay” for enforcement. The price society is willing to pay to prevent terrorism may be higher than the price society is willing to pay to enforce rules designed to protect local cinema production, for example. The price attached to a given content policy will help determine the amount of tax revenues devoted to the policy’s enforcement, and the number of trade-offs and collateral harms that society is willing to accept in order to enforce the policy. For example, society may tolerate more intrusions of privacy in the context of fighting terrorism than in the context of fighting copyright infringement. Some actions by internet intermediaries may be justified for enforcing a high-stakes content policy, whereas the same action would not be justified for enforcing a content policy that has a lower value to society.
49Trying to create a hierarchy of content policies using a willingness to pay measurement is fraught with difficulty, and can lead in some cases to absurd results. First, there exists no market for content policies, so willingness to pay can only be guessed at. Second, just because fighting child pornography commands a higher willingness to pay than fighting copyright infringement does not mean that resources should always be devoted to child pornography at the expense of copyright infringement. Societies will generally devote resources to both, but in different amounts.
50Another complicating factor is the absence of consensus on the social importance of certain content policies. Critics of copyright laws argue that certain aspects of copyright law reflect the highly-effective lobbying of certain industry groups as opposed to a balanced representation of citizens’ views. Thus content policies, like any other aspect of lawmaking, can be subject to various degrees of regulatory capture (Stigler, 1971). The problems associated with valuing content policies and fundamental rights will be addressed in more detail in Chapter 6, which deals with cost-benefit analyses.
51Each content policy may have an ideal enforcement strategy associated with it. Shavell (1993) examines the factors that determine the theoretically optimal form and level of law enforcement. Shavell’s determinants show when enforcement efforts should occur, i.e. before the act, via prevention and deterrence measures, or after the fact, via punishment (fines and imprisonment). Other determinants influence the form that sanctions should take, i.e. monetary sanctions or imprisonment, and whether private or public enforcement (or a combination of the two) is optimal.
52Enforcement strategies are closely linked to the type of content policy that is being applied. As we will see in Chapter 6, one of the most difficult tasks for policymakers will be to define their enforcement objective with precision: what level of preventive measures do we expect internet intermediaries to apply? In most cases, 100% enforcement of a content policy will not be optimal because of the high level of associated costs. As noted at the beginning of this chapter, optimal enforcement will occur at a point where social benefits net of costs are maximized. This will generally occur at a point where the sum of (i) the costs of enforcement plus (ii) the costs of the relevant harm that the content policy seeks to address is minimized.
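This equivalence can be written out. If H(q) denotes the residual harm that remains at enforcement level q, H_{max} the harm absent any enforcement, and C_e(q) the cost of enforcement (again my notation), then:

```latex
\max_q \;\big[\,\underbrace{H_{max} - H(q)}_{\text{harm avoided}} \;-\; C_e(q)\,\big] \;\;\equiv\;\; \min_q \;\big[\, H(q) + C_e(q) \,\big]
```

since H_{max} is a constant: maximizing net benefits and minimizing the sum of enforcement costs and residual harm pick out the same level of enforcement.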
Variable 2: Internet intermediaries
53Now that we have identified the main categories of content policies, let us look at the internet intermediaries that might be called upon to promote the content policies, and the actions they might take. Whether internet intermediaries should take such action, and in what institutional framework (self-regulation, government compulsion), will be examined later.
54Below is a list of internet intermediaries that could be called on to enforce a content policy, and a short description of the measures they might take. The list does not purport to be exhaustive.
Search engines
55Search engines can take several actions to limit access to undesirable content. The first is to make sure that sites presenting undesirable content do not purchase advertising on the search engine, and therefore do not appear in the list of sponsored search results. For example, a search for information about Viagra would not show a sponsored link for a known vendor of counterfeit drugs. The second potential measure consists of ensuring that the relevant site does not turn up at all in the search results, or does not appear in the search results for certain national versions of the search engine. This is what happens when a search engine delists a site completely in response to certain name-based searches under a so-called “right to be forgotten” request. The third option for search engines is to adjust search results so that the undesirable content appears low in the rankings. This solution might be appropriate where delisting the site entirely would create a disproportionate restriction of freedom of expression. An example of this sort of action is where the search engine gives higher priority to music or video download sites that have received a trust seal, or lower priority to sites that numerous internet users have complained about. This solution has been discussed in the context of the text of Mein Kampf, which fell into the public domain in 2016. Search engines would give a higher ranking to versions of Mein Kampf annotated by respected historians, and a lower ranking to sites that use Mein Kampf for revisionist propaganda (Alexandre, Coen & Dreyfus, 2016).
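A stylized sketch of the third option, demotion rather than delisting, is given below; the scoring rule, weights and field names are invented for illustration and do not describe any actual search engine:

```python
# Illustrative demotion of results, as opposed to outright delisting.
# The scoring weights and fields are invented for this sketch.

def adjusted_score(base_relevance: float, has_trust_seal: bool,
                   complaint_count: int) -> float:
    """Nudge rankings: reward trust seals, demote heavily-complained-about sites."""
    score = base_relevance
    if has_trust_seal:
        score *= 1.2                          # vetted sources rank higher
    score /= (1 + 0.1 * complaint_count)      # complaints push a site down, never out
    return score

results = [
    {"url": "https://licensed-store.example", "rel": 0.80, "seal": True,  "complaints": 0},
    {"url": "https://dubious-site.example",   "rel": 0.85, "seal": False, "complaints": 12},
]
results.sort(key=lambda r: adjusted_score(r["rel"], r["seal"], r["complaints"]),
             reverse=True)
print([r["url"] for r in results])  # the vetted site now outranks the dubious one
```

The point of the design is that heavily-complained-about sites sink in the rankings but remain reachable, a lighter interference with freedom of expression than outright delisting.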
56The measures applied by a search engine might also be limited to certain geographic areas. Search engines often have localized versions of their sites that internet users see by default based on their IP address. For example, an internet user in France will generally hold an IP address belonging to a French ISP. When that internet user goes to a search engine, the search engine will recognize the IP address as belonging to a French ISP and present to the internet user the French localized version of the search engine. Geographic adjustments of search results based on the IP address of the user are relatively easy for users to avoid. If they take affirmative steps, users can obtain access to another version of the search engine. However, only a small percentage of users actually do this.
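A minimal sketch of this default localization, assuming a hypothetical country_of() lookup in place of a real IP-geolocation database:

```python
# Illustrative default localization by IP address; country_of() is a
# hypothetical stand-in for a lookup in a real IP-geolocation database.

LOCALIZED = {"FR": "search.example.fr", "DE": "search.example.de"}
GLOBAL_DEFAULT = "search.example.com"

def country_of(ip: str) -> str:
    """Toy rule for the sketch; a real system queries a geolocation database."""
    return "FR" if ip.startswith("90.") else "US"

def landing_site(ip: str) -> str:
    return LOCALIZED.get(country_of(ip), GLOBAL_DEFAULT)

print(landing_site("90.0.0.1"))  # a French user sees the .fr version by default
print(landing_site("8.8.8.8"))   # others fall back to the global version
```

Because the mapping only selects a default, a determined user can still navigate to another version of the site, which is why such measures are easy to circumvent.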
57More drastic geographic restrictions can be implemented, but those generally involve more intrusive and disproportionate measures that violate international law. For example a regulator might want to impose on the search engine a change to its worldwide search results, a measure that would have the effect of extending the enforcement of the country’s content policy to worldwide users of the search engine. For example, content that violates a content policy in Turkey would also be delisted for users in the United States or France. Another option would be to ensure that all internet traffic transits through a single gateway, making circumvention of the localized version of the search engine impossible. This technique is used in certain totalitarian countries. We see from this example that the range of actions is quite broad, going from relatively light touch measures to measures that have a strong adverse effect on freedom of expression worldwide. These adverse effects must be included in the cost-benefit analysis that I present in Chapter 6.
Hosting providers
58Hosting providers contract with providers of content to make their content available on the internet. Hosting providers constitute the entry point for content onto the internet, and are the intermediary located closest to the source of the content. The notice and takedown rules that exist under United States and European law were drafted with hosting providers in mind. Once a hosting provider takes down content, the content is in theory eliminated at its source. In most cases, the hosting provider has a direct contractual relationship with the original publisher of the content and can therefore inform the publisher of the takedown request and seek the publisher’s comments.
59This simplified image of hosting providers and of takedown mechanisms is complicated by several factors. First, content that is eliminated at its source by the hosting provider may still exist on caching servers belonging to other internet intermediaries. In theory, operators of caching servers are supposed to verify continuously that the source content is still present on the original hosting server and should eliminate content that has been removed from the original host. In practice, however, many caching servers will retain copies of the original content long after the source copy has been eliminated. The second complicating factor is that the notice and takedown rules that regulate how and when hosting providers eliminate undesirable content are not universally applied. Publishers of undesirable content will often use the services of a hosting provider located in a country that does not have notice and takedown rules, or in a country that has such rules but does not enforce them.
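The revalidation step that a well-behaved caching server is supposed to perform can be sketched as follows (simplified; real HTTP caches rely on richer mechanisms such as Cache-Control headers, ETags and conditional requests):

```python
# Simplified origin-revalidation check a well-behaved cache might run;
# real HTTP caches use Cache-Control, ETags and conditional requests.
import urllib.request
import urllib.error

def still_at_origin(url: str) -> bool:
    """Return False if the origin no longer serves the cached resource."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        return e.code not in (404, 410)  # gone at the source: purge the copy
    except urllib.error.URLError:
        return True  # transient network error: keep the copy for now

# A cache operator would then do something like:
# if not still_at_origin(cached_url): purge_from_cache(cached_url)
```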
60Many social media platforms are considered hosting providers and therefore apply notice and takedown rules with respect to the content posted by users. The notice and takedown rules imposed by law will often be supplemented by contractual terms of use imposed by the platforms on their users. Under its terms of use, the hosting provider may proactively screen for certain kinds of undesirable content or remove undesirable content (e.g. nudity) upon receipt of a complaint. The terms of use may expand the notion of undesirable content to include content that is not necessarily illegal but that violates the acceptable use policy of the service, e.g. nude or degrading photos. This self-regulatory aspect of hosting providers will be examined in Chapter 4.
Internet access providers
61The internet access provider (“IAP”) is the telecommunication provider that connects end-users, and provides them with access to the internet. The internet access provider is generally the subscriber’s cable operator, mobile operator, or copper or fiber telecom operator. In sparsely populated areas, a satellite operator may be the IAP. When an internet user connects to the internet through his or her employer or school, the internet access provider may be the employer or school. For example, university students may connect to the internet using the university network; government employees may connect to the internet using the government network. The university or government network will be deemed the IAP in those cases.
62Internet access providers routinely analyze and filter internet traffic in order to protect the network and their users from harmful viruses and other cyber security threats. Some corporate, government and university networks also block access to certain undesirable content, such as pornography sites or peer-to-peer services. However, beyond this, internet access providers generally do not take measures to voluntarily block content unless they are required to do so by a court or government order. What internet access providers may or may not filter, and on what basis, are highly contentious issues. They go to the heart of the net neutrality debate.
63The net neutrality debate has two aspects. The first aspect is whether an internet access provider may block or slow traffic for its own commercial or competitive purposes. The second is whether an internet access provider may block or slow traffic for public policy reasons. Most of the net neutrality debate has focused on the first aspect, i.e. whether an internet access provider can charge a premium to certain content providers in order to give them higher quality service, while leaving other content providers with a slower service. As regards the second aspect of the net neutrality debate, most commentators believe that IAP blocking for public policy reasons should be conducted with extreme care and only after a court decision. The reason for this caution is that any action by an internet access provider would create a number of undesirable spillover effects that would harm individual users’ freedom of access to information, as well as the proper functioning of the internet. Because of these negative externalities, blocking measures should therefore be decided by a court.
64Internet access providers are technically able to limit subscribers’ access to certain content. However, the technical tools used to accomplish blocking are imperfect. One mechanism consists of interfering with the internet address lookup function on the DNS server. The internet access provider alters the data on its own DNS server so that the DNS server provides wrong address information for certain domain names, making it impossible for the user to reach the desired site. Critics view this form of DNS blocking as a dangerous form of tinkering with the internet’s universal addressing system. It is a technique used by hackers for “man in the middle” attacks.
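The resolver-level logic of DNS blocking can be illustrated schematically (domain names and addresses below are from reserved documentation ranges; real resolvers speak the DNS protocol rather than calling a Python function):

```python
# Stylized DNS-blocking logic at an access provider's resolver. Names and
# addresses are invented (documentation ranges); real resolvers speak the
# DNS protocol rather than calling a Python function.

BLOCKLIST = {"illegal-gambling.example"}
BLOCK_PAGE_IP = "192.0.2.1"  # address of a warning page, not the real site

def resolve(name: str, real_lookup) -> str:
    if name in BLOCKLIST:
        return BLOCK_PAGE_IP  # false answer: the user never reaches the site
    return real_lookup(name)

print(resolve("illegal-gambling.example", lambda n: "203.0.113.7"))  # blocked
print(resolve("news.example", lambda n: "203.0.113.7"))              # resolved
```

A user who switches to a third-party DNS resolver bypasses this mechanism entirely, which is one reason critics consider it both fragile and dangerous.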
65Internet access providers may also block access to certain IP addresses. The problem here is that a given IP address may be shared by many content providers. Consequently, using an IP address as a means to block access to content creates a risk of blocking access to large amounts of other perfectly legal content.
66Internet access providers are also able to apply deep packet inspection (DPI) to identify with precision certain undesirable content and then either block the content or slow it down. This form of filtering creates a significant risk for individuals’ right to privacy. Deep packet inspection tools of this kind may be used by internet access providers to identify cybersecurity threats. DPI may also be used in totalitarian countries to spy on individuals’ internet use and identify political opponents. Deep packet inspection also creates technical challenges: DPI may slow down the network and/or be costly to deploy.
67Internet access providers may also apply more “light touch” measures, such as sending warning notices to internet users who repeatedly download copyrighted content without permission. The internet access provider may also provide the identity of the subscriber to administrative authorities, who may send warning notices to the subscriber, or even apply sanctions. This is how “graduated response” mechanisms work; they are designed to discourage users from violating copyright law.
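A toy version of the escalation ladder behind graduated response might look like the following; the thresholds and responses are invented and do not reproduce the actual HADOPI procedure:

```python
# Toy escalation ladder for a "graduated response" scheme. Thresholds and
# responses are invented and do not reproduce the actual HADOPI procedure.

def respond(strikes: int) -> str:
    if strikes == 1:
        return "send first warning e-mail"
    if strikes == 2:
        return "send registered-letter warning"
    if strikes >= 3:
        return "refer the file to the administrative authority"
    return "no action"

for n in range(1, 4):
    print(n, respond(n))
```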
68Internet access providers also respond to law enforcement and court requests to provide information on the identity of the subscriber, and on the internet sites accessed by the user. In this way, they contribute to law enforcement efforts to punish the authors of undesirable content, but do not intervene directly in blocking content.
Internet domain name registries
69The management of the internet’s addressing system is delegated to a limited number of organizations in the world. For example, the United States corporation VeriSign manages the addressing system for all domain names that end in .com or .net. The French organization AFNIC manages the addressing system for domain names that end in .fr. These organizations maintain the central database that permits a given domain name to be associated with a particular IP address. The central database is like a central phone book that is then replicated at a local level around the world, so that each internet access provider and each internet backbone provider has the same phone book in its own DNS servers.
70By altering the central database entry for a given domain name, an internet addressing authority can send false address information to local DNS servers around the world. This can have the effect of blocking access to a given website at a global level. This is what occurred when the United States Department of Justice seized the domain name Megaupload.com. The United States government took possession of the domain name as if it were a physical object located in the United States, and required that VeriSign change the IP address associated with that domain name. This changed the address data in the central database and the false address information was then replicated in the DNS servers around the world. The effect was that anyone attempting to connect to the Megaupload.com site was rerouted to a web page created by the United States Department of Justice.
71Publishers of illegal content can quickly set up new sites, with new addresses, and avoid the effect of these kinds of blocking measures. The DNS blocking must be constantly updated to be effective.
Payment service providers
72Some of the most serious forms of illegal content on the internet (e.g. illegal drugs) involve payment by end users. Because payment cannot be made in cash on the internet, sellers of illegal content must rely on payment service providers. Payment service providers are regulated, and rely on the international banking system. Where providers of payment services refuse to provide service to sellers of illegal content and services, this can contribute to drying up their source of funds. Certain new payment services are not regulated and do not depend on the international banking system. Peer-to-peer payment services, such as Bitcoin, are decentralized and can be used even in cases where legitimate payment service providers refuse to do business with the relevant merchant.
73Using payment service providers as a proxy to limit access to illegal content and services only works for paid content; free content is not affected. Moreover, new payment technologies allow users and merchants to effect payments without relying on the services of traditional payment networks. This can reduce the effectiveness of using payment service providers as enforcers of content policies. Nevertheless, payment service providers often voluntarily agree to discontinue service to sites that are manifestly illegal. This is referred to as the “follow the money” approach to fighting illegal content. The actions of payment providers generally result from non-binding charters or memoranda of understanding (MOUs). Non-binding charters and MOUs are part of what I call “multilateral self-regulation” in Chapter 4.
Internet advertising networks
74Some websites offer illegal content for free, relying on advertising revenues from third-party advertisers to fund the site. To earn money from advertising, a website generally must use the services of one or more internet advertising networks. Those networks act as intermediaries between the website and third-party advertisers. Most mainstream brands do not want to be associated with undesirable content and will instruct ad networks to avoid them. Conversely, some advertisers may actively seek out this kind of content and be happy to place advertising on illegal websites. For example, an advertiser for online gambling may want to advertise its services on other gambling sites, including illegal ones.
75Internet advertising networks are able to stop doing business with certain websites, which has the effect of reducing the websites’ ability to monetize advertising space. The effectiveness of this measure may be limited by the fact that alternative advertising networks can easily emerge to serve the market. Unlike payment service providers, which in theory are regulated, internet advertising service providers are unregulated, and barriers to entry are low. As part of “follow the money” initiatives, major advertising intermediaries undertake to cease providing service to sites that propose manifestly illegal content. As is the case with payment providers, these initiatives are generally part of non-binding charters or MOUs.
Application stores
76Application (or “app”) stores are centralized platforms that permit publishers of applications to present their applications for download to end-users. The application stores publish guidelines that application publishers must comply with in order to make their applications available on the platform. Not surprisingly, the application stores prohibit applications that provide illegal content or services. Even where an application is not individually reviewed by the operator of the app store, once the application store receives complaints it will evaluate the relevant application and eliminate it from the platform if it does not comply with the app store’s guidelines. In this respect the application store resembles a hosting provider, acting pursuant to a “notice and takedown” procedure.
Content delivery networks (CDNs)
77Content delivery networks provide services to providers of bandwidth-intensive content, such as video, in order to improve the quality perceived by end-users. Typically a content delivery network will transport content to caching servers that are located near the internet access network of the end-user. When the end-user requests a given video, the request is rerouted to a server located near the end-user’s internet access provider, thereby reducing the distance and number of network hops required.
78Before it was shut down by the United States Department of Justice, Megaupload.com used the services of a large content delivery network to enhance the quality of service to end-users. So far, content delivery networks have not been visibly involved in limiting access to undesirable content.
Internet backbone providers
79Internet backbone providers carry traffic between the upstream internet access provider serving the hosting provider that hosts the content and the downstream internet access provider serving the end-user. These internet backbone providers are interconnected with each other through transit and peering agreements. The interconnection points between backbone providers handle huge amounts of data, making individual filtering difficult. Nevertheless, some totalitarian countries implement monitoring and filtering measures at large internet traffic exchange points. Intelligence agencies in democratic countries may monitor data via infrastructure installed at the level of internet backbone providers. However, filtering is generally not implemented at large internet exchange points, at least in democratic countries. Filtering at a centralized international gateway can make it difficult for end-users in the country to work around filtering at the DNS level. However, such filtering reduces the performance of the internet for users in the country, and transforms the local internet into a national network, interconnected with the internet at a centrally controlled choke-point.
End-user software
80Software or apps on the end-user’s terminal can help block harmful computer viruses. Parental control software can block access to adult websites. Ad blocking software can limit advertising. Privacy software can limit cookies and other tracking devices. Putting access control features on end-user software has the advantage of not interfering with the normal operation of the internet. It also puts the end-user in control, and thereby reduces the risk of interfering with the user’s fundamental rights. Relying on software in the user’s terminal requires regular software updates, and thus may be less effective than more centralized filtering measures. Moreover, because the end-user is in control, the end-user can easily de-activate the system and obtain access to illegal, but desired, content. This may frustrate one of the objectives of policymakers, who want to make access to certain content difficult for users.
81While in theory under the control of the end-user, end-user apps or software are also controlled, at least in part, by the software publisher. The software publisher makes default choices, and these default choices result in the blocking of certain content. Although users can in theory override the publisher’s default settings, the default settings will in most cases remain untouched. The criteria used to set the default list of blocked content can be based on objective public interest criteria, or may in some cases be influenced by commercial or competitive considerations. For example, the ad-blocking software Adblock Plus seeks payment from certain large content providers in order for them to be placed on Adblock Plus’s “Acceptable Ads” white list.9 This sort of commercial negotiation with upstream content publishers is what defenders of net neutrality seek to prevent.
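The stickiness of publisher-chosen defaults can be made concrete. In the sketch below (invented settings and whitelist entry; not Adblock Plus’s actual implementation), blocking and the white list are on unless the user acts, and most users never do:

```python
# Illustrative default-on filter with a user override that is rarely used.
# The settings and whitelist entry are invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class FilterSettings:
    # Publisher-chosen defaults; most users never open the settings screen.
    block_ads: bool = True
    whitelist: set = field(default_factory=lambda: {"acceptable-ads.example"})

def is_blocked(domain: str, s: FilterSettings) -> bool:
    return s.block_ads and domain not in s.whitelist

s = FilterSettings()                             # publisher defaults apply
print(is_blocked("some-ad-network.example", s))  # True: blocked by default
print(is_blocked("acceptable-ads.example", s))   # False: on the white list
```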
Set-top boxes or modems
82Set-top boxes and modems are generally under the control of the internet access provider. The set-top boxes or modems may contain software installed by the internet access provider to regulate access to certain content. In some cases, the end-user will be provided an interface to modify default settings defined by the internet access provider. In other cases, the end-user will only be able to change the settings by changing his or her account with the internet access provider. The account parameters are then changed within the internet access provider’s network and the update is pushed to the subscriber’s set-top box or modem via a software update. The software in the set-top box and the modem is part of the internet access provider’s network.
Device operating systems
83The application programming interface (API) of device operating systems can contribute to the protection of end-users against harmful content. The operating systems will have measures to protect against cyber attacks that may result in the theft of end-user data or unauthorized surveillance. APIs are also used to force app developers to implement privacy-friendly policies. For example, the API will not allow the application to have access to the user’s list of contacts without obtaining explicit user consent. The same rule may apply when an application seeks to access geolocation information within the device. Consequently, publishers of operating systems can build in mechanisms that help enforce content policies.
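The permission gate described here can be shown schematically; the function and permission names below are invented and do not correspond to any real mobile operating system API:

```python
# Schematic permission gate of the kind a mobile OS API enforces before an
# app may read contacts. Function and permission names are invented.

class PermissionDenied(Exception):
    pass

GRANTED: set[str] = set()  # permissions the user has explicitly granted

def require(permission: str):
    def decorator(fn):
        def wrapper(*args, **kwargs):
            if permission not in GRANTED:
                # A real OS would show a consent prompt here; we just refuse.
                raise PermissionDenied(permission)
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@require("READ_CONTACTS")
def read_contacts():
    return ["alice", "bob"]

try:
    read_contacts()                # fails: no consent has been given yet
except PermissionDenied:
    GRANTED.add("READ_CONTACTS")   # the user taps "allow"
    print(read_contacts())         # now permitted
```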
Most measures focus on the hosting provider or the IAP
84To date, most initiatives to limit access to harmful content have focused on notice and takedown at the hosting provider level. “Follow the money” self-regulatory initiatives are frequent, as is court-ordered blocking by IAPs at the DNS level. Search engines are also becoming popular targets, as was demonstrated by the recent “right to be forgotten” cases. The purpose of this list is to highlight the wide range of technical intermediaries and actions that regulators may consider when approaching a problem relating to access to harmful content. The methodology I propose in Chapter 6 requires the drafters of the impact assessment to list all the potential internet intermediaries that might contribute to enforcing the content policy, so that no alternatives are overlooked.
Variable 3: The institutional framework
85By institutional framework, I mean the legal rules and institutions that provide the context in which an internet intermediary may act to enforce a content policy. For example, in a purely self-regulatory framework, the underlying legal rules will be contract law and liability rules applicable to internet intermediaries. The institutions will be courts evaluating internet intermediary behavior after the fact. It is on this institutional framework that an internet intermediary will build its self-regulatory program, via its contractual terms of use and an evaluation of potential liability in different scenarios.
86In a government-administered regulatory framework, the underlying legal rules will be the regulations adopted by the legislature and the regulatory authority; the institutions will be the regulatory authority and the courts.
87The point here is that internet intermediary actions never occur in a vacuum. They will always arise in the context of an institutional framework of some kind. Even self-regulatory measures are part of an institutional environment consisting of liability and property laws enforced by courts. The choice of the institutional framework will affect the costs and benefits accompanying the relevant measure. A well-crafted institutional framework will permit the enforcement mechanism to achieve its intended result more effectively while minimizing adverse effects on fundamental rights. By contrast, a poorly crafted framework may cause the measure to miss its intended mark, aggravate harm to other rights, or both.
88The institutional framework is closely related to the question of the ideal law enforcement strategy, a question I alluded to briefly in the first part of this chapter. For example, liability rules will generally be more efficient when the victim is able to identify the person responsible for the harm and enforce a damages award against him or her (Shavell, 1984). By contrast, where the perpetrator of the harm is difficult to identify, or is unlikely to have assets sufficient to pay for the harm, regulation will in most cases be more efficient than liability rules. As expressed by Shavell (1984),
“the measure of social welfare is assumed to equal the benefits parties derive from engaging in their activities, less the sum of the costs of precautions, the harms done, and the administrative expenses associated with the means of social control. The formal problem is to employ the means of control to maximize the measure of welfare.” (Shavell, 1984, p. 359).
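Shavell’s verbal definition can be written compactly as follows (the notation is mine, not Shavell’s). For each candidate means of social control m, social welfare is:

```latex
W(m) \;=\; B(m) \;-\; C_{\text{prec}}(m) \;-\; H(m) \;-\; C_{\text{adm}}(m),
\qquad \text{and the formal problem is} \qquad
\max_{m \in \mathcal{M}} \; W(m)
```

where B(m) denotes the benefits parties derive from their activities, C_prec(m) the costs of precautions, H(m) the harms done, C_adm(m) the administrative expenses of the means of control, and M the set of available means of social control (liability rules, regulation, and so on).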
89In this book, the “institutional framework” corresponds to Shavell’s “means of social control”, i.e. the combination of legal rules and institutions that form the backdrop for internet intermediary action. Various institutional choices will be examined in Chapter 4. The methodology proposed in Chapter 6 requires that various institutional frameworks be considered, and that the costs of each alternative be taken into account in the cost-benefit analysis.
Variable 4: Negative externalities caused by internet intermediary actions
90The effectiveness of any measure taken by an internet intermediary can be measured in the first instance by comparing the cost of implementing the measure with the measure’s success in enforcing the relevant content policy. (Measuring “success” in enforcing a content policy is a complex question in its own right, which I will examine in Chapter 6.) If an internet intermediary can cheaply implement a measure that has a 99% success rate in enforcing the relevant content policy (e.g. prohibiting access to online child pornography), the measure would appear at first glance worthwhile. In reality, the calculation is more complex because most measures undertaken by internet intermediaries can create negative externalities, i.e. costs to society that are not internalized by the internet intermediary and its customer. The costs of these negative externalities may be much higher than the direct costs of the measure to the internet intermediary, and may also outweigh the anticipated benefits flowing from enforcement of the relevant content policy. When these negative externalities are considered, the relevant measure may prove to do more harm than good.
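A minimal numerical sketch makes the point; all figures below are hypothetical and chosen only to illustrate the arithmetic, not drawn from any real measure.

```python
# Hypothetical figures, in arbitrary social-value units.
policy_benefit      = 100.0   # benefit of perfectly enforcing the content policy
success_rate        = 0.99    # share of targeted content actually blocked
implementation_cost = 5.0     # direct cost borne by the internet intermediary
externality_costs   = {       # costs NOT internalized by intermediary/customers
    "freedom_of_expression": 40.0,
    "privacy":               25.0,
    "open_internet":         35.0,
}

gross_benefit = policy_benefit * success_rate
net_benefit = gross_benefit - implementation_cost - sum(externality_costs.values())

print(f"gross benefit: {gross_benefit:.1f}")  # 99.0 -- looks worthwhile at first glance
print(f"net benefit:   {net_benefit:.1f}")    # -6.0 -- negative once externalities count
```

A cheap measure with a 99% success rate looks attractive until the non-internalized costs are added, at which point the same measure does more harm than good.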
91Many of the costs associated with the negative externalities examined below are difficult to measure in monetary terms. The monetary value associated with harm to freedom of expression or privacy, for example, can be estimated only through the use of artificial and contestable assumptions. As I will show in Chapters 3 and 6, various methods can be used to measure hard-to-quantify benefits and costs. For example, a scoring mechanism was proposed by Alexy (2012), and is currently used in a European Commission-funded research project called SURVEILLE, which is intended, among other things, to measure the impact that certain police surveillance and investigation techniques have on fundamental rights.
92The approaches I propose in Chapter 6 will permit policymakers to compare different approaches and their relative impact on fundamental rights, innovation and net neutrality. The methodology is designed to identify the approach that appears, based on the scoring systems, to maximize social benefits net of the costs.
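As a purely illustrative sketch of how such a scoring system might rank regulatory options, the example below scores three hypothetical measures on ordinal scales (1 = light, 4 = serious) and computes a net score. The options, criteria, weights and scores are invented for illustration; they are not the scales actually used by Alexy or the SURVEILLE project.

```python
# Hypothetical options and ordinal scores (1 = light ... 4 = serious).
OPTIONS = {
    "DNS blocking by IAPs": {"effectiveness": 2, "rights_harm": 4, "ecosystem_harm": 4},
    "Notice and takedown":  {"effectiveness": 3, "rights_harm": 2, "ecosystem_harm": 1},
    "Search de-indexing":   {"effectiveness": 2, "rights_harm": 3, "ecosystem_harm": 2},
}

def net_score(scores: dict) -> int:
    # Effectiveness counts positively; harms to fundamental rights and to
    # the internet ecosystem count negatively.
    return scores["effectiveness"] - scores["rights_harm"] - scores["ecosystem_harm"]

# Rank the options from highest to lowest net score.
for name, scores in sorted(OPTIONS.items(), key=lambda kv: net_score(kv[1]), reverse=True):
    print(f"{name}: net score {net_score(scores)}")
```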
93Listed below are the main negative externalities that might arise from the action of an internet intermediary.
Adverse effects on fundamental rights
94The first series of negative externalities relates to the impact on fundamental rights. The effects on fundamental rights, and how to balance competing fundamental rights, will be examined in detail in Chapter 3. The list below is an introduction to the subject.
Freedom of expression
95Any action limiting access to content on the internet is a restriction on the freedom of expression of the publisher of the content, and a restriction of the right to access information for the recipient of the content. The right of individuals to publish or access information is not absolute. Some content may be prohibited. But the limits on freedom of expression must be precisely defined by law and must be limited to what is absolutely necessary to achieve other important values in a democratic society. Freedom of expression is considered one of the most important individual rights in a democratic society because it is a precondition for the exercise of other rights. The exchange of valuable political ideas is considered a public good, requiring government intervention to ensure sufficient production. For this reason, courts show a low tolerance for any measure undertaken by an internet intermediary that might be overbroad, i.e. contain false positives that interfere with access to legitimate information. For example, if a technical measure designed to block images of sexual exploitation of children also accidentally blocks works of art containing child nudity, or images from a medical journal, courts will in most cases find that the measure’s adverse effects on freedom of expression outweigh the benefits derived from blocking child pornography. When an internet intermediary blocks content, the costs associated with the harm to freedom of expression will generally not be internalized by the intermediary and its customers. The harms are therefore externalities.
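The weight courts attach to false positives can be illustrated with a toy calculation (all numbers hypothetical): at internet scale, even a filter with a seemingly negligible false-positive rate can block more legitimate content than illegal content.

```python
# Hypothetical corpus and error rates, for illustration only.
pages_scanned       = 10_000_000
illegal_fraction    = 0.0005   # 1 page in 2,000 is illegal
true_positive_rate  = 0.95     # share of illegal pages correctly blocked
false_positive_rate = 0.001    # share of legitimate pages wrongly blocked

illegal_pages    = pages_scanned * illegal_fraction          # 5,000
legitimate_pages = pages_scanned - illegal_pages             # 9,995,000

blocked_illegal    = illegal_pages * true_positive_rate      # 4,750
blocked_legitimate = legitimate_pages * false_positive_rate  # 9,995

print(f"illegal pages blocked:    {blocked_illegal:,.0f}")
print(f"legitimate pages blocked: {blocked_legitimate:,.0f}")
# The filter blocks roughly twice as many legitimate pages as illegal ones.
```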
Right of privacy
96The right to privacy and to the protection of personal data is a constitutional right in Europe. In the United States, the constitutional right to privacy is recognized, but only vis-à-vis actions taken by the government against the individual. Actions taken by private entities are covered by other legal provisions, including tort law and consumer protection law.
97The right to privacy and the right to protection of personal data are not identical. The right to privacy generally refers to the right of each individual to keep aspects of his or her private life secret. The right to protection of personal data refers to the right of each individual to control how data about him or her are used, and to object to improper uses. Because the two rights overlap in many instances, I use the terms “privacy” and “data protection” interchangeably, to designate the bundle of rights that protect individuals against intrusions into their private life and/or misuse of their personal data.
98Measures taken by internet intermediaries may have an impact on individual rights to privacy. Like freedom of expression, the right to privacy is not an absolute right and may be balanced against other rights. The balancing will involve a proportionality test that I will explore in Chapter 3. Some authors (Alexy, 2012; Portuese, 2013) explain that proportionality is a form of cost-benefit test. Because privacy is a fundamental right, courts will attach a high cost to any measure that poses a threat to privacy. The measure will have to be justified by important countervailing benefits in order to pass the proportionality test. In Europe, courts have criticized actions taken by internet intermediaries to stop copyright infringement on the ground that those measures create a threat to privacy. Where national security is at stake, courts may allow greater intrusions into privacy than would be permitted to fight copyright infringement. The level of permitted interference with privacy may thus depend on the underlying content policy that is being pursued. Privacy rights can also directly conflict with freedom of expression, as in the case of the so-called “right to be forgotten” under European law. The tension between the right to privacy and freedom of expression will have to be resolved on a case-by-case basis.
The right to property
99The right to property is a constitutionally-protected right, meaning that actions taken by internet intermediaries to protect property interests (such as copyright) enhance a fundamental right. This aspect of internet intermediary action, protection of the right to property, will generally be counted as a benefit in the cost-benefit analysis. On the “cost” side, property rights may be infringed where a government seizes a domain name, as the United States Department of Justice did in its action against Megaupload.com.
The right to create and operate a business
100European courts have also recognized the right to create and operate a business as a fundamental right. Where a government imposes measures on a particular business, such as the installation and operation of filtering equipment, or restrictions on a business model, the government interferes with the business-owner’s fundamental right to operate a business. The same holds true for regulations that limit the use of advertising cookies. This interference therefore also needs to be considered in the overall costs created by the measure.
The right to procedural fairness
101In both Europe and the United States, individuals are entitled to procedural guarantees before being deprived by the state of any fundamental right, including the rights to free expression, property and liberty. Any measure taken by an internet intermediary that leads to a deprivation of a right, or the application of a sanction, may violate individuals’ rights to procedural fairness. The institutional safeguards accompanying any internet intermediary action, examined in Chapter 4, will in some cases mitigate these effects by providing internet users or content providers with an opportunity to object.
Internet-specific harms
102In addition to potential harms to fundamental rights, measures taken by internet intermediaries may cause other forms of negative externalities, including harm to the proper functioning of the internet, and to innovation.
Harm to the good functioning of the internet
103The internet is built to avoid control points. Packets are routed through different networks and interconnection points so as to avoid congestion or blockage. Once all the packets reach their destination, they are reassembled. The content of the packets is then analyzed and the appropriate application run on the end-user’s device. The network has a relatively simple routing function. Most of the intelligence for internet applications is provided by terminal devices at the edges of the network, once the packets are reassembled at their destination. Defenders of net neutrality argue that internet access providers and internet backbone providers (collectively “ISPs”) should limit their role to routing packets on a non-discriminatory basis. They should not be concerned with the content, application or services that are located inside the packets. This is referred to as the end-to-end principle of the internet. Defenders of net neutrality recognize an exception to this end-to-end principle for ISP measures to fight malware and other cyber security attacks. For other content and services, however, defenders of net neutrality believe that ISPs should remain neutral, and limit their actions to routing packets to the right destination.
104One of the reasons neutrality is important is that actions by internet intermediaries could threaten the principles under which the internet has successfully functioned to date. According to net neutrality advocates, even small instances of non-neutrality should be resisted because each measure sends the message that it is acceptable to interfere with the end-to-end architecture of the internet. This could lead to the gradual destruction of the internet: ISPs and governments would engage in an arms race to erect barriers that let only certain kinds of content and applications pass. To reach consumers in a given country, content providers would potentially have to negotiate with both the government and the ISPs in that country to obtain a right of passage. In regulatory terms, this would be the equivalent of seeking a broadcasting license in each country where the content is accessible. This is the nightmare scenario that proponents of net neutrality want to avoid, one that could lead to a progressive balkanization of the internet.
105Some advocates of net neutrality have a tendency to exaggerate, but their arguments contain a good dose of truth. For example, when the United States Department of Justice seized the domain name Megaupload.com and altered the address corresponding to that domain name on the central DNS server of VeriSign, United States authorities disrupted the global addressing system for the internet. They may have had a good reason for doing so, but one can understand why defenders of net neutrality fear that this sort of action could set an example for other countries and internet intermediaries. The Department of Justice’s action sends the message that it is no longer taboo to disrupt the global addressing system of the internet in order to enforce national content policies. ISPs, governments and courts may begin to take their own forms of action to disrupt the addressing function or block IP addresses to promote local content policies. If such actions become widespread, the internet will become a series of national networks with separate control points and content rules for each country.
106Preservation of the open internet is not important for its own sake, but rather to promote freedom of expression and innovation, both of which are enabled by the internet’s open and global architecture. The open internet is therefore a proxy for other values and potential harms that need to be considered. The harm to freedom of expression was examined above. In the following section I touch briefly on harm to innovation.
Harm to innovation
107The layered and end-to-end architecture of the internet is a strong enabler of innovation. Yochai Benkler (2006) states that the open internet permits “innovation without permission.” If actions by internet intermediaries interfere with the open and end-to-end character of the internet, these actions could also disrupt innovation. Measuring harm to innovation is difficult. It would require comparing two situations: one in which a content and application provider must deal with separate content policies and enforcement actions by internet intermediaries, and another in which internet intermediaries refrain from any enforcement action linked to content policies. The amount of investment in content and application development would then be compared across the two situations. Organizing an experiment of this kind would be difficult, particularly because one could not control for other factors potentially affecting innovation, such as the availability of venture capital. Measuring the open internet’s positive effect on innovation is beyond the scope of this book. Nevertheless, most scholars agree that the internet’s openness has made it a general purpose infrastructure, which in turn enables innovation. Some of the relevant literature on regulation’s effect on innovation is discussed in Chapter 5.
108As is the case for harm to fundamental rights, I will propose in Chapter 6 methods to measure the relative harm to innovation resulting from alternative regulatory proposals.
Unintended effects linked to user behavior
109Enforcement action by internet intermediaries can prompt users to change their behavior in a way that reduces the intended benefits of the measure, or frustrates another important policy objective. This creates unanticipated costs. For example, a measure designed to reduce peer-to-peer exchanges of copyright-protected content might have two unintended effects. First, the measure may cause internet users to adopt alternative technologies for content sharing, such as streaming or direct download, thereby reducing the anticipated benefit of the original measure. Second, the measure may prompt larger numbers of internet users to adopt encryption and “dark networks” such as Tor. This in turn may frustrate law enforcement efforts to combat crimes much more serious than copyright infringement, thereby creating new unanticipated costs.
110As this example shows, technology and user behavior will often find ways to work around measures imposed by internet intermediaries. These work-arounds may create their own kinds of costs that should be considered when evaluating the costs and benefits of a particular enforcement measure.
International effects
111Indirect costs should also include costs resulting from international effects, if any. If the proposed measure will affect individuals or businesses in other countries, the potential costs of this extraterritorial effect should be weighed. Katz (2000) refers to these costs as “jurisdictional externalities.” Other countries may retaliate by enacting their own measures with extraterritorial effect. This could create a “race to the bottom” in which countries impose their own measures on internet intermediaries regardless of their extraterritorial effects.
112Another cost to consider is the competitive distortion potentially resulting from certain intermediaries within the country being subject to regulatory measures while other intermediaries located outside the country are not. If the intermediaries outside the country compete for the same customers, this would create a competitive distortion penalizing intermediaries located within the country. This in turn could discourage investment in the relevant country, another potential negative externality flowing from actions of an internet intermediary.
Solving the problem so as to maximize social benefits
113To summarize, the relevant factors that policymakers must consider in the cost-benefit analysis are:
- the kind of content policy to be enforced, its relative importance in terms of society’s willingness to pay, and how “success” in enforcing the content policy should be defined;
- the kind of internet intermediary that could be called on to help;
- the kind of action that the internet intermediary could take, and its relative efficacy;
- the institutional options available to increase effectiveness of the measure and reduce its adverse effects;
- direct costs associated with the proposed measure in terms of direct costs to the internet intermediary and its customers, and costs to taxpayers, including costs of enforcement;
- the negative externalities associated with the harms to fundamental rights, including freedom of expression and privacy;
- the negative externalities associated with harms to the global end-to-end architecture of the internet and innovation;
- changes to user behavior that might reduce the efficacy of the measure or create unanticipated costs.
114In mathematical terms, the problem is to find the combination of B, C and D that maximizes A (the benefits flowing from enforcement of the content policy) less the costs resulting from E, F, G and H, where the letters A through H designate, in order, the eight factors listed above. Solving the problem is challenging because A, F, G and H are qualitative, and to some extent subjective, values, which makes them difficult to measure and to compare. The purpose of my methodology is to propose a uniform way of thinking about these variables so as to approach, albeit imperfectly, the outcome that would result if the variables were objectively quantifiable.
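Written out, and treating each benefit and cost category as a function of the chosen combination of intermediary, action and institutional framework, the problem takes the following form (the functional notation is mine):

```latex
\max_{B,\,C,\,D}\;\; A(B, C, D) \;-\; \Big[\, E(B, C, D) + F(B, C, D) + G(B, C, D) + H(B, C, D) \,\Big]
```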
115I present that system in Chapter 6. First, however, I will examine how benefits and costs to fundamental rights should be considered (Chapter 3), how institutional choices affect the costs and benefits of regulatory alternatives (Chapter 4), and how the existing literature on “better regulation” affects how we approach the problem (Chapter 5).
Footnotes
1 CJEU Case n° C-131/12, Google Spain v. AEPD and Costeja, May 13, 2014.
2 CAL. BUS. & PROF. CODE § 22581 (West 2015).
3 15 U.S.C. § 1681.
4 Law n° 2010-476 of May 12, 2010.
5 P. Lescure, Contribution aux politiques culturelles à l’ère numérique, May 2013.
6 Law n° 2011-267 of March 14, 2011.
7 https://en.wikipedia.org/wiki/Cleanfeed_(content_blocking_system) (consulted January 22, 2017).
8 Law n° 2014-1353 of November 13, 2014.
9 https://adblockplus.org/about#monetization (consulted January 22, 2017).