Network Working Group R. Barnes
Internet-Draft BBN Technologies
Intended status: Informational A. Cooper
Expires: April 16, 2013 CDT
O. Kolkman
NLnet Labs
October 15, 2012

Technical Considerations for Internet Service Filtering


The Internet is structured to be an open communications medium. This openness is one of the key underpinnings of Internet innovation, but it can allow communications that may be viewed as either desirable or undesirable by different parties. Thus, as the Internet has grown, so have mechanisms to limit the extent and impact of abusive or allegedly illegal communications. Recently, there has been an increasing emphasis on "blocking", the active prevention of abusive or allegedly illegal communications. This document examines several technical approaches to Internet content blocking in terms of their alignment with the overall Internet architecture. In general, the approach to content blocking that is most coherent with the Internet architecture is to inform endpoints about potentially undesirable services, so that the communicants can avoid engaging in abusive or illegal communications.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 16, 2013.

Copyright Notice

Copyright (c) 2012 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

1. Introduction

The original design goal of the Internet was to enable communications between hosts. As this goal was met and people started using the Internet to communicate, however, it became apparent that some hosts were engaging in arguably undesirable communications. The most famous early example of undesirable communications was the Morris worm, which used the Internet to infect many hosts in 1988. As the Internet has evolved into a rich communications medium, so have mechanisms to restrict undesirable communications.

Efforts to restrict or deny access to Internet resources have evolved over time. As noted in [RFC4084], some Internet service providers impose restrictions on which applications their customers may use and which traffic they allow on their networks. These restrictions are often imposed with customer consent, where customers may be enterprises or individuals. Increasingly, however, both governmental and private sector entities are seeking to block access to certain content, traffic, or communications without the knowledge or agreement of affected users. Where these entities do not directly control networks, they aim to make use of intermediary systems to effectuate the blocking.

Entities may seek to block Internet content for a diversity of reasons, including defending against security threats, restricting access to content thought to be objectionable, and preventing illegal activity. While blocking remains highly contentious in many cases, the desire to restrict access to content will likely continue to exist.

This document aims to clarify the technical implications and trade-offs of various blocking strategies and to identify the potential for different strategies to come into conflict with the Internet's architecture or cause harmful side effects ("collateral damage"). The strategies broadly fall into three categories:

  1. Control by intermediaries (intermediary-based filtering)
  2. Manipulation of authoritative data (server-based filtering)
  3. Reputation and authentication systems (endpoint-based filtering)

Examples of blocking or attempted blocking using the DNS, HTTP proxies, domain name seizures, spam filters, and RPKI manipulation are used to illustrate each category's properties.

Whether particular forms of blocking are lawful in particular jurisdictions raises complicated legal questions that are outside the scope of this document.

2. Architectural Principles

To understand the implications of different blocking strategies, it is important to understand the key principles that have informed the design of the Internet. While much of this ground has been well trod before, this section highlights four architectural principles that have a direct impact on the viability of content blocking: end-to-end connectivity, layering, distribution and mobility, and locality and autonomy.

2.1. End-to-End Connectivity and "Transparency"

The end-to-end principle is "the core architectural guideline of the Internet" [RFC3724]. Adherence to the principle of vesting endpoints with the functionality to accomplish end-to-end tasks results in a "transparent" network in which packets are not filtered or transformed en route [RFC2775]. This transparency in turn is a key requirement for providing end-to-end security features on the network. Modern security mechanisms that rely on trusted hosts communicating via a secure channel without intermediary interference enable the network to support e-commerce, confidential communication, and other similar uses.

The end-to-end principle is fundamental for Internet security, and the foundation on which Internet security protocols are built. Protocols such as TLS and IPsec [RFC5246][RFC4301] are designed to ensure that each endpoint of the communication knows the identity of the other endpoint, and that only the endpoints of the communication can access the secured contents of the communication. For example, when a user connects to a bank's web site, TLS ensures that the user's banking information is communicated to the bank and nobody else.

Some blocking strategies require intermediaries to insert themselves within the end-to-end communications path, potentially breaking security properties of Internet protocols. In these cases it can be difficult or impossible for endpoints to distinguish between attackers and the entities conducting blocking.

A similar notion to the end-to-end principle is the notion of "transparency," that is, the idea that the network should provide a generic connectivity service between endpoints, with minimal interaction by intermediaries aside from routing packets from source to destination. In "Reflections on Internet Transparency" [RFC4924], the IAB assessed the relevance of this principle and concluded that "far from having lessened in relevance, technical implications of intentionally or inadvertently impeding network transparency play a critical role in the Internet's ability to support innovation and global communication".

2.2. Layering

Internet applications are built out of a collection of loosely-coupled components or "layers." Different layers serve different purposes, such as routing, transport, and naming (see [RFC1122], especially Section 1.1.3). The functions at these layers are developed autonomously and almost always operated by different entities. For example, in many networks, physical and link-layer connectivity is provided by an "access provider", while IP routing is performed by an "Internet service provider" -- and application-layer services are provided by a completely separate entity (e.g., a web server). Upper-layer protocols and applications rely on combinations of lower-layer functions in order to work. As a consequence of the end-to-end principle, functionality at higher layers tends to be more specialized, so that many different specialized applications can make use of the same generic underlying network functions.

As a result of this structure, actions taken at one layer can affect functionality or applications at higher layers. For example, manipulating routing or naming functions to restrict access to a narrow set of resources via specific applications will likely affect all applications that depend on those functions.

In a similar manner, physical distances grow as one moves up the stack. A host must be physically connected to a link-layer access provider network, and its distance from its ISP is limited by the length of a link, but Internet applications can be delivered by a host anywhere in the world.

Thus, as one considers changes at each layer of the stack, changes at higher layers become more specific in terms of application, but more broad in terms of impact. Changes to an access network will only affect a relatively small, well-defined set of users (namely, those connected to the access network), but can affect all applications for those users. Changes to an application service can affect users across the entire Internet, but only for that specific application.

2.3. Distribution and Mobility

The Internet is designed as a distributed system both geographically and topologically. Resources can be made globally accessible regardless of their physical location or connectivity providers used. Resources are also highly mobile -- moving content from one physical or logical address to another can often be easily accomplished.

This distribution and mobility underlies a large part of the resiliency of the Internet. Internet routing can survive major outages such as cuts in undersea fibers because the distributed routing system of the Internet allows individual networks to collaborate to route traffic. Application services are commonly protected using distributed servers. For example, even though the 2010 earthquake in Haiti destroyed almost all of the Internet infrastructure in the country, the Haitian top-level domain name (.ht) had no interruption in service because it was also accessible via servers in the United States, Canada, and France.

Undesirable communications also benefit from this resiliency -- resources that are blocked or restricted in one part of the Internet can be reconstituted in another part of the Internet, creating a "water balloon" effect. If a web site is prevented from using a domain name or set of IP addresses, the web site can simply move to another domain name or network.

2.4. Locality and Autonomy

The basic unit of Internet routing is an "Autonomous System" -- a network that manages its own routing internally. The concept of autonomy is present in many aspects of the Internet, as is the related concept of locality, the idea that local changes should not have a broader impact on the network.

These concepts are critical to the stability and scalability of the Internet. With millions of individual actors engineering different parts of the network, there would be chaos if every change had impact across the entire Internet.

Locality implies that the impact of technical changes made to realize blocking will only be within a defined scope. As discussed above, this scope might be narrow in one dimension (set of users or set of applications affected) but broad in another. Changes made to effectuate blocking are often targeted at a particular locality, but result in blocking outside of the intended scope.

3. Examples of Blocking

As noted above, systems to restrict or block Internet communications have evolved alongside the Internet technologies they seek to restrict. Over the history of the Internet, several such systems have been deployed, with varying degrees of effectiveness.

4. Blocking Design Patterns

Considering a typical end-to-end Internet communication, there are three logical points at which blocking mechanisms can be put in place: the middle and either end. Mechanisms based in the middle usually involve an intermediary device in the network that observes Internet traffic and decides which communications to block. At the service end of a communication, authoritative databases (such as the DNS) and servers can be manipulated to deny or alter service delivery. At the user end of a communication, authentication and reputation systems enable user devices (and users) to make decisions about which communications should be blocked.

In this section, we discuss these three "blocking design patterns" and how they align with the Internet architectural principles outlined above. In general, the third pattern -- informing user devices of which services should be blocked -- is the most coherent with the Internet architecture.

4.1. Intermediary-Based Blocking

A common goal for blocking systems is for the system to be able to block communications without the consent or cooperation of either endpoint to the communication. Such systems are thus implemented using intermediary devices in the network, such as firewalls or filtering systems. These systems inspect user traffic as it passes through the network, decide based on the content of a given communication whether it should be blocked, and then block or allow the communication as desired.

Common examples of intermediary-based filtering are firewalls and network-based web-filtering systems. For example, web filtering devices usually inspect HTTP requests to determine the URL being requested, compare that URL to a list of black-listed or white-listed URLs, and allow the request to proceed only if it is permitted by policy (or at least not forbidden). Firewalls perform a similar function for other classes of traffic in addition to HTTP. Note that this class does not cover cases where the intermediary is authorized by the endpoints to act on an endpoint's behalf (e.g., mail servers), since these involve the cooperation of at least one affected endpoint.
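The decision logic of such a URL filter can be sketched as follows; the policy, list contents, and names are hypothetical and not drawn from any real product:

```python
from urllib.parse import urlsplit

# Illustrative policy lists; real deployments consult large,
# frequently updated databases.
WHITELIST = {"allowed.example"}
BLACKLIST = {"blocked.example/bad-page"}

def filter_decision(url: str) -> str:
    """Return 'allow' or 'block' for an observed HTTP request URL."""
    parts = urlsplit(url)
    host = parts.hostname or ""
    resource = host + parts.path
    if host in WHITELIST:
        return "allow"      # explicitly permitted by policy
    if resource in BLACKLIST or host in BLACKLIST:
        return "block"      # explicitly forbidden by policy
    return "allow"          # not forbidden; some deployments default-deny instead

decisions = [filter_decision(u) for u in (
    "http://allowed.example/index.html",
    "http://blocked.example/bad-page",
    "http://neutral.example/",
)]
```

Note that the filter can only form the URL if it observes the full HTTP request, a requirement that the following subsections show is often difficult to satisfy.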

Accomplishing blocking in this way conflicts with the end-to-end and transparency principles noted above; indeed, the very goal of such blocking is to impede transparency for particular content or communications. For this reason, intermediary-based systems run into several technical issues that limit their viability in practice. In particular, many issues arise from the fact that an intermediary needs access to a sufficient amount of traffic to make its blocking determination.

The first challenge to obtaining this traffic is simply gaining access to the constituent packets. The Internet is designed to deliver packets from source to destination -- not to any particular point along the way. In practice, inter-network routing is often asymmetric, and for sufficiently complex local networks, intra-network traffic flows can be asymmetric as well.

This asymmetry means that an intermediary will often see only one half of a given communication (if it sees any of it at all), limiting its ability to make decisions based on the content of the communication. For example, a URL-based filter cannot make blocking decisions if it only has access to HTTP responses (not requests). Routing can sometimes be forced to be symmetric within a given network using routing configuration or layer-2 mechanisms (e.g., MPLS), but these mechanisms are frequently brittle, complex, and costly -- and often reduce network performance relative to asymmetric routing.

If an intermediary blocking device can access the packets that constitute a communication, then the next question is whether the intermediary can access the application content within these packets. If the application content is encrypted with a security protocol (e.g., IPsec or TLS), then the intermediary will require the ability to decrypt the packets to examine application content. Since security protocols are designed to provide end-to-end security (i.e., to prevent intermediaries from examining content), the intermediary would need to masquerade as one of the endpoints, breaking the authentication in the security protocol, reducing the security of the users and services affected, and interfering with private communication.

If the intermediary is unable to decrypt the security protocol, then its blocking determinations for secure sessions can only be based on unprotected attributes, such as IP addresses and port numbers. Some blocking systems today still attempt to block based on these attributes, for example, blocking TLS traffic to known proxies that could be used to tunnel through the blocking system.
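Such an attribute-based rule can be sketched as follows, matching only the fields that remain visible when the payload is encrypted; the addresses are drawn from documentation ranges and are purely illustrative:

```python
# A firewall-style rule that can only match unprotected attributes
# (destination address and port), since the payload is encrypted.
BLOCKED_DESTINATIONS = {("192.0.2.50", 443)}  # e.g., a known proxy endpoint

def allow_packet(dst_ip: str, dst_port: int) -> bool:
    """True if the packet may be forwarded under this policy."""
    return (dst_ip, dst_port) not in BLOCKED_DESTINATIONS

verdicts = (
    allow_packet("192.0.2.50", 443),    # traffic to the known proxy: blocked
    allow_packet("198.51.100.7", 443),  # other TLS traffic: passes unexamined
)
```

The rule's coarseness is the point: everything to the listed endpoints is blocked, and everything else passes without inspection.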

However, as the Telex project recently demonstrated, if an endpoint cooperates with a server, it can create a TLS tunnel that is indistinguishable from legitimate traffic [Telex]. For example, if a banking website operated a Telex server, then a blocking system would be unable to distinguish legitimate encrypted banking traffic from Telex-tunneled traffic to that server (potentially carrying content that the blocking system would have blocked).

Thus, in principle it is impossible to prevent tunnelling through an intermediary device without blocking all secure traffic. (The only limitation in practice is the requirement for special software on the client.) In most cases, blocking all secure traffic is an unacceptable consequence of blocking, since security is often required for services such as online commerce, enterprise VPNs, and management of critical infrastructure. If governments or network operators were to force these services to use insecure protocols so as to effectuate blocking, they would expose their users to the various attacks that the security protocols were put in place to prevent.

Some network operators may assume that blocking only the resources available via insecure channels is sufficient for their purposes -- i.e., that the number of users willing to use secure tunnels and/or special software to circumvent the blocking is low enough to make intermediary-based blocking worthwhile. However, the longer such a blocking system is in place, the more likely it becomes that efficient and easy-to-use circumvention tools based on secure tunnelling will become widespread. The proliferation of the Tor network, for example, and its increasingly sophisticated blocking-avoidance techniques demonstrate the energy behind this trend [Tor].

Blocking via intermediaries is thus only effective in a fairly constrained set of circumstances. First, the routing structure of the network needs to be such that the intermediary has access to any communications it intends to block. Second, the blocking system needs an out-of-band mechanism to mitigate the risk of secure protocols being used to avoid blocking (e.g., human analysts identifying IP addresses of tunnel endpoints), which may be resource-prohibitive, especially if tunnel endpoints begin to change frequently. If the network is sufficiently complex, or the risk of tunnelling too high, then intermediary-based blocking is unlikely to be effective.

4.2. Server-Based Blocking

Internet services are driven by physical devices such as web servers, DNS servers, certificate authorities, or WHOIS databases. These devices control the structure and availability of Internet applications by providing data elements that are used by application code. For example, changing an A or AAAA record on a DNS server will change the IP address that is bound to a given domain name; applications trying to communicate with the host at that name will then communicate with the host at the new address.

As physical objects, the servers that underlie Internet applications exist within the jurisdiction of governments, and their operators are thus subject to certain local laws. It is thus possible for laws to be structured to facilitate blocking of Internet services operated within a jurisdiction, either via direct government action or by allowing private actors to demand blocking (e.g., through lawsuits).

The "seizure" of domain names discussed above is an example of this type of blocking. Government officials required the operators of the parent zones of a target name (e.g., "com" for "") to direct queries for that name to a set of government-operated name servers. Users of services under a target name would thus be unable to locate the correct servers for that name, denying them the ability to access these services. The action of the Dutch police against the RIPE NCC is of a similar character, limiting the ability of certain ISPs to manage their Internet services by controlling their WHOIS information.

Blocking services by disabling or manipulating servers does respect the end-to-end principle, since the affected server is one end of the blocked communication. However, its interactions with layering, resource mobility, and autonomy can limit its effectiveness and cause undesirable consequences.

The layered architecture of the Internet means that there are several points at which access to a service can be blocked. The service can be denied Internet access (via control of routers), DNS services (DNS servers), or application-layer services (application servers, e.g., web servers). Blocking via these channels, however, is both amplified and limited by the global nature of the Internet.

On the one hand, the global nature of Internet resources amplifies blocking actions, in the sense that it increases the risk of overblocking -- collateral damage to legitimate use of a resource. A given network or domain name might host both legitimate services and services that governments desire to block. A service hosted under a domain name and operated in a jurisdiction where it is considered undesirable might be considered legitimate in another jurisdiction; a blocking action in the host jurisdiction would deny legitimate services in the other.

On the other hand, the distributed and mobile nature of Internet resources limits the effectiveness of blocking actions. Because an Internet service can be reached from anywhere on the Internet, a service that is blocked in one jurisdiction can often be moved or re-instantiated in another jurisdiction. Likewise, services that rely on blocked resources can often be rapidly re-configured to use non-blocked resources. For example, the technique of "snowshoe spamming" is already widely used to spread spam generation across a variety of resources and jurisdictions to prevent spam blocking from being effective.

The efficacy of server-based blocking is further limited by the autonomy principle discussed above. If the Internet community realizes that a blocking decision has been made and wishes to counter it, then local networks can "patch" the authoritative data to avoid the blocking. For example, in 2008, Pakistan Telecom attempted to deny access to YouTube within Pakistan by announcing bogus routes for YouTube address space to peers in Pakistan. YouTube was temporarily denied service on a global basis due to a route leak, but service was restored in approximately two hours because network operators around the world re-configured their routers to ignore the blocking routes [RenesysPK]. In the context of SIDR and secure routing, a similar re-configuration could be done if a resource certificate were to be revoked in order to block routing to a given network.

In the DNS context, similar work-arounds are available. If a domain name is blocked by changing authoritative records, network operators can restore service simply by extending TTLs on cached pre-blocking records in recursive resolvers, or by statically configuring resolvers to return un-blocked results for the affected name. Indeed, these techniques are commonly used in practice to provide service to domains that have been disrupted, such as the .ht domain during the 2010 earthquake in Haiti [EarthquakeHT].
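The static-override work-around can be sketched as a resolver that consults a local table before the (manipulated) authoritative data; all names and addresses here are illustrative:

```python
# Operator-configured overrides restore the pre-blocking answer for
# clients of this resolver; other names follow authoritative data.
LOCAL_OVERRIDES = {
    "blocked.example": "192.0.2.10",   # the service's original address
}

AUTHORITATIVE = {
    "blocked.example": "192.0.2.99",   # answer substituted by the blocking action
    "other.example": "198.51.100.7",
}

def resolve(name: str) -> str:
    """Local policy wins within this network -- autonomy in action."""
    if name in LOCAL_OVERRIDES:
        return LOCAL_OVERRIDES[name]
    return AUTHORITATIVE[name]

answers = (resolve("blocked.example"), resolve("other.example"))
```

The override affects only clients of this resolver, illustrating the locality principle: each network decides for itself whether to honor the blocking.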

Server-based blocking also has a variety of non-technical implications. The considerations discussed in ISOC's whitepaper on DNS filtering [ISOCFiltering] also apply to other global Internet resources.

In summary, server-based blocking can sometimes be used to immediately block a target service by removing some of the resources it depends on. However, such blocking actions often have harmful side effects due to the global nature of Internet resources. The global mobility of Internet resources, together with the autonomy of the networks that comprise the Internet, can mean that the effects of server-based blocking are quickly negated. To adapt a quote by John Gilmore, "The Internet treats blocking as damage and routes around it".

4.3. Endpoint-Based Blocking

Internet users and their devices make thousands of decisions every day as to whether to engage in particular Internet communications. Users decide whether to click on links in suspect email messages; browsers advise users on sites that have suspicious characteristics; spam filters evaluate the validity of senders and messages. If the hardware and software making these decisions can be instructed not to engage in certain communications, then the communications are effectively blocked because they never happen.

There are several systems in place today that advise user systems about which communications they should engage in. As discussed above, several modern browsers consult with "Safe Browsing" services before loading a web site in order to determine whether the site could potentially be harmful. Spam filtering is one of the oldest blocking systems in the Internet; modern blocking systems typically make use of one or more "reputation" or "blacklist" databases in order to make decisions about whether a given message or sender should be blocked. These systems typically have the property that many blocking systems (browsers, MTAs) share a single reputation service.
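For the spam case, the conventional DNS blacklist (DNSBL) lookup described in [RFC5782] can be sketched as follows: the sender's IPv4 address is reversed octet-wise and prefixed to the blacklist zone, and any A-record answer means the sender is listed. The zone name below is the documentation example from that RFC, and no actual query is issued here:

```python
import socket

def dnsbl_query_name(sender_ip: str, zone: str = "dnsbl.example") -> str:
    """Build the DNSBL query name for an IPv4 sender (RFC 5782 convention)."""
    octets = sender_ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(sender_ip: str, zone: str = "dnsbl.example") -> bool:
    """True if the blacklist returns an answer, i.e., the sender is listed."""
    try:
        socket.gethostbyname(dnsbl_query_name(sender_ip, zone))
        return True
    except socket.gaierror:
        return False

qname = dnsbl_query_name("192.0.2.99")
```

An MTA calling is_listed() at connection time is an endpoint (or its delegate) choosing not to engage in a communication, rather than an intermediary suppressing one in transit.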

This approach to blocking is coherent with the Internet architectural principles discussed above, dealing well with the end-to-end principle, layering, mobility, and locality/autonomy.

Much like server-based blocking, endpoint-based blocking is performed at one end of an Internet communication, and thus avoids the problems related to end-to-end security mechanisms that intermediary-based blocking runs into. Endpoint-based blocking also lacks some of the limitations of server-based blocking: While server-based blocking can only see and affect the portion of an application that happens at a given server (e.g., DNS name resolution), endpoint-based blocking has visibility into the entire application, across all layers and transactions. This visibility provides endpoint-based blocking systems with a much richer set of information on which to make blocking decisions.

In particular, endpoint-based blocking deals well with adversary mobility. If a blocked service relocates resources or uses different resources, a server-based blocking approach may not be able to affect the new resources. An intermediary-based blocking system may not even be able to tell whether the new resources are being used, if the blocked service uses secure protocols. By contrast, endpoint-based blocking systems can detect when a blocked service's resources have changed (because of their full visibility into transactions) and adjust blocking as quickly as new blocking data can be sent out through a reputation system.
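This update cycle can be sketched as follows; the feed format, names, and versioning scheme are invented for illustration:

```python
# The reputation service publishes versioned updates; each endpoint
# applies any entries newer than its current state on its next cycle.
reputation_feed = [
    {"version": 1, "blocked": {"bad.example"}},
    {"version": 2, "blocked": {"bad.example", "bad-moved.example"}},  # relocated
]

class Endpoint:
    def __init__(self):
        self.version = 0
        self.blocked = set()

    def update(self, feed):
        """Apply any feed entries newer than our current version."""
        for entry in feed:
            if entry["version"] > self.version:
                self.version = entry["version"]
                self.blocked = set(entry["blocked"])

    def allow(self, name: str) -> bool:
        return name not in self.blocked

ep = Endpoint()
ep.update(reputation_feed)
verdicts = (ep.allow("bad-moved.example"), ep.allow("good.example"))
```

Once the reputation service learns the blocked service's new location, every subscribing endpoint picks up the change without any modification to routing, naming, or intermediary devices.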

Finally, in an endpoint-based blocking system, blocking actions are performed autonomously, by individual endpoints or their delegates. The effects of blocking are thus local in scope, minimizing the effects on other users or other, legitimate services.

The primary challenge to endpoint-based blocking is that it requires the cooperation of endpoints. Where this cooperation is willing, the barrier is fairly low, requiring only a reconfiguration or software update. Where it is unwilling, it can be challenging to enforce cooperation across large numbers of endpoints. If cooperation can be achieved, endpoint-based blocking can be much more effective than other approaches because it is so coherent with the Internet's architectural principles.

5. Summary of Trade-offs

Intermediary-based blocking is a relatively low-cost blocking solution in some cases, but a poor fit with the Internet architecture, especially the end-to-end principle. It thus suffers from several limitations.

Server-based blocking can provide rapid effects for resources under the control of the blocking entity, but can have limited effects due to the global, autonomous nature of Internet resources and networks.

Endpoint-based blocking matches well with the overall design of the Internet. Because it is so coherent with the Internet's architectural principles, it is the most effective form of Internet service blocking and the least harmful to the Internet.

While this document has focused on technical mechanisms used to filter Internet content, a variety of non-technical mechanisms may also be available depending on the particular context and goals of the public or private entity seeking to restrict access to content. For example, purveyors of illegal online content can be pursued through the criminal justice system, and the funding that supports their activities can be targeted through cooperation with financial services companies [click-trajectories]. Thus even in cases where endpoint-based filtering is not viewed as a viable means of restricting access to content, entities seeking to filter may find other strategies for achieving their goals that do not involve interfering with the architecture or operation of the Internet.

6. Security Considerations

The primary security concern related to Internet service blocking is the effect it has on the end-to-end security model of many Internet security protocols. When blocking is enforced by an intermediary with respect to a given communication, the blocking system may need to obtain access to confidentiality-protected data to make blocking decisions. Mechanisms for obtaining such access typically require the blocking system to defeat the authentication mechanisms built into security protocols.

For example, some enterprise firewalls will dynamically create TLS certificates under a trust anchor recognized by endpoints subject to blocking. These certificates allow the firewall to authenticate as any website, so that it can act as a man-in-the-middle on TLS connections passing through the firewall.
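One way such interception manifests to an inspecting endpoint is that the presented certificate chains to the local trust anchor instead of the site's usual public CA; a certificate-pinning check along the following lines (issuer names invented for illustration) would flag it:

```python
# A pinning endpoint compares the issuer of the presented certificate
# against the issuer it has previously pinned for this site.
PINNED_ISSUER = "CN=Example Public CA"

def interception_suspected(observed_issuer: str) -> bool:
    """Flag TLS sessions whose certificate was not issued by the pinned CA."""
    return observed_issuer != PINNED_ISSUER

results = (
    interception_suspected("CN=Example Public CA"),  # normal session
    interception_suspected("CN=Corp Firewall CA"),   # firewall-minted certificate
)
```

This is only a sketch of the idea; real pinning implementations compare key or certificate hashes rather than issuer name strings.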

Modifications such as these obviously make the firewall itself a point of weakness. If an attacker can gain control of the firewall or compromise the key pair used by the firewall to sign certificates, the attacker will have access to the plaintext of all TLS sessions for all users behind that firewall, in a way that is undetectable to users.

When blocking systems are unable to inspect and block secure protocols, it is tempting to simply block those protocols. For example, a web blocking system that is unable to hijack HTTPS connections might simply block any attempted HTTPS connection. However, since Internet security protocols are commonly used for critical services such as online commerce and banking, blocking these protocols would block access to these services as well, or worse, force them to be conducted over insecure protocols.

Security protocols can, of course, also be used as a mechanism for blocking services. For example, if a blocking system can insert invalid credentials for one party in an authentication protocol, then the other end will typically terminate the connection based on the authentication failure. However, it is typically much simpler to block secure protocols outright than to exploit them for service blocking.

7. References

[RFC1122] Braden, R., "Requirements for Internet Hosts - Communication Layers", STD 3, RFC 1122, October 1989.
[RFC2775] Carpenter, B.E., "Internet Transparency", RFC 2775, February 2000.
[RFC3724] Kempf, J., Austein, R., IAB, "The Rise of the Middle and the Future of End-to-End: Reflections on the Evolution of the Internet Architecture", RFC 3724, March 2004.
[RFC4033] Arends, R., Austein, R., Larson, M., Massey, D. and S. Rose, "DNS Security Introduction and Requirements", RFC 4033, March 2005.
[RFC4084] Klensin, J., "Terminology for Describing Internet Connectivity", BCP 104, RFC 4084, May 2005.
[RFC4301] Kent, S. and K. Seo, "Security Architecture for the Internet Protocol", RFC 4301, December 2005.
[RFC4924] Aboba, B. and E. Davies, "Reflections on Internet Transparency", RFC 4924, July 2007.
[RFC5246] Dierks, T. and E. Rescorla, "The Transport Layer Security (TLS) Protocol Version 1.2", RFC 5246, August 2008.
[RFC5782] Levine, J., "DNS Blacklists and Whitelists", RFC 5782, February 2010.
[RFC6480] Lepinski, M. and S. Kent, "An Infrastructure to Support Secure Internet Routing", RFC 6480, February 2012.
[RojaDirecta] Masnick, M.M., "Homeland Security Seizes Spanish Domain Name That Had Already Been Declared Legal", 2011.
[US-ICE] U.S. Immigration and Customs Enforcement, "Operation in Our Sites", 2011.
[SafeBrowsing] Google, "Safe Browsing API", 2012.
[GhostClickRIPE] RIPE NCC, "RIPE NCC Blocks Registration in RIPE Registry Following Order from Dutch Police", 2012.
[Telex] Wustrow, E., Wolchok, S., Goldberg, I. and J.A. Halderman, "Telex: Anticensorship in the Network Infrastructure", August 2011.
[RenesysPK] Brown, M., "Pakistan hijacks YouTube", February 2008.
[EarthquakeHT] Raj Upadhaya, G., ".ht: Recovering DNS from the Quake", March 2010.
[ISOCFiltering] Internet Society, "DNS: Finding Solutions to Illegal On-line Activities", 2012.
[Tor] The Tor Project, "Tor Project: Anonymity Online", 2012.
[click-trajectories] Levchenko, K., Pitsillidis, A., Chacra, N., Enright, B., Felegyhazi, M., Grier, C., Halvorson, T., Kreibich, C., Liu, H., McCoy, D., Weaver, N., Paxson, V., Voelker, G.M. and S. Savage, "Click Trajectories: End-to-End Analysis of the Spam Value Chain", 2011.

Authors' Addresses

Richard Barnes BBN Technologies 1300 N. 17th St Arlington, VA 22209 USA Phone: +1 703 284 1340 EMail:
Alissa Cooper CDT EMail:
Olaf Kolkman NLnet Labs EMail: