@JohnONolan
Created September 25, 2018 22:10
<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[The Cloudflare Blog]]></title><description><![CDATA[Helping Build a Better Internet]]></description><link>https://blog.cloudflare.com/</link><image><url>https://blog.cloudflare.com/favicon.png</url><title>The Cloudflare Blog</title><link>https://blog.cloudflare.com/</link></image><generator>Ghost 2.1</generator><lastBuildDate>Tue, 25 Sep 2018 20:12:48 GMT</lastBuildDate><atom:link href="https://blog.cloudflare.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Get a head start with QUIC]]></title><description><![CDATA[Today Cloudflare opened the door on our beta deployment of QUIC with the announcement of our test site: cloudflare-quic.com. ]]></description><link>https://blog.cloudflare.com/head-start-with-quic/</link><guid isPermaLink="false">5ba531559fbc7c00bf371b21</guid><category><![CDATA[Birthday Week]]></category><category><![CDATA[Product News]]></category><category><![CDATA[QUIC]]></category><category><![CDATA[Programming]]></category><category><![CDATA[Performance]]></category><dc:creator><![CDATA[Nick Jones]]></dc:creator><pubDate>Tue, 25 Sep 2018 12:01:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/QUIC-Illustration--copy@2x.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/QUIC_cloudflare.png" class="kg-image" alt="Get a head start with QUIC"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/QUIC-Illustration--copy@2x.png" alt="Get a head start with QUIC"><p></p><p>Today Cloudflare opened the door on our beta deployment of QUIC with the <a href="https://blog.cloudflare.com/the-quicening">announcement</a> of our test site: cloudflare-quic.com. It supports the latest published draft of the IETF Working Group’s <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-transport/">draft standard for QUIC</a>, which at this time is at: <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-transport/14/">draft 14</a>.</p><p>The Cloudflare Systems Engineering Team has a long history of investing time and effort to trial new technologies, often before these technologies are standardised or adopted elsewhere. We deployed early experiments in standards such as: <a href="https://blog.cloudflare.com/introducing-http2/">HTTP/2</a>, <br><a href="https://blog.cloudflare.com/why-tls-1-3-isnt-in-browsers-yet/">TLS1.3</a>, <a href="https://blog.cloudflare.com/dnssec-an-introduction/">DNSSEC</a>, <a href="https://blog.cloudflare.com/dns-resolver-1-1-1-1/">DNS over HTTP</a>, <a href="https://blog.cloudflare.com/esni">Encrypted SNI</a>, when they were still in incubation. We committed to these technologies in their very early stages because we believed that they made for a safer, faster, better internet. And now we’re excited to do the same with QUIC.</p><p>In this blog post, we will show you how you can unlock the <strong>cloudflare-quic.com</strong> achievement and be some of the first people in the world to perform a HTTP transaction over the global internet using QUIC. 
This will be a moment that you can tell your grandkids about - if they can stop laughing at your stories of cars with wheels and use of antiquated words like: “meme” and “phone”.</p><p>But before we begin, let’s take a little bit of time to review what QUIC is. Our previous blog post <a href="https://blog.cloudflare.com/the-road-to-quic/">The Road to QUIC</a> by my colleague <a href="https://blog.cloudflare.com/author/alessandro-ghedini/">Alessandro Ghedini</a>, gives an excellent introduction to QUIC; its goals, its challenges, and many of the technical advantages that will come with it. It is good background reading for this article and a great introduction to the topic of QUIC in general.</p><p>If you visit <a href="https://cloudflare-quic.com">cloudflare-quic.com</a> with your regular web browser, you will be presented with an informative landing page. However, what you see will not be delivered using QUIC, because at the time this blog is posted, your browser doesn’t support IETF QUIC. No graphical browser does.</p><p>Some may point out that Google Chrome has had support for QUIC for many years, but we must re-iterate that the protocol supported by Chrome is Google’s own UDP based transport layer protocol. That protocol was once called QUIC but has forfeited that label and now goes by the name gQUIC, and what’s more, the mechanics of gQUIC are now significantly different to IETF QUIC.</p><h3 id="getting-quic">Getting QUIC</h3><p>The only way to access <strong>cloudflare-quic.com</strong> using the QUIC protocol is to use a command line client from one of the various implementations of QUIC that are actively evolving alongside the IETF standard. Most of these implementations can be found <a href="https://github.com/quicwg/base-drafts/wiki/Implementations">here</a>. If you are familiar with any of these, you are welcome to try them against <strong>cloudflare-quic.com</strong> however please note that your client of choice must support <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-transport/14/"><strong>draft 14</strong></a> of the IETF QUIC standard.</p><p>Our preferred QUIC client, and the one whose output we will be analysing in this blog, comes as part of the ngtcp2 project. The original project is located here: <a href="https://github.com/ngtcp2/ngtcp2">github.com/ngtcp2/ngtcp2</a>, but we are hosting our own copy here: <a href="https://github.com/cloudflare/ngtcp2/tree/quic-draft-14">github.com/cloudflare/ngtcp2/tree/quic-draft-14</a> so that we may be sure you get the exact resources you need for this demonstration.</p><p>Before proceeding please be aware that the following instructions will require you to build software from source code. ngtcp2 and its dependencies are buildable on multiple Operating System platforms, however, the processes described below are more likely to succeed on Linux. To start with, you will need:</p><ul><li>A POSIX-flavoured operating system, for example: Ubuntu Linux</li><li>To install core software development tools: gcc or clang, libc development packages, make, autoconf, automake, autotools-dev, libtool, pkg-config, git</li><li>To install some additional software dependencies: C Unit tester: (cunit &gt;=2.1), libev development packages. Check the homepage of Cloudflare ngtcp2 copy if you are unsure.</li></ul><p>Once you are confident with your setup, run the following commands to retrieve and build the ngtcp2 client and its major dependency OpenSSL:</p><pre><code>$ git clone --depth 1 -b quic-draft-14 https://github.com/tatsuhiro-t/openssl
$ cd openssl
$ ./config enable-tls1_3 --prefix=$PWD/build
$ make
$ make install_sw
$ cd ..
$ git clone -b quic-draft-14 https://github.com/cloudflare/ngtcp2
$ cd ngtcp2
$ autoreconf -i
$ ./configure PKG_CONFIG_PATH=$PWD/../openssl/build/lib/pkgconfig LDFLAGS=&quot;-Wl,-rpath,$PWD/../openssl/build/lib&quot;
$ make check
</code></pre>
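<p>If the build complains about missing tools, the most likely culprit is one of the prerequisites listed above. As a convenience, here is a rough sanity check in a few lines of throwaway Python; it only covers the core command line tools named above (not their versions, and not the cunit or libev development packages):</p><pre><code># Rough check that the core build tools listed above are on your PATH.
import shutil

tools = ['gcc', 'make', 'git', 'autoconf', 'automake', 'libtool', 'pkg-config']
missing = [t for t in tools if shutil.which(t) is None]
print('missing tools:', ', '.join(missing) if missing else 'none')
</code></pre>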
<h3 id="testing-quic">Testing QUIC</h3><p>If you are still with me, congratulations! The next step is to pre-fabricate a HTTP/1.1 request that we can pass to our QUIC client, in order to avoid typing it out repeatedly. From your ngtcp2 directory, invoke the command:</p><pre><code>$ echo -ne “GET / HTTP/1.1\r\nHost: cloudflare-quic.com\r\n\r\n” &gt; cloudflare-quic.req
</code></pre>
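<p>If your shell mangles the quotes or the backslash escapes (the file must contain real CR LF bytes, not literal backslashes), the same request file can be produced with a couple of lines of Python instead:</p><pre><code># Write the same HTTP/1.1 request bytes as the echo command above.
request = b'GET / HTTP/1.1\r\nHost: cloudflare-quic.com\r\n\r\n'
with open('cloudflare-quic.req', 'wb') as f:
    f.write(request)
</code></pre>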
<p>One of the promises of QUIC is the new <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-http/">QUIC HTTP</a> protocol, which is another IETF standard being developed in conjunction with the QUIC transport layer. It is a re-engineering of the HTTP/2 protocol to allow it to benefit from the many advantages of QUIC.</p><p>The design of QUIC HTTP is in a high state of flux at this time and is an elusive target for software implementors, but it is clearly on the Cloudflare product roadmap. For now, <strong>cloudflare-quic.com</strong> will use HTTP/1.1 for utility and simplicity.</p><p>Now it’s time to invoke the ngtcp2 command line client and establish your QUIC connection to <strong>cloudflare-quic.com</strong>:</p><pre><code>$ examples/client cloudflare-quic.com 443 -d cloudflare-quic.req
</code></pre>
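<p>The client prints a lot of debugging output, so it can be handy to keep a copy for the analysis that follows. You can simply redirect the command above to a file, or use a small wrapper along these lines (the log file name here is arbitrary, and it assumes you are running from the ngtcp2 directory where the client was built):</p><pre><code># Run the ngtcp2 example client and keep a copy of its debug output.
import subprocess

result = subprocess.run(
    ['examples/client', 'cloudflare-quic.com', '443', '-d', 'cloudflare-quic.req'],
    capture_output=True, text=True)

with open('quic-session.log', 'w') as log:
    log.write(result.stdout)
    log.write(result.stderr)

print('client exit status:', result.returncode)
</code></pre>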
<p>To be perfectly honest, the debugging output of the ngtcp2 client is not particularly pretty, but who cares! You are now a QUIC pioneer, riding the crest of a new technological wave! Your reward will be the eye-rolls of the teenagers of 2050.</p><h3 id="the-handshake">The HANDSHAKE</h3><p>Let’s go over some of the ngtcp2 debugging output that you hopefully can see after invoking your HTTP request over QUIC, and at the same time, let’s relate this output back to some important features of the QUIC protocol.</p><h4 id="client-hello">Client HELLO</h4><pre><code>01 I00000000 0x07ff706cb107568ef7116f5f58a9ed9010 pkt tx pkt 0 dcid=0xba006470cf7c05009e219ff201e4adbef8a3 scid=0x07ff706cb107568ef7116f5f58a9ed9010 type=Initial(0x7f) len=0
02 I00000000 0x07ff706cb107568ef7116f5f58a9ed9010 frm tx 0 Initial(0x7f) CRYPTO(0x18) offset=0 len=309
03 I00000000 0x07ff706cb107568ef7116f5f58a9ed9010 frm tx 0 Initial(0x7f) PADDING(0x00) len=878
04 I00000000 0x07ff706cb107568ef7116f5f58a9ed9010 rcv loss_detection_timer=1537267827966896128 last_hs_tx_pkt_ts=1537267827766896128 timeout=200
</code></pre>
<p>Above is what the QUIC protocol calls the client initial packet. It is the packet that is sent to establish a completely new connection between the client and the QUIC server.</p><p>The element: <code>scid</code> on line <code>01</code> is an example of a source connection ID. This is the unique number that the client chooses for itself when sending an initial packet. In the example output above, the value of the client scid is: <code>0x07ff706cb107568ef7116f5f58a9ed9010</code> but you will see a different value. In the ngtcp2 client utility, this number is purely random, as the QUIC connection will only last as long as the command runs, and therefore doesn’t need to carry much meaning. In future, more complex QUIC clients (such as web browsers) will likely choose their source connection IDs more carefully. Future QUIC servers will certainly do this, as connection IDs are a good place to encode information.</p><p>Encoding information in source connection ids is of particular interest to an organisation like Cloudflare, where a single IP address can represent thousands of physical servers. To support QUIC in an infrastructure like ours, routing of UDP QUIC packets will need to be done to a greater level of precision than can be represented in an IP address, and large, data packed connection IDs will be very useful for this. But enough about us, this blog is about you!</p><p>The element: <code>dcid</code>, also on line <code>01</code>, is the destination connection ID. In the client initial phase, this is always random as per the QUIC specification, because the client wants to be sure that it is treated as ‘new’ by the QUIC server. A random <code>dcid</code>, particularly one that is the maximum allowed length of 144bits, combined with a large source connection id, has a sufficiently high statistical chance of being unique so as to not clash with a connection id that the QUIC server has already registered. Later we will see what the QUIC server does with the random destination connection ID that the client has chosen.</p><p>On line <code>02</code>, we see that the client initial packet includes a <code>CRYPTO</code> frame that contains the TLS client hello message. This clearly demonstrates one of the significant advantages in the design of QUIC: the overlapping of transport layer connection establishment and TLS layer negotiation. Both of these processes necessitate some back and forth between a client and server for both TLS over TCP and for QUIC.</p><p>In TLS over TCP the two processes happen one after the other:</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/http-request-over-tcp-tls@2x.png" class="kg-image" alt="Get a head start with QUIC"></figure><p></p><p>You can count a total of FOUR round trips between the client &amp; the server before a HTTP request can be made! Now compare that with QUIC, where they happen at the same time:</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/http-request-over-quic@2x.png" class="kg-image" alt="Get a head start with QUIC"></figure><p></p><p>That’s a 50% reduction! With just TWO round trips before you can start making HTTP requests.</p><p>Returning to the analysis of the ngtcp2 debug output, we can see the client initial packet adds a <code>PADDING</code> frame in order to bring the packet to a minimum size mandated by the QUIC specification. 
The reason for this is twofold:</p><p>Firstly, to ensure that the network between the QUIC client and server can support satisfactorily large UDP packets. Sadly, UDP is a second-class citizen on the wide internet, generally only being used to transmit single, small, unrelated packets. QUIC counters all three of these patterns, so the rationale here is: if it’s going to get stuck, better to find out early. The quality of network support for streams of UDP will hopefully evolve alongside QUIC.</p><p>Secondly, to reduce the effectiveness of amplification attacks. This type of attack is where bad actors take advantage of network services that produce server responses vastly greater in size than the barely-validated request that solicited them. By spoofing the address of a victim, a bad actor can bombard the victim with large volumes of server responses given a relatively small volume of requests to the server. By requiring that an initial request be large, QUIC helps to make the amplification value much lower. UDP-based amplification attacks are a very real issue, and you can read Cloudflare's account of such an attack <a href="https://blog.cloudflare.com/memcrashed-major-amplification-attacks-from-port-11211/">here</a>.</p><p>QUIC defines a number of other mechanisms to protect against amplification attacks as well as DDoS attacks and you will see some of these a bit later.</p><h4 id="server-hello">Server HELLO</h4><p>Further down you will see the first packet returned from the server, which is the server initial packet:</p><pre><code>01 I00000160 0x07ff706cb107568ef7116f5f58a9ed9010 pkt rx pkt 0 dcid=0x07ff706cb107568ef7116f5f58a9ed9010 scid=0x3afafde2c24248817832ffe545d874a2a01f type=Initial(0x7f) len=111
02 I00000160 0x07ff706cb107568ef7116f5f58a9ed9010 frm rx 0 Initial(0x7f) CRYPTO(0x18) offset=0 len=90
03 I00000314 0x07ff706cb107568ef7116f5f58a9ed9010 cry remote transport_parameters negotiated_version=0xff00000e
04 I00000314 0x07ff706cb107568ef7116f5f58a9ed9010 cry remote transport_parameters supported_version[0]=0xff00000e
05 I00000314 0x07ff706cb107568ef7116f5f58a9ed9010 frm tx 3 Initial(0x7f) ACK(0x0d) largest_ack=1 ack_delay=0(0) ack_block_count=0
06 I00000314 0x07ff706cb107568ef7116f5f58a9ed9010 frm tx 3 Initial(0x7f) ACK(0x0d) block=[1..0] block_count=1
</code></pre>
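<p>A small aside on the version numbers visible on lines <code>03</code> and <code>04</code> of this output: IETF QUIC drafts identify themselves with version values of the form 0xff000000 plus the draft number, so 0xff00000e corresponds to draft 14. If you want to convince yourself, the arithmetic is trivial (plain Python, nothing QUIC-specific):</p><pre><code># IETF QUIC draft versions are encoded as 0xff000000 + the draft number.
negotiated_version = 0xff00000e  # value from the transport_parameters lines above
print('draft', negotiated_version - 0xff000000)  # prints: draft 14
</code></pre>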
<p>The response destination connection id (<code>dcid</code> on line <code>01</code>) is the client’s original source connection ID (<code>scid</code>) which in this example is: <code>0x07ff706cb107568ef7116f5f58a9ed9010</code>.</p><p>The server has now discarded the client’s randomly-chosen <code>dcid</code> after finding that the client is ‘new’, and replaced it with its own connection ID which you can see as the packet source connection ID <code>scid</code> on line <code>01</code>, which in this example is: <code>0x3afafde2c24248817832ffe545d874a2a01</code>.</p><p>Starting from this point, both the QUIC client and server recognise each other’s connection IDs, opening the door to a powerful feature of QUIC: connection migration. Connection migration will allow QUIC clients and servers to change their IP addresses and ports, but still maintain the QUIC connection. QUIC packets arriving from or to the new IP/port can continue to be handled because the connection ID, which has not changed, will act as the primary identifier of the connection context. For our first <strong>cloudflare-quic.com</strong> demonstration, connection migration is not supported, but we’ll be working on this as we develop our QUIC offerings.</p><p>The server initial packet contains the next part of the TLS handshake, found in the <code>CRYPTO</code> frame on line <code>01</code>, which is the first part of the TLS server hello and may contain elements such as handshake key material and the beginning of the server’s certificate chain.</p><p>Lines <code>03</code> and <code>04</code> show the exchange of <code>transport parameters</code>, which are QUIC specific configuration values declared by one side to the other and used to control various aspects of the connection. These parameters are encoded and transmitted within the TLS handshake. This not only reiterates the close relationship between the TLS and transport layers in QUIC, but also demonstrates QUIC’s focus on security, as the exchange of these parameters will be protected against outside tampering as part of the TLS handshake.</p><p>Lines <code>05</code> and <code>06</code> show an example of some acknowledgement frames being sent from the client to the server. Acknowledgements are part of the QUIC loss detection mechanism that deals with data losses that inevitably happen on large networks, however during the handshake phase, acknowledgements also have another use: to hint at the validity of the client by proving to the server that a client is truly interested in communicating with the server and is not at a spoofed address.</p><p>Without any form of source validation, QUIC servers will severely limit the amount of data that they send to clients. This protects helpless, spoofed victims of amplification attacks (in conjunction with the client initial packet minimum size requirement described above), and also helps protect the QUIC server from the equivalent of a TCP SYN attack, by constraining the commitment that the QUIC server will make to an unvalidated client.</p><p>For Cloudflare, there are vast unknowns in regard to DDoS and SYN style attacks against QUIC and it is a subject we are supremely interested in. 
While these threat models remain unknown, our protections around <strong>cloudflare-quic.com</strong> will be effective but… remorseless.</p><h4 id="the-request">The REQUEST</h4><p>Once the TLS handshake is complete, we can see the transmission of the first layer 7 data:</p><pre><code>01 I00000315 0xac791937b009b7a61927361d9d453b48e0 pkt tx pkt 0 dcid=0x3afafde2c24248817832ffe545d874a2a01f scid=0x07ff706cb107568ef7116f5f58a9ed9010 type=Short(0x00) len=0
02 I00000315 0xac791937b009b7a61927361d9d453b48e0 frm tx 0 Short(0x00) STREAM(0x13) id=0x0 fin=1 offset=0 len=45 uni=0
03 Ordered STREAM data stream_id=0x0
04 00000000 47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a |GET / HTTP/1.1..|
05 00000010 48 6f 73 74 3a 20 63 6c 6f 75 64 66 6c 61 72 65 |Host: cloudflare|
06 00000020 2d 71 75 69 63 2e 63 6f 6d 0d 0a 0d 0a |-quic.com....|
</code></pre>
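<p>The hex dump on lines <code>04</code> to <code>06</code> is simply the request file we created earlier. If you want to double-check any of these dumps yourself, the hex columns convert straight back to ASCII, for example with a throwaway Python snippet:</p><pre><code># Decode the hex columns of the ngtcp2 stream dump above back into text.
hex_bytes = (
    '47 45 54 20 2f 20 48 54 54 50 2f 31 2e 31 0d 0a '
    '48 6f 73 74 3a 20 63 6c 6f 75 64 66 6c 61 72 65 '
    '2d 71 75 69 63 2e 63 6f 6d 0d 0a 0d 0a'
)
print(bytes.fromhex(hex_bytes).decode('ascii'))
</code></pre>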
<p>This fragment of the HTTP transaction is transmitted inside what is called a QUIC <code>STREAM</code>, seen on line <code>02</code>. QUIC streams are one or more communication channels multiplexed within a QUIC connection. QUIC streams are analogous to discrete TCP connections in that they provide data ordering and reliability guarantees, as well as data exchange that is independent from one another. But QUIC streams have some other advantages:</p><p>Firstly, QUIC streams are extremely fast to create as they rely on the authenticated client server relationship previously established by the QUIC connection. Evidence of this can be seen in the example above where a stream’s data is transmitted in the same packet that the stream was established.</p><p>Secondly, because ordering and reliability are independent for each QUIC stream, the loss of data belonging to one stream will not affect any other streams, providing a solution to the head of line blocking problem that affects protocols that multiplex over TCP, like HTTP/2.</p><h4 id="the-response">The RESPONSE</h4><p>Now you should be able to see the fruit of your QUIC toil: the HTTP response!</p><pre><code>01 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 con recv packet len=719
02 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 pkt rx pkt 3 dcid=0x07ff706cb107568ef7116f5f58a9ed9010 scid=0x type=Short(0x00) len=0
03 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 frm rx 3 Short(0x00) MAX_DATA(0x04) max_data=1048621
04 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 frm rx 3 Short(0x00) STREAM(0x12) id=0x0 fin=0 offset=0 len=675 uni=0
Ordered STREAM data stream_id=0x0
05 00000000 48 54 54 50 2f 31 2e 31 20 32 30 30 20 4f 4b 0d |HTTP/1.1 200 OK.|
06 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 con recv packet len=45
07 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 pkt rx pkt 4 dcid=0x07ff706cb107568ef7116f5f58a9ed9010 scid=0x type=Short(0x00) len=0
08 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 frm rx 4 Short(0x00) STREAM(0x16) id=0x0 fin=0 offset=675 len=5 uni=0
Ordered STREAM data stream_id=0x0
09 00000000 31 63 65 0d 0a |1ce..|
10 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 con recv packet len=503
11 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 pkt rx pkt 5 dcid=0x07ff706cb107568ef7116f5f58a9ed9010 scid=0x type=Short(0x00) len=0
12 I00001755 0x07ff706cb107568ef7116f5f58a9ed9010 frm rx 5 Short(0x00) STREAM(0x16) id=0x0 fin=0 offset=680 len=462 uni=0
Ordered STREAM data stream_id=0x0
13 00000000 3c 21 44 4f 43 54 59 50 45 20 68 74 6d 6c 3e 0a |&lt;!DOCTYPE html&gt;.|
</code></pre>
<p>As can be seen on line <code>04</code>, the response arrives on the same QUIC <code>STREAM</code> on which it was sent: (<code>0x0</code>).</p><p>Many other familiar faces can be seen: line <code>05</code>: the start of the response headers, line <code>09</code>: the chunked encoding header and line <code>13</code>: the start of the response body. It looks almost… normal!</p><h3 id="summary">Summary</h3><p>Thank you for following us on this QUIC odyssey! We understand that the process of building the ngtcp2 example client may be new for some people, but we urge you to keep trying and make use of online resources to help you if you come up against anything unexpected.</p><p>But if all went well, and you managed to see the HTTP response from <strong>cloudflare-quic.com</strong>, then: <strong>Congratulations!</strong> You and your screen full of debugging gibberish are on the crest of a new wave of internet communication.</p><ul><li>Please take a screenshot or a selfie!</li><li>Please tell us about it in the comments below!</li><li>Please take some time to compare the output you see with the points of interest that I have highlighted above.</li><li>And...please visit our blog again to keep up with our developments with QUIC, as support for this exciting new protocol develops.<br></li></ul><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on all our Birthday Week announcements.</em></p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Birthday-Week-6.png" class="kg-image" alt="Get a head start with QUIC"></figure><p><br><br></p>]]></content:encoded></item><item><title><![CDATA[The QUICening]]></title><description><![CDATA[Six o’clock already, I was just in the middle of a dream, now I’m up, awake, looking at my Twitter stream. As I do that the Twitter app is making multiple API calls over HTTPS to Twitter’s servers somewhere on the Internet.]]></description><link>https://blog.cloudflare.com/the-quicening/</link><guid isPermaLink="false">5ba8f9af9fbc7c00bf371b7b</guid><category><![CDATA[Birthday Week]]></category><category><![CDATA[Product News]]></category><category><![CDATA[QUIC]]></category><category><![CDATA[HTTPS]]></category><category><![CDATA[Security]]></category><category><![CDATA[Beta]]></category><dc:creator><![CDATA[John Graham-Cumming]]></dc:creator><pubDate>Tue, 25 Sep 2018 12:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/full-commute-share_1@2x.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cloudflare.com/content/images/2018/09/full-commute-share_1@2x.png" alt="The QUICening"><p>Six o’clock already, I was just in the middle of a dream, now I’m up, awake, looking at my Twitter stream. As I do that the Twitter app is making multiple API calls over HTTPS to Twitter’s servers somewhere on the Internet.</p><p>Those HTTPS connections are running over TCP via my home WiFi and broadband connection. All’s well inside the house, the WiFi connection is interference free thanks to my eero system, the broadband connection is stable and so there’s no packet loss, and my broadband provider’s connection to Twitter’s servers is also loss free.</p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/happy-home-.svg" class="kg-image" alt="The QUICening"></figure><p></p><p>Those are the perfect conditions for HTTPS running over TCP. 
Not a packet dropped, not a bit of jitter, no congestion. It’s even the perfect conditions for HTTP/2 where multiple streams of requests and responses are being sent from my phone to websites and APIs as I boot my morning. Unlike HTTP/1.1, HTTP/2 is able to use a single TCP connection for multiple, simultaneously in flight requests. That has a significant speed advantage over the old way (one request after another per TCP connection) when conditions are good.</p><p>But I have to catch an early train, got to be to work by nine, so I step out of the front door and my phone silently and smoothly switches from my home WiFi to 4G. All’s not well inside the phone’s apps though. The TCP connections in use between Chrome and apps, and websites and APIs are suddenly silent. Those HTTPS connections are in trouble and about to fail; errors are going to occur deep inside apps. I’m going to see sluggish response from my phone.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/dropped-tcp-.svg" class="kg-image" alt="The QUICening"></figure><p></p><p>The IP address associated with my phone has abruptly changed as I go from home to roam. TCP connections either stall or get dropped resulting in a weird delay while internal timers inform apps that connections have disappeared or as connections are re-established. It’s irritating, because it takes me so long just to figure out what I'm gonna wear, and now I’m waiting for an app that worked fine moments ago.</p><p>The same thing will happen multiple times on my trip as I jump around the cell towers and service providers along the route. It might be tempting to blame it on the train, but it’s really that the Internet was never meant to work this way. We weren’t meant to be carrying around pocket supercomputers that roam across lossy, noisy networks all the while trying to remain productive while complaining about sub-second delays in app response time.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/full-commute.svg" class="kg-image" alt="The QUICening"></figure><p></p><p>One proposed solution to these problems is QUIC: a new way to send packets across the Internet that takes into account what a messy place the Internet really is. A place where people don’t stand still and use the same IP address all the time (the horror!), a place where packets get lost because of radio reflections off concrete buildings (the madness!), a place with no Waze (the insanity!) where congestion comes and goes without a live map.</p><p>QUIC tries to make an HTTPS connection between a computer (phone) and server work reliably despite the poor conditions, it does this with a collection of technologies.</p><p>The first is UDP to replace TCP. UDP is widely used for fire-and-forget protocols where packets are sent but their arrival or ordering is not guaranteed (TCP provides the opposite: it guarantees arrival order and delivery but at a cost). Because UDP doesn’t have TCP’s guarantees it allows developers to innovate new protocols that do guarantee delivery and ordering (on top of UDP) that can incorporate features that TCP lacks.</p><p>One such feature is end-to-end encryption. All QUIC connections are fully encrypted. Another proposed feature is forward-error correction or FEC. 
When NASA’s Deep Space Network talks to the Voyager 2 spacecraft (which recently left our solar system) it transmits messages that become garbled crossing 17.6 billion km of space (that’s about 11 billion miles). Voyager 2 can’t send back the equivalent of “Say again?” when it receives a garbled message so the messages sent to Voyager 2 contain error-correcting codes that allow it to reconstruct the message from the mess.</p><p>Similarly, QUIC plans to incorporate error-correcting codes that allow missing data to be reconstructed. Although an app or server can send the “Say again?” message, it’s faster if an error-correcting code stops that being needed. The result is snappy apps and websites even in difficult Internet conditions.</p><p>QUIC also solves the HTTP/2 HoL problem. HoL is head of line blocking: because HTTP/2 sits on top of TCP and TCP guarantees delivery order if a packet gets lost the entire TCP connection has to wait while the missing packet is retransmitted. That’s OK if only one stream of data is passing over the TCP connection, but for efficiency it’s better to have multiple streams per connection. Sadly that means all streams wait when a packet gets lost. QUIC solves that because it doesn’t rely on TCP for delivery and ordering and can make an intelligent decision about which streams need to wait and which can continue when a packet goes astray.</p><p>Finally, one of the slower parts of a standard HTTP/2 over TCP connection is the very beginning. When the app or browser makes a connection there’s an initial handshake at the TCP level followed by a handshake to establish encryption. Over a high latency connection (say on a mobile phone on 3G) that creates a noticeable delay. Since QUIC controls all aspects of the connect it merges together connection and encryption into a single handshake.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/full-commute-copy.svg" class="kg-image" alt="The QUICening"></figure><p></p><p>Hopefully, this blog post has helped you see the operation of HTTPS on the real, messy, roaming Internet in a different light. Nick’s more <a href="https://blog.cloudflare.com/head-start-with-quic/">technical blog</a> will tell you how to test out QUIC for yourself. Visit <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a> to get started.</p><p>If you want to join the early access program for QUIC from Cloudflare you’ll find a button on the <a href="https://dash.cloudflare.com?zone=network">Network</a> tab in the Cloudflare Dashboard.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/image4-1.png" class="kg-image" alt="The QUICening"></figure><p></p><p>As we did with TLS 1.3 we’ll be working closely with IETF as QUIC develops and be continually rolling out the latest versions of the standard as they are created. 
We look forward to the day when all your connections are QUIC!</p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on all our Birthday Week announcements.</em></p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Birthday-Week-7.png" class="kg-image" alt="The QUICening"></figure>]]></content:encoded></item><item><title><![CDATA[Encrypt it or lose it: how encrypted SNI works]]></title><description><![CDATA[Today we announced support for encrypted SNI, an extension to the TLS 1.3 protocol that improves privacy of Internet users.]]></description><link>https://blog.cloudflare.com/encrypted-sni/</link><guid isPermaLink="false">5ba13abdc24d3800bf438c3a</guid><category><![CDATA[Birthday Week]]></category><category><![CDATA[Product News]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[Security]]></category><category><![CDATA[TLS]]></category><category><![CDATA[TLS 1.3]]></category><category><![CDATA[DNS]]></category><category><![CDATA[Reliability]]></category><dc:creator><![CDATA[Alessandro Ghedini]]></dc:creator><pubDate>Mon, 24 Sep 2018 12:01:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/esni-3@3.5x-2.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/esni-3@3.5x-1.png" class="kg-image" alt="Encrypt it or lose it: how encrypted SNI works"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/esni-3@3.5x-2.png" alt="Encrypt it or lose it: how encrypted SNI works"><p></p><p>Today we announced <a href="https://blog.cloudflare.com/esni">support for encrypted SNI</a>, <a href="https://tools.ietf.org/html/draft-ietf-tls-esni">an extension</a> to the <a href="https://blog.cloudflare.com/rfc-8446-aka-tls-1-3/">TLS 1.3</a> protocol that improves privacy of Internet users by preventing on-path observers, including ISPs, coffee shop owners and firewalls, from intercepting the TLS Server Name Indication (SNI) extension and using it to determine which websites users are visiting.</p><p>Encrypted SNI, together with other Internet security features already offered by Cloudflare for free, will make it harder to censor content and track users on the Internet. Read on to learn how it works.</p><h3 id="snwhy">SNWhy?</h3><p>The TLS Server Name Indication (SNI) extension, <a href="https://tools.ietf.org/html/rfc3546">originally standardized back in 2003</a>, lets servers host multiple TLS-enabled websites on the same set of IP addresses, by requiring clients to specify which site they want to connect to during the initial TLS handshake. Without SNI the server wouldn’t know, for example, which certificate to serve to the client, or which configuration to apply to the connection.</p><p>The client adds the SNI extension containing the hostname of the site it’s connecting to to the ClientHello message. It sends the ClientHello to the server during the TLS handshake. 
Unfortunately the ClientHello message is sent unencrypted, due to the fact that client and server don’t share an encryption key at that point.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/tls13_unencrypted_server_name_indication-2.png" class="kg-image" alt="Encrypt it or lose it: how encrypted SNI works"><figcaption><em>TLS 1.3 with Unencrypted SNI</em></figcaption></figure><p></p><p>This means that an on-path observer (say, an ISP, coffee shop owner, or a firewall) can intercept the plaintext ClientHello message, and determine which website the client is trying to connect to. That allows the observer to track which sites a user is visiting.</p><p>But with SNI encryption the client encrypts the SNI even though the rest of the ClientHello is sent in plaintext. </p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/tls13_encrypted_server_name_indication-1.png" class="kg-image" alt="Encrypt it or lose it: how encrypted SNI works"><figcaption><em>TLS 1.3 with Encrypted SNI</em></figcaption></figure><p></p><p>So how come the original SNI couldn’t be encrypted before, but now it can? Where does the encryption key come from if client and server haven’t negotiated one yet?</p><h3 id="if-the-chicken-must-come-before-the-egg-where-do-you-put-the-chicken">If the chicken must come before the egg, where do you put the chicken?</h3><p>As with <a href="https://datatracker.ietf.org/meeting/101/materials/slides-101-dnsop-sessa-the-dns-camel-01">many other Internet features</a> the answer is simply “DNS”. </p><p>The server publishes a <a href="https://en.wikipedia.org/wiki/Public-key_cryptography">public key</a> on a well-known DNS record, which can be fetched by the client before connecting (as it already does for A, AAAA and other records). The client then replaces the SNI extension in the ClientHello with an “encrypted SNI” extension, which is none other than the original SNI extension, but encrypted using a symmetric encryption key derived using the server’s public key, as described below. The server, which owns the private key and can derive the symmetric encryption key as well, can then decrypt the extension and therefore terminate the connection (or forward it to a backend server). Since only the client, and the server it’s connecting to, can derive the encryption key, the encrypted SNI cannot be decrypted and accessed by third parties.</p><p>It’s important to note that this is an extension to TLS version 1.3 and above, and doesn’t work with previous versions of the protocol. The reason is very simple: one of the changes introduced by TLS 1.3 (<a href="https://blog.cloudflare.com/you-get-tls-1-3-you-get-tls-1-3-everyone-gets-tls-1-3/">not without problems</a>) meant moving the Certificate message sent by the server to the encrypted portion of the TLS handshake (before 1.3, it was sent in plaintext). Without this fundamental change to the protocol, an attacker would still be able to determine the identity of the server by simply observing the plaintext certificate sent on the wire.</p><p>The underlying cryptographic machinery involves using the <a href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman key exchange algorithm</a> which allows client and server to generate a shared encryption key over an untrusted channel. 
The encrypted SNI encryption key is thus calculated on the client-side by using the server’s public key (which is actually the public portion of a Diffie-Hellman semi-static key share) and the private portion of an ephemeral Diffie-Hellman share generated by the client itself on the fly and discarded immediately after the ClientHello is sent to the server. Additional data (such as some of the cryptographic parameters sent by the client as part of its ClientHello message) is also mixed into the cryptographic process for good measure.</p><p>The client’s ESNI extension will then include, not only the actual encrypted SNI bits, but also the client’s public key share, the cipher suite it used for encryption and the digest of the server’s ESNI DNS record. On the other side, the server uses its own private key share, and the public portion of the client’s share to generate the encryption key and decrypt the extension.</p><p>While this may seem overly complicated, this ensures that the encryption key is cryptographically tied to the specific TLS session it was generated for, and cannot be reused across multiple connections. This prevents an attacker able to observe the encrypted extension sent by the client from simply capturing it and replaying it to the server in a separate session to unmask the identity of the website the user was trying to connect to (this is known as “cut-and-paste” attack).</p><p>However a compromise of the server’s private key would put all ESNI symmetric keys generated from it in jeopardy (which would allow observers to decrypt previously collected encrypted data), which is why Cloudflare’s own SNI encryption implementation rotates the server’s keys every hour to improve forward secrecy, but keeps track of the keys for the previous few hours to allow for DNS caching and replication delays, so that clients with slightly outdated keys can still use ESNI without problems (but eventually all keys are discarded and forgotten).</p><h3 id="but-wait-dns-for-real">But wait, DNS? For real?</h3><p>The observant reader might have realized that simply using DNS (which is, by default, unencrypted) would make the whole encrypted SNI idea completely pointless: an on-path observer would be able to determine which website the client is connecting to by simply observing the plaintext DNS queries sent by the client itself, whether encrypted SNI was used or not.</p><p>But with the introduction of DNS features such as DNS over TLS (DoT) and DNS over HTTPS (DoH), and of public DNS resolvers that provide those features to their users (such as Cloudflare’s own <a href="https://blog.cloudflare.com/announcing-1111/">1.1.1.1</a>), DNS queries can now be encrypted and protected by the prying eyes of censors and trackers alike.</p><p>However, while responses from DoT/DoH DNS resolvers can be trusted, to a certain extent (evil resolvers notwithstanding), it might still be possible for a determined attacker to poison the resolver’s cache by intercepting its communication with the authoritative DNS server and injecting malicious data. That is, unless both the authoritative server and the resolver support <a href="https://www.cloudflare.com/dns/dnssec/">DNSSEC</a><sub>[1]</sub>. 
Incidentally, Cloudflare’s authoritative DNS servers can sign responses returned to resolvers, and the 1.1.1.1 resolver can verify them.</p><h3 id="what-about-the-ip-address">What about the IP address?</h3><p>While both DNS queries and the TLS SNI extensions can now be protected by on-path attackers, it might still be possible to determine which websites users are visiting by simply looking at the destination IP addresses on the traffic originating from users’ devices. Some of our customers are protected by this to a certain degree thanks to the fact that many Cloudflare domains share the same sets of addresses, but this is not enough and more work is required to protect end users to a larger degree. Stay tuned for more updates from Cloudflare on the subject in the future.</p><h3 id="where-do-i-sign-up">Where do I sign up?</h3><p>Encrypted SNI is now enabled for free on all Cloudflare zones using our name servers, so you don’t need to do anything to enable it on your Cloudflare website. On the browser side, our friends at Firefox tell us that they expect to add encrypted SNI support this week to <a href="https://www.mozilla.org/firefox/channel/desktop/">Firefox Nightly</a> (keep in mind that the encrypted SNI spec is still under development, so it’s not stable just yet).</p><p>By visiting <a href="https://encryptedsni.com">encryptedsni.com</a> you can check how secure your browsing experience is. Are you using secure DNS? Is your resolver validating DNSSEC signatures? Does your browser support TLS 1.3? Did your browser encrypt the SNI? If the answer to all those questions is “yes” then you can sleep peacefully knowing that your browsing is protected from prying eyes.</p><h3 id="conclusion">Conclusion</h3><p>Encrypted SNI, along with TLS 1.3, DNSSEC and DoT/DoH, plugs one of the few remaining holes that enable surveillance and censorship on the Internet. More work is still required to get to a surveillance-free Internet, but we are (slowly) getting there.</p><p><sub>[1]: It's important to mention that DNSSEC could be disabled by BGP route hijacking between a DNS resolver and the TLD server. Last week we <a href="https://blog.cloudflare.com/rpki/">announced</a> our commitment to RPKI and if DNS resolvers and TLDs also implement RPKI, this type of hijacking will be much more difficult</sub>.</p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on all our Birthday Week announcements.</em></p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Birthday-Week-3.png" class="kg-image" alt="Encrypt it or lose it: how encrypted SNI works"></figure>]]></content:encoded></item><item><title><![CDATA[Encrypting SNI: Fixing One of the Core Internet Bugs]]></title><description><![CDATA[Cloudflare launched on September 27, 2010. Since then, we've considered September 27th our birthday. This Thursday we'll be turning 8 years old.
Ever since our first birthday, we've used the occasion to launch new products or services.]]></description><link>https://blog.cloudflare.com/esni/</link><guid isPermaLink="false">5ba718109fbc7c00bf371b57</guid><category><![CDATA[Birthday Week]]></category><category><![CDATA[Product News]]></category><category><![CDATA[Security]]></category><category><![CDATA[Privacy]]></category><category><![CDATA[HTTPS]]></category><category><![CDATA[Reliability]]></category><category><![CDATA[1.1.1.1]]></category><category><![CDATA[DNS]]></category><dc:creator><![CDATA[Matthew Prince]]></dc:creator><pubDate>Mon, 24 Sep 2018 12:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/Cloudflare_esni-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Birthday-Week-2.png" class="kg-image" alt="Encrypting SNI: Fixing One of the Core Internet Bugs"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare_esni-1.png" alt="Encrypting SNI: Fixing One of the Core Internet Bugs"><p></p><p>Cloudflare <a href="https://www.youtube.com/watch?v=bAc_5gMwzuM">launched</a> on September 27, 2010. Since then, we've considered September 27th our birthday. This Thursday we'll be turning 8 years old.</p><p>Ever since our first birthday, we've used the occasion to launch new products or services. Over the years we came to the conclusion that the right thing to do to celebrate our birthday wasn't so much about launching products that we could make money from but instead to do things that were gifts back to our users and the Internet in general. My cofounder Michelle <a href="https://blog.cloudflare.com/cloudflare-turns-8/">wrote about this tradition in a great blog post yesterday</a>.</p><p>Personally, one of my proudest moments at Cloudflare came on our birthday in 2014 when we made <a href="https://blog.cloudflare.com/introducing-universal-ssl/">HTTPS support free for all our users</a>. At the time, people called us crazy — literally and repeatedly. Frankly, internally we had significant debates about whether we were crazy since encryption was the primary reason why people upgraded from a free account to a paid account.</p><p>But it was the right thing to do. The fact that encryption wasn't built into the web from the beginning was, in our mind, a bug. Today, almost exactly four years later, the web is nearly 80% encrypted thanks to leadership from great projects like Let's Encrypt, the browser teams at Google, Apple, Microsoft, and Mozilla, and the fact that more and more hosting and SaaS providers have built in support for HTTPS at no cost. I'm proud of the fact that we were a leader in helping start that trend.</p><p>Today is another day I expect to look back on and be proud of because today we hope to help start a new trend to make the encrypted web more private and secure. To understand that, you have to understand a bit about why the encrypted web as exists today still leaks a lot of your browsing history.</p><h3 id="how-private-is-your-browsing-history">How Private Is Your Browsing History?</h3><p>The expectation when you visit a site over HTTPS is that no one listening on the line between you and where your connection terminates can see what you're doing. And to some extent, that's true. 
If you visit your bank's website, HTTPS is effective at keeping the contents sent to or from the site (for example, your username and password or the balance of your bank account) from being leaked to your ISP or anyone else monitoring your network connection.</p><p>While the contents sent to or received from a HTTPS site are protected, the fact that you visited the site can be observed easily in a couple of ways. Traditionally, one of these has been via DNS. DNS queries are, by default, unencrypted so your ISP or anyone else can see where you're going online. That's why last April, we launched <a href="https://one.one.one.one/">1.1.1.1</a> — a free (and <a href="https://www.dnsperf.com/#!dns-resolvers">screaming fast</a>) public DNS resolver with support for DNS over TLS and DNS over HTTPS.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare_resolver-1111-april-to-sept-2018.png" class="kg-image" alt="Encrypting SNI: Fixing One of the Core Internet Bugs"></figure><p></p><p><a href="https://one.one.one.one/">1.1.1.1</a> has been a huge success and we've significantly increased the percentage of DNS queries sent over an encrypted connection. Critics, however, rightly pointed out that the identity of the sites that you visit still can leak in other ways. The most problematic is something called the Server Name Indication (SNI) extension.</p><h3 id="why-sni">Why SNI?</h3><p>Fundamentally, SNI exists in order to allow you to host multiple encrypted websites on a single IP address. Early browsers didn't include the SNI extension. As a result, when a request was made to establish a HTTPS connection the web server didn't have much information to go on and could only hand back a single SSL certificate per IP address the web server was listening on.</p><p>One solution to this problem was to create certificates with multiple Subject Alternative Names (SANs). These certificates would encrypt traffic for multiple domains that could all be hosted on the same IP. This is how Cloudflare handles HTTPS traffic from older browsers that don't support SNI. We limit that feature to our paying customers, however, for the same reason that SANs aren't a great solution: they're a hack, a pain to manage, and can slow down performance if they include too many domains.</p><p>The more scalable solution was SNI. The analogy that makes sense to me is to think of a postal mail envelope. The contents inside the envelope are protected and can't be seen by the postal carrier. However, outside the envelope is the street address which the postal carrier uses to bring the envelope to the right building. On the Internet, a web server's IP address is the equivalent of the street address.</p><p>However, if you live in a multi-unit building, a street address alone isn't enough to get the envelope to the right recipient. To supplement the street address you include an apartment number or recipient's name. That's the equivalent of SNI. If a web server hosts multiple domains, SNI ensures that a request is routed to the correct site so that the right SSL certificate can be returned to be able to encrypt and decrypt any content.</p><h3 id="nosey-networks">Nosey Networks</h3><p>The specification for SNI was introduced by the IETF in 2003 and browsers rolled out support over the next few years. At the time, it seemed like an acceptable tradeoff. The vast majority of Internet traffic was unencrypted. 
Adding a TLS extension that made it easier to support encryption seemed like a great trade even if that extension itself wasn't encrypted.</p><p>But, today, as HTTPS covers nearly 80% of all web traffic, the fact that SNI leaks every site you go to online to your ISP and anyone else listening on the line has become a glaring privacy hole. Knowing what sites you visit can build a very accurate picture of who you are, creating both privacy and security risks.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare_https_with_plaintext_dns_tls12_plaintext_sni.png" class="kg-image" alt="Encrypting SNI: Fixing One of the Core Internet Bugs"></figure><p></p><p>In the United States, ISPs were briefly restricted in their ability to gather customer browsing data under FCC rules passed at the end of the Obama administration. ISPs, however, lobbied Congress and, in April 2017, President Trump signed a Congressional Resolution repealing those protections. As ISPs increasingly <a href="https://arstechnica.com/information-technology/2017/06/oath-verizon-completes-4-5-billion-buy-of-yahoo-and-merges-it-with-aol/">acquire media companies</a> and <a href="https://www.appnexus.com/company/pressroom/att-to-acquire-appnexus">ad targeting businesses</a>, being able to mine the data flowing through their pipes is an increasingly attractive business for them and an increasingly troubling privacy threat to all of us.</p><h3 id="closing-the-sni-privacy-hole">Closing the SNI Privacy Hole</h3><p>On May 3, about a month after we launched <a href="https://one.one.one.one/">1.1.1.1</a>, I was reading a review of our new service. While the article praised the fact that <a href="https://one.one.one.one/">1.1.1.1</a> was privacy-oriented, it somewhat nihilistically concluded that it was all for naught because ISPs could still spy on you by monitoring SNI. Frustrated, I dashed off an email to some of Cloudflare's engineers and the senior team at Mozilla, who we'd been working on a project to help encrypt DNS. I concluded my email:</p><blockquote>My simple PRD: if Firefox connects to a Cloudflare IP then we'd give you a public key to use to encrypt the SNI entry before sending it to us. How does it scale to other providers? Dunno, but we have to start somewhere. Rough consensus and running code, right?</blockquote><p>It turned out to be <a href="https://blog.cloudflare.com/encrypted-sni">a bit more complex than that</a>. However, today I'm proud to announce that Encrypted SNI (ESNI) is live across Cloudflare's network. Later this week we expect Mozilla's Firefox to become the first browser to support the new protocol in their Nightly release. In the months to come, the plan is for it go mainstream. And it's not just Mozilla. There's been significant interest from all the major browser makers and I'm hopeful they'll all add support for ESNI over time.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare_https_with_secure_dns_tls13_encrytped_sni-1.png" class="kg-image" alt="Encrypting SNI: Fixing One of the Core Internet Bugs"></figure><p></p><h3 id="hoping-to-start-another-trend">Hoping to Start Another Trend</h3><p>While we're the first to support ESNI, we haven't done this alone. We worked on ESNI with great teams from Apple, Fastly, Mozilla, and others across the industry who, like us, are concerned about Internet privacy. 
While Cloudflare is the first content network to support ESNI, this isn't a proprietary protocol. It's being worked on as an <a href="https://datatracker.ietf.org/doc/draft-ietf-tls-esni/?include_text=1">IETF Draft RFC</a> and we are hopeful others will help us formalize the draft and implement the standard as well. If you're curious about the technical details behind ESNI, you can learn more from the <a href="https://blog.cloudflare.com/encrypted-sni/">great blog post just published by my colleague Alessandro Ghedini</a>. Finally, when browser support starts to launch later this week you can test this from our <a href="https://encryptedsni.com">handy ESNI testing tool</a>.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare_esni-2.png" class="kg-image" alt="Encrypting SNI: Fixing One of the Core Internet Bugs"></figure><p></p><p>Four years ago I'm proud that we helped start a trend that today has led to nearly all the web being encrypted. Today, I hope we are again helping start a trend — this time to make the encrypted web even more private and secure.</p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on all our Birthday Week announcements.</em></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Cloudflare Turns 8 — here’s what we mean by a “better Internet”]]></title><description><![CDATA[I have always loved birthdays. It is a chance to get together with loved ones, a chance to have fun and a chance to reflect on anything you want to keep doing or change in the upcoming year.]]></description><link>https://blog.cloudflare.com/cloudflare-turns-8/</link><guid isPermaLink="false">5ba55fc39fbc7c00bf371b34</guid><category><![CDATA[Birthday Week]]></category><category><![CDATA[Product News]]></category><category><![CDATA[Our History]]></category><category><![CDATA[Performance]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Michelle Zatlyn]]></dc:creator><pubDate>Sun, 23 Sep 2018 12:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/Screenshot-2018-09-22-21.32.31.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Birthday-Week.png" class="kg-image" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/Screenshot-2018-09-22-21.32.31.png" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"><p></p><p>I have always loved birthdays. It is a chance to get together with loved ones, a chance to have fun and a chance to reflect on anything you want to keep doing or change in the upcoming year. At Cloudflare, we’ve embraced celebrating our birthday as well. </p><p>This week, Cloudflare turns 8 years old. It feels like just yesterday that Matthew, Lee, Matthieu, Ian, Sri, Chris, Damon and I stepped on<a href="https://blog.cloudflare.com/reflections-on-techcrunch-disrupt-launch/"> stage at Techcrunch Disrupt to launch Cloudflare to the world</a>. Since then, we have celebrated our birthday every year by giving a gift back to our customers and the Internet. This year, we plan to celebrate each day with a new product benefiting our community. Or in other words, it is a weeklong birthday celebration. Like I said, I love birthdays! 
</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Team.jpg" class="kg-image" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"></figure><p><small><em>The Cloudflare team when we launched the service at Techcrunch Disrupt during September 27 to 29, 2010 – Matthieu, Chris, Sri, Ian, Lee, Matthew, Michelle and Damon.</em></small></p>
<p>While I can’t share exactly what we’re releasing every day — after all who doesn’t like a surprise? — I wanted to share some thoughts on how we decide what to release during birthday week. </p><p>Our mission at Cloudflare is to help build a better Internet. That is a big, broad mission that means many things. It means that we push to make Internet properties faster. It means respecting individuals’ privacy. It means making it harder for malicious actors to do bad things. It means helping to make the Internet more reliable. It means supporting new Internet standards and protocols, and making sure they are accessible to everyone. It means democratizing technology and making sure the widest possible group has access to it. It means increasing value for our community, while decreasing their costs. Here is more color on each: </p><h4 id="it-means-that-we-push-to-make-the-internet-faster">It means that we push to make the Internet faster</h4><p>As more applications go online, users expect the interactions to be fast. It is hard to imagine a world where people want a slower Internet experience. It’s the exact opposite — and will only continue. </p><p>Speed means high bandwidth and low latency. As we move along these two axes, more applications emerge. Music on the Internet was unlocked at a certain level of bandwidth. Video required more. Videoconferencing has both bandwidth and latency requirements. These technologies are reshaping entire industries — and having an impact on societies globally. </p><p>What’s exciting to me is that there are a whole host of further applications that will be unlocked as we continue to increase the speed of the internet. One of the things that will enable this is edge computing — moving the cloud closer to the internet visitors. Cloudflare released Workers a year ago (<a href="https://blog.cloudflare.com/code-everywhere-cloudflare-workers/">on our 7th Birthday</a>), and we are so excited by what developers around the world are doing with it. We know a whole new set of applications are being planted right now and will emerge over the next 18 months because of this gained speed. </p><h4 id="it-means-respecting-individual-s-privacy">It means respecting individuals’ privacy</h4><p>When <a href="https://blog.cloudflare.com/announcing-1111/">we announced 1.1.1.1</a>, our fast and private DNS service for consumers, we were blown away by the reception in the marketplace. People do care about their privacy and they are looking for solutions that understand that. When we build a product, we always ask ourselves how does this impact an individual’s privacy? We want to be a leader in terms of privacy.</p><h4 id="it-means-making-it-harder-for-malicious-actors-to-do-bad-things">It means making it harder for malicious actors to do bad things</h4><p>The promise of Cloudflare has been to band businesses, people and organizations together to be stronger than the malicious actors. It’s the first time that the resources for the good guys have outweighed resources for the bad guys. Today, Cloudflare offers a broad security portfolio to its customers and we constantly work to make the services we have better, and to expand our scope. You will see our development in new areas on the security front this upcoming week.</p><h4 id="it-means-helping-to-make-the-internet-more-reliable">It means helping to make the Internet more reliable</h4><p>While speed matters in unlocking new applications, so does reliability. 
There are a whole host of applications that can only be unlocked if they can depend on the internet being there. Transportation is one example; health care is another. If the internet breaks for these applications, life-threatening things can start to happen very quickly, just as they would if power were lost to these applications. But it’s not just cases where lives can be lost — if you’ve been in an office when the wifi has gone out, you’ll know that more and more businesses depend on the internet just to get day-to-day operations done. Cloudflare is committed to being at the forefront of a more reliable internet. </p><h4 id="it-means-supporting-new-standards-and-protocols">It means supporting new standards and protocols</h4><p>The original internet was designed as a decentralized network. One of the principles that enabled this to happen was to have a series of open standards that everyone agreed upon, as opposed to a series of balkanized networks that were all talking their own language. The original set of principles gave everyone a common language. This open set of standards let thousands of ideas bloom, and it is part of what has made the internet so great. We’re committed to that idea. </p><p>At the same time, the Internet is over 35 years old. Many smart, talented engineers around the world have come up with new protocols and standards that are faster and safer than the original protocols. But, getting these new protocols and standards distributed is difficult. We want to help distribute and drive adoption of new standards and protocols, and make access easier for our customers. We’ve done it with HTTPS, SPDY, HTTP2, DNSSEC and there are more to come. </p><h4 id="it-means-making-the-internet-more-accessible-to-everyone">It means making the internet more accessible to everyone</h4><p>It is kind of crazy to think about the amount of timely information that we have access to today because of the Internet. And by and large, how it’s possible to communicate with any other person on the planet. But this only holds true if everyone is able to access the Internet. What do we mean by that? Well, it in turn breaks down into two further principles: democratization and affordability. </p><h4 id="it-means-democratizing-technology-and-making-sure-the-widest-possible-group-has-access-to-it">It means democratizing technology and making sure the widest possible group has access to it</h4><p>It’s one thing to have an open standard. That, in theory, allows anyone who understands the standard to participate. But go back to the early days of the web, and you really had to be a “techie” to be able to participate. </p><p>We’ve come a long way since those days; in terms of user clients, we’ve gone from a command line interface to a supercomputer with touch screens in our pockets. But there’s more to democratizing technology than just making it easier from the perspective of a consumer. There are also all the small businesses that are now possible, that were not previously so, because these entrepreneurs can use the internet to directly reach customers. It’s enabled all sorts of products and services that were not previously possible. </p><p>Many of those businesses would not be able to start if the tools and infrastructure required to get going are beyond their technical grasp. One of the things that Cloudflare has been committed to from the start is taking complicated and technical solutions and making them easy enough for a non-technical person to use. 
We have wanted to expand the number of Internet properties who have access to these services. Millions of customers around the world fit this profile. We might have one of the fastest and most secure networks on the web fit for enterprises like New York Stock Exchange and IBM. But if you’re a one man shop just getting started, you shouldn’t need an IT team to be able to make your website fast and secure. With Cloudflare, you don’t have to. </p><h4 id="it-means-increasing-value-for-our-community-while-decreasing-their-costs">It means increasing value for our community, while decreasing their costs</h4><p>As the Internet grows, it becomes more valuable, and capabilities become lower cost. This is one of the powers of network effects. We have many examples of this at Cloudflare. We want more connections to other Internet providers around the world so that we can pass bandwidth savings along to our customers. Or, last year during our 7th Birthday, we pushed our <a href="https://blog.cloudflare.com/unmetered-mitigation/">DDoS mitigation technology to all of our plans</a>, including the Free plan. This is technology that used to cost at least $10K/month. We are always looking to deliver more value to our customers. It is a daily topic around Cloudflare.</p><p>So, back to our Birthday Week. Every announcement this week ties back to helping to build a better Internet in some way. Here is a preview of this week’s releases: </p><ul><li>On Monday, we are releasing something that will make the Internet more private and secure for every user. </li><li>On Tuesday, we are leading the way democratizing a new Internet standard, while also making the Internet faster. </li><li>On Wednesday, we are bringing together a coalition of partners to help our customers lower their infrastructure costs — dramatically. </li><li>On Thursday, our actual birthday, we are releasing a new service we hope you’ll love that provides something that every one of our customers needs, but now with the best security and lowest price. </li><li>On Friday, we are releasing a new product that pushes the power of the Internet forward by making it more programmable. </li></ul><p>I often get asked what makes Cloudflare special? My answer always comes back to the people I work with and our partners who work passionately to delight our customers. The Cloudflare team comes to work every day to solve the tough challenges of the Internet to ultimately help build a better Internet going forward. This week, I am excited to share our work with all of you. </p><p>Every day, we will be posting a blog post at 1200 UTC with that day’s announcement. We will do a round up at the end of the week as well. I can’t wait to hear what you think! </p><hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/image5-1.jpg" class="kg-image" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"></figure><p><small><em>The three Cloudflare co-founders: Matthew Prince, Michelle Zatlyn and Lee Holloway</em></small></p>
<hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/image7-1.jpg" class="kg-image" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"></figure><p><small><em>Launching Cloudflare at Techcrunch Disrupt in September 2010 to a panel of esteemed judges</em></small></p>
<hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/image6-1.jpg" class="kg-image" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"></figure><p><small><em>Matthew Prince, our CEO, presenting Cloudflare to a group of entrepreneurs.</em></small></p>
<hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/image3-1.jpg" class="kg-image" alt="Cloudflare Turns 8 — here’s what we mean by a “better Internet”"></figure><p><small><em>The three co-founders, Michelle Zatlyn, Lee Holloway and Matthew Prince, at one of our office openings early on</em></small></p>
<p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on all our Birthday Week announcements.</em></p>]]></content:encoded></item><item><title><![CDATA[Roughtime: Securing Time with Digital Signatures]]></title><description><![CDATA[When you visit a secure website, it offers you a TLS certificate that asserts its identity. Every certificate has an expiration date, and when it’s passed due, it is no longer valid.]]></description><link>https://blog.cloudflare.com/roughtime/</link><guid isPermaLink="false">5ba16b8ac24d3800bf438c51</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[TLS]]></category><category><![CDATA[OCSP]]></category><dc:creator><![CDATA[Christopher Patton]]></dc:creator><pubDate>Fri, 21 Sep 2018 12:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/roughtime-copy@3.5x-3.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Roughtime-.svg" class="kg-image" alt="Roughtime: Securing Time with Digital Signatures"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/roughtime-copy@3.5x-3.png" alt="Roughtime: Securing Time with Digital Signatures"><p></p><p>When you visit a secure website, it offers you a TLS certificate that asserts its identity. Every certificate has an expiration date, and when it’s passed due, it is no longer valid. The idea is almost as old as the web itself: limiting the lifetime of certificates is meant to reduce the risk in case a TLS server’s secret key is compromised.</p><p>Certificates aren’t the only cryptographic artifacts that expire. When you visit a site protected by Cloudflare, we also tell you whether its certificate has been revoked (see our <a href="https://blog.cloudflare.com/high-reliability-ocsp-stapling/">blog post</a> on OCSP stapling) — for example, due to the secret key being compromised — and this value (a so-called OCSP staple) has an expiration date, too.</p><p>Thus, to determine if a certificate is valid and hasn’t been revoked, your system needs to know the current time. Indeed, time is crucial for the security of TLS and myriad other protocols. To help keep clocks in sync, we are announcing a free, high-availability, and low-latency authenticated time service called <a href="https://roughtime.googlesource.com/roughtime">Roughtime</a>, available at <a href="https://roughtime.cloudflare.com">roughtime.cloudflare.com</a> on port 2002.</p><h2 id="time-is-tricky">Time is tricky</h2><p>It may surprise you to learn that, in practice, clients’ clocks are heavily skewed. A <a href="https://acmccs.github.io/papers/p1407-acerA.pdf">recent study of Chrome users</a> showed that a significant fraction of reported TLS-certificate errors are caused by client-clock skew. During the period in which error reports were collected, 6.7% of client-reported times were behind by more than 24 hours. (0.05% were ahead by more than 24 hours.) This skew was a causal factor for at least 33.5% of the sampled reports from Windows users, 8.71% from Mac OS, 8.46% from Android, and 1.72% from Chrome OS. These errors are usually presented to users as warnings that the user can click through to get to where they’re going. 
However, showing too many warnings makes users grow accustomed to clicking through them; <a href="https://en.wikipedia.org/wiki/Alarm_fatigue">this is risky</a>, since these warnings are meant to keep users away from malicious websites.</p><p>Clock skew also holds us back from improving the security of certificates themselves. We’d like to issue certificates with shorter lifetimes because the less time the certificate is valid, the lower the risk of the secret key being exposed. (This is why Let’s Encrypt issues certificates valid for just <a href="https://letsencrypt.org/2015/11/09/why-90-days.html">90 days by default</a>.) But the long tail of skewed clocks limits the effective lifetime of certificates; shortening the lifetime too much would only lead to more warnings.</p><p>Endpoints on the Internet often synchronize their clocks using a protocol like the <a href="https://en.wikipedia.org/wiki/Network_Time_Protocol">Network Time Protocol</a> (NTP). NTP aims for precise synchronization, and even accounts for network latency. However, it is usually deployed without security features, as the added overhead on high-load servers <a href="https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/dowling">degrades precision significantly</a>. As a result, a man-in-the-middle attacker between the client and server can easily influence the client’s clock. By moving the client back in time, the attacker can force it to accept expired (and possibly compromised) certificates; by moving forward in time, it can force the client to accept a certificate that is <em>not yet</em> valid.</p><p>Fortunately, for settings in which both security and precision are paramount, workable solutions are <a href="https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/dowling">on the horizon</a>. But for many applications, precise network time isn’t essential; it suffices to be <em>accurate</em>, say, within 10 seconds of real time. This observation is the primary motivation of Google’s <a href="https://roughtime.googlesource.com/roughtime">Roughtime</a> protocol, a simple protocol by which clients can synchronize their clocks with one or more authenticated servers. Roughtime lacks the precision of NTP, but aims to be accurate enough for cryptographic applications, and since the responses are authenticated, man-in-the-middle attacks aren’t possible.</p><p>The protocol is designed to be simple and flexible. A client can get Roughtime from just one server it trusts, or it may contact many servers to make its calculation more robust. But its most distinctive feature is that it adds <em>accountability</em> to time servers. If a server misbehaves by providing the wrong time, then the protocol allows clients to produce publicly verifiable, cryptographic proof of this misbehavior. Making servers auditable in this manner makes them accountable to provide accurate time.</p><p>We are deploying a Roughtime service for two reasons.</p><p>First, the clock we use for this service is the same as the clock we use to determine whether our customers’ certificates are valid and haven’t been revoked; as a result, exposing this service makes us accountable for the validity of TLS artifacts we serve to clients on behalf of our customers.</p><p>Second, Roughtime is a great idea whose time has come. But it is only useful if several independent organizations participate; the more Roughtime servers there are, the more robust the ecosystem becomes. 
Our hope is that putting our weight behind it will help the Roughtime ecosystem grow.</p><h2 id="the-roughtime-protocol">The Roughtime protocol</h2><p>At its most basic level, Roughtime is a one-round protocol in which the client requests the current time and the server sends a signed response. The response is comprised of a timestamp (the number of microseconds since the Unix epoch) and a <em>radius</em> (in microseconds) used to indicate the server’s certainty about the reported time. For example, a radius of 1,000,000μs means the server is reasonably sure that the true time is within one second of the reported time.</p><p>The server proves freshness of its response as follows. The request consists of a short, random string commonly called a <em>nonce</em> (pronounced /<a href="https://www.merriam-webster.com/dictionary/nonce">nän(t)s</a>/, or sometimes /ˈen wən(t)s/). The server incorporates the nonce into its signed response so that it’s needed to verify the signature. If the nonce is sufficiently long (say, 16 bytes), then the number of possible nonces is so large that it’s extremely unlikely the server has encountered (or will ever encounter) a request with the same nonce. Thus, a valid signature serves as cryptographic proof that the response is fresh.</p><p>The client uses the server’s <em>root public key</em> to verify the signature. (The key is obtained out-of-band; you can get our key <a href="https://developers.cloudflare.com/roughtime/docs/usage/">here</a>.) When the server starts, it generates an online public/secret key pair; the root secret key is used to create a delegation for the online public key, and the online secret key is used to sign the response. The delegation serves the same function as a traditional <a href="https://en.wikipedia.org/wiki/X.509">X.509</a> certificate on the web: as illustrated in the figure below, the client first uses the root public key to verify the delegation, then uses the online public key to verify the response. This allows for operational separation of the delegator and the server and limits exposure of the root secret key.</p><hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Roughtime-1.png" class="kg-image" alt="Roughtime: Securing Time with Digital Signatures"><figcaption>Simplified Roughtime (without delegation)</figcaption></figure><hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Roughtime-2.png" class="kg-image" alt="Roughtime: Securing Time with Digital Signatures"><figcaption>Roughtime with delegation</figcaption></figure><hr><p>Roughtime offers two features designed to make it scalable. First, when the volume of requests is high, the server may batch-sign a number of clients’ requests by constructing a <a href="https://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a> from the nonces. The server signs the root of the tree and sends in its response the information needed to prove to the client that its request is in the tree. (The data structure is a binary tree, so the amount of information is proportional to the base-2 logarithm of the number of requests in the batch; see the figure below) Second, the protocol is executed over UDP. In order to prevent the Roughtime server from being an amplifier for <a href="https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/">DDoS attacks</a>, the request is padded to 1KB; if the UDP packet is too short, then it’s dropped without further processing. 
Check out <a href="https://int08h.com/post/to-catch-a-lying-timeserver/">this blog post</a> for a more in-depth discussion.</p><hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Roughtime-3.png" class="kg-image" alt="Roughtime: Securing Time with Digital Signatures"><figcaption>Roughtime with batching</figcaption></figure><hr><h3 id="using-roughtime">Using Roughtime</h3><p>The protocol is flexible enough to support a variety of use cases. A web browser could use a Roughtime server to proactively synchronize its clock when validating TLS certificates. It could also be used retroactively to avoid showing the user too many warnings: when a certificate validation error occurs — in particular, when the browser believes it’s expired or not yet valid — Roughtime could be used to determine if the clock skew was the root cause. Instead of telling the user the certificate is invalid, it could tell the user that their clock is incorrect.</p><p>Using just one server is sufficient if that server is trustworthy, but a security-conscious user could make requests to many servers; the delta might be computed by eliminating outliers and averaging the responses, or by some <a href="https://roughtime.googlesource.com/roughtime/+/master/go/client/">more sophisticated method</a>. This makes the calculation robust to one or more of the servers misbehaving.</p><h3 id="making-servers-accountable">Making servers accountable</h3><p>The real power of Roughtime is that it’s auditable. Consider the following mode of operation. The client has a list of servers it will query in a particular order. The client generates a random string — called a blind in the parlance of Roughtime — hashes it, and uses the output as the nonce for its request to the server. For subsequent requests, it computes the nonce as follows: generate a blind, compute the hash of this string and the response from the previous server (including the timestamp and signature), and use this hash as the nonce for the next request.</p><hr><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Cloudflare-Roughtime-4.png" class="kg-image" alt="Roughtime: Securing Time with Digital Signatures"><figcaption>Chaining multiple Roughtime servers</figcaption></figure><hr><p>Creating a chain of timestamps in this way binds each response to the response that precedes it. Thus, the sequence of blinds and signatures constitutes a publicly verifiable, cryptographic proof that the timestamps were requested in order (a “clockchain” if you will 😉). If the servers are roughly synchronized, then we expect the sequence to increase monotonically, at least roughly. If one of the servers were consistently behind or ahead of the others, then this would be evident in the sequence. Suppose you get the following sequence of timestamps, each from different servers:</p><style type="text/css">
.table-with-last-column-left-aligned tr td:last-child {
text-align: left;
}
</style>
<table class="table-with-last-column-left-aligned">
<tbody>
<tr>
<th width="33%">Server
</th><th width="67%">Timestamp
</th></tr>
<tr>
<td>ServerA-Roughtime</td>
<td>2018-08-29 14:51:50 -0700 PDT</td>
</tr>
<tr>
<td>ServerB-Roughtime</td>
<td>2018-08-29 14:51:51 -0700 PDT +0:00:01</td>
</tr>
<tr>
<td>Cloudflare-Roughtime</td>
<td>2018-08-29 12:51:52 -0700 PDT -1:59:59</td>
</tr>
<tr>
<td>ServerC-Roughtime</td>
<td>2018-08-29 14:51:53 -0700 PDT +2:00:01</td>
</tr>
</tbody>
</table>
<p>Servers B and C corroborate the time given by server A, but — oh no! Cloudflare is two hours behind! Unless servers A, B, and C are in cahoots, it’s likely that the time offered by Cloudflare is incorrect. Moreover, you have verifiable, cryptographic proof. In this way, the Roughtime protocol makes our server (and all Roughtime servers) accountable to provide accurate time, or, at least, to be in sync with the others.</p><h2 id="the-roughtime-ecosystem">The Roughtime ecosystem</h2><p>The infrastructure for monitoring and auditing the <a href="https://roughtime.googlesource.com/roughtime/+/HEAD/ECOSYSTEM.md">Roughtime ecosystem</a> hasn’t been built yet. Right now there’s only a handful of servers: in addition to Cloudflare’s and <a href="https://roughtime.googlesource.com/roughtime/+/master/roughtime-servers.json">Google’s</a>, there’s also a really nice <a href="https://github.com/int08h/roughenough">Rust implementation</a>. The more diversity there is, the healthier the ecosystem becomes. We hope to see more organizations adopt this protocol.</p><h3 id="cloudflare-s-roughtime-service">Cloudflare’s Roughtime service</h3><p>For the initial deployment of this service, our primary goals are to ensure high availability and minimal maintenance overhead. Each machine at each Cloudflare location executes an instance of the service and responds to queries using its system clock. The server signs each request individually rather than batch-signing them as described above; we rely on our load balancer to ensure no machine is overwhelmed. There are three ways in which we envision this service could be used:</p><ol><li><em>TLS authentication</em>. When a TLS application (a web browser for example) starts, it could make a request to roughtime.cloudflare.com and compute the difference between the reported time and its system time. Whenever it authenticates a TLS server, it would add this difference to the system time to get the current time.</li><li><em>Roughtime daemon</em>. One could implement an OS daemon that periodically requests the current time. If the reported time differs from the system time by more than a second, it might issue an alert.</li><li><em>Server auditing</em>. As the <a href="https://roughtime.googlesource.com/roughtime/+/HEAD/ECOSYSTEM.md">Roughtime ecosystem</a> grows, it will be important to ensure that all of the servers are in sync. Individuals or organizations may take it upon themselves to monitor the ecosystem and ensure that the servers are in sync with one another.</li></ol><p>The service is reachable wherever you are via our anycast network. This is important for a service like Roughtime, because minimizing network latency helps improve accuracy. For information about how to configure a client to use Cloudflare-Roughtime, check out the <a href="https://developers.cloudflare.com/roughtime/">developer documentation</a>. Note that our initial release is somewhat experimental. As such, our root public key may change in the future. See the developer docs for information on obtaining the current public key.</p><p>If you want to see what time our Roughtime server returns, click the button below!</p><script type="text/javascript">
function getTime() {
  // Show a placeholder while the request is in flight.
  document.getElementById("time-txt-box").innerHTML = "Loading..."
  // /cdn-cgi/trace returns key=value lines, including ts=, the Unix timestamp
  // on the responding Cloudflare server.
  fetch("/cdn-cgi/trace").then((res) => res.text()).then((txt) => {
    let ts = txt.match(/ts=([0-9\.]+)/)[1]
    // Convert the epoch seconds to milliseconds and render as a local date.
    let str = new Date(parseFloat(ts) * 1000)
    document.getElementById("time-txt-box").innerHTML = str
  })
  .catch((err) => {
    console.log(err)
    document.getElementById("time-txt-box").innerHTML = "Request failed."
  })
}
</script>
<p><button onclick="getTime()">Get Time!</button> <br>
<span id="time-txt-box"></span></p>
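<p>For readers who want to experiment with the auditing idea described under “Making servers accountable” above, here is a minimal Go sketch of the blind/nonce chaining. It illustrates the construction only: the real protocol fixes the hash function, nonce length, and exact field ordering, and the helper below is our own rather than part of any Roughtime client.</p><pre><code>// Sketch of chained Roughtime queries: each nonce commits to the previous
// server's signed response, so the collected (blind, response) pairs form a
// publicly verifiable, ordered chain of timestamps.
package main

import (
    "crypto/rand"
    "crypto/sha512"
    "fmt"
)

// nextNonce derives the nonce for the next query from a fresh random blind
// and the previous server's full signed response (empty for the first query).
func nextNonce(prevResponse []byte) (nonce, blind []byte) {
    blind = make([]byte, 64)
    rand.Read(blind)
    h := sha512.New()
    h.Write(prevResponse) // binds this query to the reply that came before it
    h.Write(blind)
    return h.Sum(nil), blind
}

func main() {
    var prev []byte // no previous response before the first server
    for i := 0; i &lt; 3; i++ {
        nonce, blind := nextNonce(prev)
        fmt.Printf("server %d: nonce=%x... blind=%x...\n", i, nonce[:4], blind[:4])
        // A real client would send the nonce to server i, verify the signed
        // response, keep (blind, response) as audit evidence, and feed the
        // response into the next iteration; a placeholder stands in for it here.
        prev = []byte(fmt.Sprintf("signed response %d", i))
    }
}
</code></pre>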
<p></p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week-1-1-3.png" class="kg-image" alt="Roughtime: Securing Time with Digital Signatures"></figure><h4></h4>]]></content:encoded></item><item><title><![CDATA[Introducing the Cloudflare Onion Service]]></title><description><![CDATA[Two years ago this week Cloudflare introduced Opportunistic Encryption, a feature that provided additional security and performance benefits to websites that had not yet moved to HTTPS.]]></description><link>https://blog.cloudflare.com/cloudflare-onion-service/</link><guid isPermaLink="false">5ba15aaec24d3800bf438c47</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[Tor]]></category><category><![CDATA[Privacy]]></category><category><![CDATA[Privacy Pass]]></category><dc:creator><![CDATA[Mahrud Sayrafi]]></dc:creator><pubDate>Thu, 20 Sep 2018 12:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/unnamed-2.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/unnamed-1.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><ul><li><strong>When</strong>: a cold San Francisco summer afternoon</li><li><strong>Where</strong>: Room <a href="https://httpstat.us/305">305</a>, Cloudflare</li><li><strong>Who</strong>: 2 from Cloudflare + 9 from the Tor Project </li></ul><img src="https://blog.cloudflare.com/content/images/2018/09/unnamed-2.png" alt="Introducing the Cloudflare Onion Service"><p>What could go wrong?</p><h3 id="bit-of-background">Bit of Background</h3><p>Two years ago this week Cloudflare introduced <a href="https://blog.cloudflare.com/opportunistic-encryption-bringing-http-2-to-the-unencrypted-web/">Opportunistic Encryption</a>, a feature that provided additional security and performance benefits to websites that had not yet moved to HTTPS. Indeed, back in the old days some websites only used HTTP --- weird, right? “Opportunistic” here meant that the server advertised support for HTTP/2 via an <a href="https://tools.ietf.org/html/rfc7838">HTTP Alternative Service</a> header in the hopes that any browser that recognized the protocol could take advantage of those benefits in subsequent requests to that domain. </p><p>Around the same time, CEO Matthew Prince <a href="https://blog.cloudflare.com/the-trouble-with-tor/">wrote</a> about the importance and challenges of privacy on the Internet and tasked us to find a solution that provides <strong>convenience</strong>, <strong>security</strong>, and <strong>anonymity</strong>. </p><p>From neutralizing fingerprinting vectors and everyday browser trackers that <a href="https://www.eff.org/privacybadger">Privacy Badger</a> feeds on, all the way to mitigating correlation attacks that only big actors are capable of, guaranteeing privacy is a complicated challenge. Fortunately, the <a href="https://www.torproject.org/">Tor Project</a> addresses this extensive <a href="https://www.torproject.org/projects/torbrowser/design/#adversary">adversary model</a> in Tor Browser. 
</p><p>However, the Internet is full of bad actors, and distinguishing legitimate traffic from malicious traffic, which is one of Cloudflare’s core features, becomes much more difficult when the traffic is anonymous. In particular, many features that make Tor a great tool for privacy also make it a tool for hiding the source of malicious traffic. That is why many resort to using CAPTCHA challenges to make it more expensive to be a bot on the Tor network. There is, however, collateral damage associated with using CAPTCHA challenges to stop bots: human eyes also have to deal with them. </p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Captcha-Example.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><p>One way to minimize this is using privacy-preserving cryptographic signatures, aka blinded tokens, such as those that power <a href="https://blog.cloudflare.com/privacy-pass-the-math/">Privacy Pass</a>. </p><p>The other way is to use onions. </p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Onion-Cloudflare.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><h3 id="here-come-the-onions">Here Come the Onions</h3><p>Today’s edition of the Crypto Week introduces an “opportunistic” solution to this problem, so that under suitable conditions, anyone using <a href="https://blog.torproject.org/new-release-tor-browser-80">Tor Browser 8.0</a> will benefit from improved security and performance when visiting Cloudflare websites without having to face a CAPTCHA. At the same time, this feature enables more fine-grained rate-limiting to prevent malicious traffic, and since the mechanics of the idea described here are not specific to Cloudflare, anyone can <a href="https://github.com/mahrud/caddy-altonions">reuse this method</a> on their own website.</p><p>Before we continue, if you need a refresher on what Tor is or why we are talking about onions, check out the <a href="https://www.torproject.org/about/overview.html.en">Tor Project</a> website or our own blog post on the <a href="https://blog.cloudflare.com/welcome-hidden-resolver/">DNS resolver onion</a> from June.</p><p>As Matthew mentioned in his blog post, one way to sift through Tor traffic is to use the <a href="https://www.torproject.org/docs/onion-services.html.en">onion service</a> protocol. Onion services are Tor nodes that advertise their public key, encoded as an address with the .onion TLD, and use “rendezvous points” to establish connections entirely within the Tor network: </p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Tor-network-example-1.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><p>While onion services are designed to provide anonymity for content providers, <a href="https://securedrop.org/directory/">media organizations</a> use them to allow whistleblowers to communicate securely with them and <a href="https://www.facebook.com/notes/protect-the-graph/making-connections-to-facebook-more-secure/1526085754298237">Facebook</a> uses one to tell Tor users from bots.</p><p>The technical reason why this works is that from an onion service’s point of view each individual Tor connection, or circuit, has a unique but ephemeral number associated with it, while from a normal server’s point of view all Tor requests made via one exit node share the same IP address. 
Using this circuit number, onion services can distinguish individual circuits and terminate those that seem to behave maliciously. To clarify, this does not mean that onion services can identify or track Tor users.</p><p>While bad actors can still establish a fresh circuit by repeating the rendezvous protocol, doing so involves a cryptographic key exchange that costs time and computation. Think of this like a cryptographic <a href="https://en.wikipedia.org/wiki/File:Dial_up_modem_noises.ogg">dial-up</a> sequence. Spammers can dial our onion service over and over, but every time they have to repeat the key exchange.</p><p>Alternatively, finishing the rendezvous protocol can be thought of as a small proof of work required in order to use the Cloudflare Onion Service. This increases the cost of using our onion service for performing denial of service attacks.</p><h3 id="problem-solved-right">Problem solved, right?</h3><p>Not quite. As discussed when we introduced the <a href="https://blog.cloudflare.com/welcome-hidden-resolver/">hidden resolver</a>, the problem of ensuring that a seemingly random .onion address is correct is a barrier to usable security. In that case, our solution was to purchase an <a href="https://www.digicert.com/extended-validation-ssl.htm">Extended Validation</a> (EV) certificate, which costs considerably more. Needless to say, this limits who can buy an HTTPS certificate for their onion service to a <a href="https://crt.sh/?Identity=%25.onion">privileged few</a>. </p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Address-Bar.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><p> Some people <a href="https://cabforum.org/pipermail/public/2017-November/012451.html">disagree</a>. In particular, the <a href="https://blog.torproject.org/tors-fall-harvest-next-generation-onion-services">new generation</a> of onion services resolves the weakness that Matthew pointed to as a possible reason why the CA/B Forum <a href="https://cabforum.org/2015/02/18/ballot-144-validation-rules-dot-onion-names/">only permits</a> EV certificates for onion services. This could mean that getting Domain Validation (DV) certificates for onion services could be possible soon. We certainly hope that’s the case. </p><p>Still, DV certificates lack the organization name (e.g. “Cloudflare, Inc.”) that appears in the address bar, and cryptographically relevant numbers are nearly impossible to remember or distinguish for humans. This brings us back to the problem of usable security, so we came up with a different idea.</p><h3 id="looking-at-onion-addresses-differently">Looking at onion addresses differently</h3><p>Forget for a moment that we’re discussing anonymity. When you type “cloudflare.com” in a browser and press enter, your device first resolves that domain name into an IP address, then your browser asks the server for a certificate valid for “cloudflare.com” and attempts to establish an encrypted connection with the host. As long as the certificate is trusted by a certificate authority, there’s no reason to mind the IP address.</p><p>Roughly speaking, the idea here is to simply switch the IP address in the scenario above with an .onion address. As long as the certificate is valid, the .onion address itself need not be manually entered by a user or even be memorable. 
Indeed, the fact that the certificate was valid indicates that the .onion address was correct.</p><p>In particular, in the same way that a single IP address can serve millions of domains, a single .onion address should be able to serve any number of domains.</p><p>Except, DNS doesn’t work this way.</p><h3 id="how-does-it-work-then">How does it work then?</h3><p>Just as with Opportunistic Encryption, we can point users to the Cloudflare Onion Service using <a href="https://tools.ietf.org/html/rfc7838">HTTP Alternative Services</a>, a mechanism that allows servers to tell clients that the service they are accessing is available at another network location or over another protocol. For instance, when Tor Browser makes a request to “cloudflare.com,” Cloudflare adds an Alternative Service header to indicate that the site is available to access over HTTP/2 via our onion services.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/tor-resquest@2x.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><p>In the same sense that Cloudflare owns the IP addresses that serve our customers’ websites, we run 10 .onion addresses. Think of them as 10 Cloudflare points of presence (or PoPs) within the Tor network. The exact header looks something like this, except with all 10 .onion addresses included, each starting with the prefix “cflare”:</p><pre><code>alt-svc: h2=&quot;cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd.onion:443&quot;; ma=86400; persist=1
</code></pre>
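<p>Since the mechanics are not specific to Cloudflare, a site that runs its own onion service could advertise it the same way. Here is a minimal Go sketch of an origin that sets such a header; it is only an illustration (the caddy-altonions plugin linked above is a more complete example), and the single onion address shown is one of Cloudflare’s, listed at the end of this post, so you would substitute your own address and port.</p><pre><code>// Minimal sketch: advertise an onion alternative service on every response.
package main

import "net/http"

// One of Cloudflare's onion addresses, used here purely as an example value;
// replace it with the address of your own onion service.
const onionAltSvc = `h2="cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd.onion:443"; ma=86400; persist=1`

func main() {
    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        // Clients that understand the header (e.g. Tor Browser 8.0) may use
        // the onion service for subsequent requests to this hostname.
        w.Header().Set("Alt-Svc", onionAltSvc)
        w.Write([]byte("hello, with or without onions\n"))
    })
    http.ListenAndServe(":8080", nil)
}
</code></pre><p>Back to the header Cloudflare sends:</p>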
<p>This simply indicates that the “cloudflare.com” can be authoritatively accessed using HTTP/2 (“h2”) via the onion service “cflare2n[...].onion”, over virtual port 443. The field “ma” (max-age) indicates how long in seconds the client should remember the existence of the alternative service and “persist” indicates whether alternative service cache should be cleared when the network is interrupted.</p><p>Once the browser receives this header, it attempts to make a new Tor circuit to the onion service advertised in the alt-svc header and confirm that the server listening on virtual port 443 can present a valid certificate for “cloudflare.com” — that is, the original hostname, not the .onion address.</p><p>The onion service then relays the Client Hello packet to a local server which can serve a certificate for “cloudflare.com.” This way the Tor daemon itself can be very minimal. Here is a sample configuration file:</p><pre><code>SocksPort 0
HiddenServiceNonAnonymousMode 1
HiddenServiceSingleHopMode 1
# A HiddenServiceDir is required for the service options below; the path here is only an example
HiddenServiceDir /var/lib/tor/onion_service
HiddenServiceVersion 3
HiddenServicePort 443
SafeLogging 1
Log notice stdout
</code></pre>
<p>Be careful with using the configuration above, as it enables a non-anonymous setting for onion services that do not require anonymity for themselves. To clarify, this does not sacrifice the privacy or anonymity of Tor users, just the server. Plus, it improves the latency of the circuits.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Tor-Onion-Service-Cloudflare.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><p>If the certificate is signed by a trusted certificate authority, for any subsequent requests to “cloudflare.com” the browser will connect using HTTP/2 via the onion service, sidestepping the need for going through an exit node.</p><p>Here are the steps summarized one more time:</p><ol><li>A new Tor circuit is established;</li><li>The browser sends a Client Hello to the onion service with SNI=cloudflare.com;</li><li>The onion service relays the packet to a local server;</li><li>The server replies with Server Hello for SNI=cloudflare.com;</li><li>The onion service relays the packet to the browser;</li><li>The browser verifies that the certificate is valid.</li></ol><p>To reiterate, the certificate presented by the onion service only needs to be valid for the original hostname, meaning that the onion address need not be mentioned anywhere on the certificate. This is a huge benefit, because it allows you to, for instance, present a free <a href="https://letsencrypt.org">Let’s Encrypt</a> certificate for your .org domain rather than an expensive EV certificate.</p><p>Convenience, ✓</p><h3 id="distinguishing-the-circuits">Distinguishing the Circuits</h3><p>Remember that while one exit node can serve many, many different clients, from Cloudflare’s point of view all of that traffic comes from one IP address. This pooling helps cover the malicious traffic among legitimate traffic, but isn’t essential to the security or privacy of Tor. In fact, it can potentially hurt users by exposing their traffic to <a href="https://trac.torproject.org/projects/tor/wiki/doc/ReportingBadRelays">bad exit nodes</a>.</p><p>Remember that Tor circuits to onion services carry a circuit number which we can use to rate-limit the circuit. Now, the question is how to inform a server such as nginx of this number with minimal effort. As it turns out, with only a <a href="https://github.com/torproject/tor/pull/343/">small tweak</a> in the Tor binary, we can insert a <a href="https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt">Proxy Protocol</a> header at the beginning of each connection that is forwarded to the server. This protocol is designed to help TCP proxies pass on parameters that can be lost in translation, such as source and destination IP addresses, and is already supported by nginx, Apache, Caddy, etc.</p><p>Luckily for us, the IPv6 space is so vast that we can encode the Tor circuit number as an IP address in an unused range and use the Proxy Protocol to send it to the server. Here is an example of the header that our Tor daemon would insert in the connection:</p><pre><code>PROXY TCP6 2405:8100:8000:6366:1234:ABCD ::1 43981 443\r\n
</code></pre>
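<p>For the curious, here is a minimal Go sketch (our own illustration, not part of any Cloudflare or Tor software) of how a backend behind the onion service might pull the circuit number out of such a header; the source address from the example is written out below as a full IPv6 literal.</p><pre><code>// Recover the circuit number from a Proxy Protocol v1 header of the form
// "PROXY TCP6 src dst srcport dstport".
package main

import (
    "encoding/binary"
    "fmt"
    "net"
    "strings"
)

// circuitNumber returns the identifier encoded in the last 32 bits of the
// source address of the proxy line.
func circuitNumber(proxyLine string) (uint32, error) {
    fields := strings.Fields(proxyLine)
    if len(fields) != 6 || fields[0] != "PROXY" || fields[1] != "TCP6" {
        return 0, fmt.Errorf("unexpected proxy header: %q", proxyLine)
    }
    ip := net.ParseIP(fields[2])
    if ip == nil {
        return 0, fmt.Errorf("bad source address: %q", fields[2])
    }
    return binary.BigEndian.Uint32(ip.To16()[12:16]), nil
}

func main() {
    // The example header, with the source address expanded to a full literal.
    line := "PROXY TCP6 2405:8100:8000:6366::1234:abcd ::1 43981 443"
    n, err := circuitNumber(line)
    if err != nil {
        panic(err)
    }
    fmt.Printf("circuit number: 0x%08X\n", n) // prints 0x1234ABCD
}
</code></pre>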
<p>In this case, 0x1234ABCD encodes the circuit number in the last 32 bits of the source IP address. The local Cloudflare server can then transparently use that IP to assign reputation, show CAPTCHAs, or block requests when needed.</p><p>Note that even though requests relayed by an onion service don’t carry an IP address, you will see an IP address like the one above with country code “T1” in your logs. This IP only specifies the circuit number seen by the onion service, not the actual user IP address. In fact, 2405:8100:8000::/48 is an unused subnet allocated to Cloudflare that we are not routing globally for this purpose.</p><p>This enables customers to continue detecting bots using IP reputation while sparing humans the trouble of clicking on CAPTCHA street signs over and over again.</p><p>Security, ✓</p><h3 id="why-should-i-trust-cloudflare">Why should I trust Cloudflare?</h3><p>You don’t need to. The Cloudflare Onion Service presents the exact same certificate that we would have used for direct requests to our servers, so you could audit this service using Certificate Transparency (which includes <a href="https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/">Nimbus</a>, our certificate transparency log), to reveal any potential cheating.</p><p>Additionally, since Tor Browser 8.0 makes a new circuit for each hostname when connecting via an .onion alternative service, the circuit number cannot be used to link connections to two different sites together.</p><p>Note that all of this works without running any entry, relay, or exit nodes. Therefore the only requests that we see as a result of this feature are the requests that were headed for us anyway. In particular, since no new traffic is introduced, Cloudflare does not gain any more information about what people do on the internet.</p><p>Anonymity, ✓</p><h3 id="is-it-faster">Is it faster?</h3><p>Tor isn’t known for being fast. One reason for that is the physical cost of having packets bounce around in a decentralized network. Connections made through the Cloudflare Onion Service don’t add to this cost because the number of hops is no more than usual.</p><p>Another reason is the bandwidth costs of exit node operators. This is an area that we hope this service can offer relief since it shifts traffic from exit nodes to our own servers, reducing exit node operation costs along with it.</p><p>BONUS: Performance, ✓</p><h3 id="how-do-i-enable-it">How do I enable it?</h3><p>Onion Routing is now available to all Cloudflare customers, enabled by default for Free and Pro plans. The option is available in the Crypto tab of the Cloudflare dashboard.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Screen-Shot-2018-09-20-at-7.36.11-AM.jpg" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure><p></p><h3 id="browser-support">Browser support</h3><p>We recommend using <a href="https://blog.torproject.org/new-release-tor-browser-80">Tor Browser 8.0</a>, which is the first stable release based on Firefox 60 ESR, and supports .onion Alt-Svc headers as well as HTTP/2. The new Tor Browser for Android (alpha) also supports this feature. You can check whether your connection is routed through an onion service or not in the Developer Tools window under the Network tab. If you're using the Tor Browser and you don't see the Alt-Svc in the response headers, that means you're already using the .onion route. 
In future versions of Tor Browser you'll be able to see this <a href="https://trac.torproject.org/projects/tor/ticket/27590">in the UI</a>.</p><p></p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">We&#39;ve got BIG NEWS. We gave Tor Browser a UX overhaul. <br><br>Tor Browser 8.0 has a new user onboarding experience, an updated landing page, additional language support, and new behaviors for bridge fetching, displaying a circuit, and visiting .onion sites.<a href="https://t.co/fpCpSTXT2L">https://t.co/fpCpSTXT2L</a> <a href="https://t.co/xbj9lKTApP">pic.twitter.com/xbj9lKTApP</a></p>&mdash; The Tor Project (@torproject) <a href="https://twitter.com/torproject/status/1037397236257366017?ref_src=twsrc%5Etfw">September 5, 2018</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
</figure><p></p><p>There is also interest from other privacy-conscious browser vendors. Tom Lowenthal, Product Manager for Privacy &amp; Security at <a href="https://brave.com/">Brave</a> said:</p><blockquote>Automatic upgrades to `.onion` sites will provide another layer of safety to Brave’s Private Browsing with Tor. We’re excited to implement this emerging standard.</blockquote><h3 id="any-last-words">Any last words?</h3><p>Similar to Opportunistic Encryption, Opportunistic Onions do not fully protect against attackers who can simply remove the alternative service header. Therefore it is important to use <a href="https://www.eff.org/https-everywhere">HTTPS Everywhere</a> to secure the first request. Once a Tor circuit is established, subsequent requests should stay in the Tor network from source to destination.</p><p>As we maintain and <a href="https://trac.torproject.org/projects/tor/ticket/27502">improve</a> this service we will share what we learn. In the meanwhile, feel free to try out this idea on <a href="https://github.com/mahrud/caddy-altonions">Caddy</a> and reach out to us with any comments or suggestions that you might have.</p><h3 id="acknowledgments">Acknowledgments</h3><p>Patrick McManus of Mozilla for enabling support for .onion alternative services in Firefox; Arthur Edelstein of the Tor Project for reviewing and enabling HTTP/2 and HTTP Alternative Services in Tor Browser 8.0; Alexander Færøy and George Kadianakis of the Tor Project for adding support for Proxy Protocol in onion services; the entire Tor Project team for their invaluable assistance and discussions; and last, but not least, many folks at Cloudflare who helped with this project.</p><h4 id="addresses-used-by-the-cloudflare-onion-service">Addresses used by the Cloudflare Onion Service</h4><pre><code>cflarexljc3rw355ysrkrzwapozws6nre6xsy3n4yrj7taye3uiby3ad.onion
cflarenuttlfuyn7imozr4atzvfbiw3ezgbdjdldmdx7srterayaozid.onion
cflares35lvdlczhy3r6qbza5jjxbcplzvdveabhf7bsp7y4nzmn67yd.onion
cflareusni3s7vwhq2f7gc4opsik7aa4t2ajedhzr42ez6uajaywh3qd.onion
cflareki4v3lh674hq55k3n7xd4ibkwx3pnw67rr3gkpsonjmxbktxyd.onion
cflarejlah424meosswvaeqzb54rtdetr4xva6mq2bm2hfcx5isaglid.onion
cflaresuje2rb7w2u3w43pn4luxdi6o7oatv6r2zrfb5xvsugj35d2qd.onion
cflareer7qekzp3zeyqvcfktxfrmncse4ilc7trbf6bp6yzdabxuload.onion
cflareub6dtu7nvs3kqmoigcjdwap2azrkx5zohb2yk7gqjkwoyotwqd.onion
cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd.onion
</code></pre>
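<p>For readers who want to check this themselves, here is a minimal Go sketch of the handshake described in the numbered steps earlier: it dials the last onion address above through a local Tor SOCKS proxy (assumed to listen on 127.0.0.1:9050, the standalone tor default; Tor Browser uses 9150) and lets the TLS stack verify a certificate for the original hostname rather than the .onion name. This is our own illustration, not how Tor Browser implements it.</p><pre><code>// Connect to a Cloudflare onion address but authenticate "cloudflare.com".
package main

import (
    "crypto/tls"
    "fmt"
    "log"

    "golang.org/x/net/proxy"
)

func main() {
    onion := "cflare2nge4h4yqr3574crrd7k66lil3torzbisz6uciyuzqc2h2ykyd.onion:443"

    // Tor's SOCKS5 proxy accepts .onion hostnames and builds the circuit.
    dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:9050", nil, proxy.Direct)
    if err != nil {
        log.Fatal(err)
    }
    conn, err := dialer.Dial("tcp", onion)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // The certificate is validated for the original hostname, not the onion.
    cfg := new(tls.Config)
    cfg.ServerName = "cloudflare.com"
    tlsConn := tls.Client(conn, cfg)
    if err := tlsConn.Handshake(); err != nil {
        log.Fatal(err)
    }
    fmt.Println("verified certificate chain for:", cfg.ServerName)
    fmt.Println("served by:", tlsConn.ConnectionState().PeerCertificates[0].Subject)
}
</code></pre>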
<p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week.png" class="kg-image" alt="Introducing the Cloudflare Onion Service"></figure>]]></content:encoded></item><item><title><![CDATA[RPKI and BGP: our path to securing Internet Routing]]></title><description><![CDATA[This article will talk about our approach to network security using technologies like RPKI to sign Internet routes and protect our users and customers from route hijacks and misconfigurations.]]></description><link>https://blog.cloudflare.com/rpki-details/</link><guid isPermaLink="false">5ba07de2c24d3800bf438c0c</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[Product News]]></category><category><![CDATA[RPKI]]></category><category><![CDATA[BGP]]></category><dc:creator><![CDATA[Jérôme Fleury]]></dc:creator><pubDate>Wed, 19 Sep 2018 12:01:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/rpki-copy@6.5x-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/rpki-copy@6.5x-2.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/rpki-copy@6.5x-1.png" alt="RPKI and BGP: our path to securing Internet Routing"><p>This article will talk about our approach to network security using technologies like RPKI to sign Internet routes and protect our users and customers from route hijacks and misconfigurations. We are proud to announce we have started deploying active filtering by using RPKI for routing decisions and signing our routes.</p><p>Back in April, a number of articles, including our blog post on <a href="https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/">BGP and route-leaks</a>, highlighted how IP addresses can be redirected maliciously or by mistake. While enormous, the underlying routing infrastructure, the bedrock of the Internet, has remained mostly unsecured.</p><p>At Cloudflare, we decided to secure our part of the Internet by protecting our customers and everyone using our services, including our recursive resolver <a href="https://www.cloudflare.com/learning/dns/what-is-1.1.1.1/">1.1.1.1</a>.</p><h3 id="from-bgp-to-rpki-how-do-we-internet">From BGP to RPKI, how do we Internet?</h3><p>A prefix is a range of IP addresses, for instance, <code>10.0.0.0/24</code>, whose first address is <code>10.0.0.0</code> and the last one is <code>10.0.0.255</code>. A computer or a server usually has one. A router creates a list of reachable prefixes called a routing table and uses this routing table to transport packets from a source to a destination. </p><p>On the Internet, network devices exchange routes via a protocol called <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/">BGP</a> (Border Gateway Protocol). BGP enables a map of the interconnections on the Internet so that packets can be sent across different networks to reach their final destination. BGP binds the separate networks together into the Internet.</p><p>This dynamic protocol is also what makes the Internet so resilient by providing multiple paths in case a router on the way fails. 
A BGP announcement is usually composed of a <em>prefix</em> which can be reached at a <em>destination</em> and was originated by an <em>Autonomous System Number</em> (ASN).</p><p>IP addresses and Autonomous System Numbers are allocated by five Regional Internet Registries (RIRs): <a href="https://afrinic.net/">Afrinic</a> for Africa, <a href="https://www.apnic.net/">APNIC</a> for Asia-Pacific, <a href="https://www.arin.net">ARIN</a> for North America, <a href="https://www.lacnic.net">LACNIC</a> for Central and South America and <a href="https://www.ripe.net">RIPE</a> for Europe, the Middle East and Russia. Each one operates independently.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/rirs-01.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><p></p><p>With more than 700,000 IPv4 routes and 40,000 IPv6 routes announced by all Internet actors, it is difficult to know who owns which resource.</p><p>There is no simple relationship between the entity that has a prefix assigned, the one that announces it with an ASN and the ones that receive or send packets with these IP addresses. An entity owning <code>10.0.0.0/8</code> may be delegating a subset <code>10.1.0.0/24</code> of that space to another operator while that space is announced through the AS of another entity.</p><p>Thus, a route leak or a route hijack is defined as the illegitimate advertisement of an IP space. The term <em>route hijack</em> implies a malicious purpose while a route leak usually happens because of a misconfiguration.</p><p>A change in route will cause the traffic to be redirected via other networks. Unencrypted traffic can be read and modified. HTTP webpages and DNS without DNSSEC are sensitive to these exploits.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/bgp-hijacking-technical-flow-1.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><p></p><p>You can learn more about BGP Hijacking in our <a href="https://www.cloudflare.com/learning/security/glossary/bgp-hijacking/">Learning Center</a>.</p><p>When an illegitimate announcement is detected by a peer, they usually notify the origin and reconfigure their network to reject the invalid route. Unfortunately, the time to detect and act may range from a few minutes to a few days, more than enough to steal cryptocurrencies, <a href="https://en.wikipedia.org/wiki/DNS_spoofing">poison a DNS</a> cache or make a website unavailable.</p><p>A few systems exist to document and prevent illegitimate BGP announcements.</p><p><strong>The Internet Routing Registries (IRR)</strong> are semi-public databases used by network operators to register their assigned Internet resources. Some database maintainers do not check whether the entry was actually made by the owner, nor check if the prefix has been transferred to somebody else. This makes them prone to error and not completely reliable.</p><p><strong>Resource Public Key Infrastructure (RPKI)</strong> is similar to the IRR “route” objects, but adds authentication using cryptography.</p><p>Here’s how it works: each RIR has a root certificate. They can generate a signed certificate for a Local Internet Registry (LIR, a.k.a. a network operator) with all the resources they are assigned (IPs and ASNs). The LIR then signs the prefix containing the origin AS that they intend to use: a ROA (Route Origin Authorization) is created. 
ROAs are just simple X.509 certificates.</p><p>If you are used to SSL/TLS certificates used by browsers to authenticate the holder of a website, then ROAs are the equivalent in the Internet routing world.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/roas@3x-1.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/routing-rpki-2-01.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><p></p><h3 id="signing-prefixes">Signing prefixes</h3><p>Each network operator owning and managing Internet resources (IP addresses, Autonomous System Numbers) has access to their Regional Internet Registry portal. Signing their prefixes through the portal or the API of their RIR is the easiest way to start with RPKI.</p><p>Because of our global presence, Cloudflare has resources in each of the 5 RIR regions. With more than 800 prefix announcements over different ASNs, the first step was to ensure the prefixes we were going to sign were correctly announced.</p><p>We started by signing our less-used prefixes, checked that the traffic levels remained the same and then signed more prefixes. Today about 25% of Cloudflare prefixes are signed. This includes our critical DNS servers and our <a href="https://one.one.one.one">public 1.1.1.1 resolver</a>.</p><h3 id="enforcing-validated-prefixes">Enforcing validated prefixes</h3><p>Signing the prefixes is one thing. But ensuring that the prefixes we receive from our peers match their certificates is another.</p><p>The first part is validating the certificate chain. It is done by synchronizing the RIR databases of ROAs through rsync (although there are some new proposals regarding <a href="https://tools.ietf.org/html/rfc8182">distribution over HTTPS</a>), then checking the signature of every ROA against the RIR’s certificate public key. Once the valid records are known, this information is sent to the routers.</p><p>Major vendors support a protocol called <a href="https://tools.ietf.org/html/rfc6810">RPKI to Router Protocol</a> (abbreviated as RTR). This is a simple protocol for passing a list of valid prefixes with their origin ASN and expected mask length. However, while the RFC defines 4 different secure transport methods, vendors have only implemented the insecure one. Routes sent in clear text over TCP can be tampered with.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/RPKI-diagram-@3x-2.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><p></p><p>With more than 150 routers across the globe, it would be unsafe to rely on these cleartext TCP sessions over the insecure and lossy Internet to reach our validator. We needed local distribution on a link we know to be secure and reliable.</p><p>One option we considered was to install an RPKI RTR server and a validator in each of our 150+ datacenters, but doing so would cause a significant increase in operational cost and reduce debugging capabilities.</p><h4 id="introducing-gortr">Introducing GoRTR</h4><p>We needed an easier way of passing an RPKI cache securely. After some system design sessions, we settled on validating the routes on a central validator, distributing the resulting list of valid routes via our own Content Delivery Network, and running a lightweight local RTR server. 
This server fetches the cache file over HTTPS and passes the routes over RTR.</p><p>Rolling out this system on all our PoPs using automation was straightforward, and we are progressively moving towards enforcing strict validation of RPKI-signed routes everywhere.<br></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/gortr-2-01.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure><h4></h4><p>To encourage adoption of Route Origin Validation on the Internet, we also want to provide this service to everyone, for free. You can already download our <a href="https://github.com/cloudflare/gortr">RTR server</a> with the cache behind Cloudflare. Just configure your <a href="https://www.juniper.net/documentation/en_US/junos/topics/topic-map/bgp-origin-as-validation.html">Juniper</a> or <a href="https://www.cisco.com/c/en/us/td/docs/routers/asr9000/software/asr9k_r6-1/routing/configuration/guide/b-routing-cg-asr9k-61x/b-routing-cg-asr9k-61x_chapter_010.html#concept_A84818AD41744DFFBD094DA7FCD7FE8B">Cisco</a> router. And if you do not want to use our file of prefixes, it is compatible with the RIPE RPKI Validator Export format.</p><p>We are also working on providing a public RTR server using our own <a href="https://www.cloudflare.com/products/cloudflare-spectrum/">Spectrum service</a> so that you will not have to install anything, just make sure you peer with us! Cloudflare is present on many Internet Exchange Points so we are one hop away from most routers.</p><h3 id="certificate-transparency">Certificate transparency</h3><p>A few months ago, <a href="https://blog.cloudflare.com/author/nick-sullivan/">Nick Sullivan</a> introduced our new <a href="https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/">Nimbus Certificate Transparency Log</a>.</p><p>In order to track the certificates issued in the RPKI, our Crypto team created a new Certificate Transparency Log called <a href="https://ct.cloudflare.com/logs/cirrus">Cirrus</a> which includes the five RIRs’ root certificates as trust anchors. Certificate transparency is a great tool for detecting bad behavior in the RPKI because it keeps a permanent record of all valid certificates that are submitted to it in an append-only database that can’t be modified without detection. It also enables users to download the entire set of certificates via an HTTP API.</p><h3 id="being-aware-of-route-leaks">Being aware of route leaks</h3><p>We use services like <a href="https://www.bgpmon.net">BGPmon</a> and other public observation services extensively to ensure quick action if some of our prefixes are leaked. We also have internal BGP and BMP collectors, aggregating more than 60 million routes and processing live updates.</p><p>Our filters make use of this live feed to ensure we are alerted when a suspicious route appears.</p><h3 id="the-future">The future</h3><p>The <a href="https://blog.benjojo.co.uk/post/are-bgps-security-features-working-yet-rpki">latest statistics</a> suggest that around 8.7% of the IPv4 Internet routes are signed with RPKI, but only 0.5% of all the networks apply strict RPKI validation.<br>Even with RPKI validation enforced, a BGP actor could still impersonate your origin AS and advertise your BGP route through a malicious router configuration.</p><p>However, that can be partially mitigated by denser interconnection, which Cloudflare already has through an extensive network of private and public interconnections. 
<br>To be fully effective, RPKI must be deployed by multiple major network operators.</p><p>As said by <a href="http://instituut.net/~job/">Job Snijders</a> from NTT Communications, who’s been at the forefront of the efforts of securing Internet routing:</p><blockquote>If the Internet's major content providers use RPKI and validate routes, the impact of BGP attacks is greatly reduced because protected paths are formed back and forth. It'll only take a small specific group of densely connected organizations to positively impact the Internet experience for billions of end users.</blockquote><p>RPKI is not a bullet-proof solution to securing all routing on the Internet, however it represents the first milestone in moving from trust based to authentication based routing. Our intention is to demonstrate that it can be done simply and cost efficiently. We are inviting operators of critical Internet infrastructure to follow us in a large scale deployment.</p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/CRYPTO-WEEK-banner@2x.png" class="kg-image" alt="RPKI and BGP: our path to securing Internet Routing"></figure>]]></content:encoded></item><item><title><![CDATA[RPKI - The required cryptographic upgrade to BGP routing]]></title><description><![CDATA[We have talked about the BGP Internet routing protocol before. We have talked about how we build a more resilient network and how we can see outages at a country-level via BGP. We have even talked about the network community that is vital to the operation of the global Internet.]]></description><link>https://blog.cloudflare.com/rpki/</link><guid isPermaLink="false">5b98a64cc24d3800bf438b2e</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[BGP]]></category><category><![CDATA[RPKI]]></category><category><![CDATA[Security]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Product News]]></category><dc:creator><![CDATA[Martin J Levy]]></dc:creator><pubDate>Wed, 19 Sep 2018 12:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/33673974352_3085c34cb5_o-1.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cloudflare.com/content/images/2018/09/33673974352_3085c34cb5_o-1.jpg" alt="RPKI - The required cryptographic upgrade to BGP routing"><p>We have talked about the BGP Internet routing protocol before. We have talked about how we build a <a href="https://blog.cloudflare.com/the-internet-is-hostile-building-a-more-resilient-network/">more resilient network</a> and how we can see <a href="https://blog.cloudflare.com/the-story-of-two-outages/">outages at a country-level</a> via BGP. We have even talked about the <a href="https://blog.cloudflare.com/nanog-the-art-of-running-a-network-and-discussing-common-operational-issues/">network community</a> that is vital to the operation of the global Internet.</p><p>Today we need to talk about why existing operational practices for BGP routing and filtering have to significantly improve in order to finally stop route leaks and hijacks; which are sadly pervasive in today’s Internet routing world. 
In fact, the subtle art of running a BGP network and the various tools (both online and within your network's subsystems) that are vital to making the Internet routing world a safe and reliable place to operate need to improve.</p><p>Internet routing and BGP security, along with the operational expertise around them, must improve globally.</p><p></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.cloudflare.com/content/images/2018/09/33673974352_3085c34cb5_o.jpg" class="kg-image" alt="RPKI - The required cryptographic upgrade to BGP routing"><figcaption><a href="https://www.flickr.com/photos/30478819@N08/33673974352/">photo</a> by <a href="https://www.flickr.com/photos/30478819@N08/">Marco Verch</a> by/2.0</figcaption></figure><p></p><p>Nothing specific triggered today’s writing except the fact that Cloudflare has decided that it's high time we took a leadership role to finally secure BGP routing. We believe that each and every network needs to change its mindset towards BGP security both on a day-by-day and a long-term basis.</p><p>It's time to stop BGP route leaks and hijacks by deploying operationally-excellent RPKI!</p><h3 id="cloudflare-commits-to-rpki">Cloudflare commits to RPKI</h3><p>Resource Public Key Infrastructure (RPKI) is a cryptographic method of signing records that associate a BGP route announcement with the correct originating AS number. RPKI is defined in <a href="https://tools.ietf.org/html/rfc6480">RFC6480</a> (An Infrastructure to Support Secure Internet Routing). Cloudflare commits to RPKI.</p><p>Because any route can be originated and announced by any random network, independent of its rights to announce that route, there needs to be an out-of-band method to help BGP manage which network can announce which route. That system exists today. It's part of the <a href="http://www.irr.net/">IRR</a> (Internet Routing Registry) system. Many registries exist, some run by networks, some by RIRs (Regional Internet Registries), and the granddaddy of IRRs, Merit's <a href="https://radb.net">RADB</a> service. This service provides a collective method to allow one network to filter another network's routes.</p><p>This works somewhat. An invalid announcement is normally squashed near-instantly as the route crosses an ASN boundary because one network is meant to filter the other network (based on rules created from the IRR database). This of course doesn’t happen perfectly - in fact, far from it. Route leaks or route hijacks happen more often than they should, a fact that is well documented. Here are the highlights:</p><ul>
<li>1997 - AS7007 mistakenly (re)announces 72,000+ routes (becomes the poster-child for route filtering).</li>
<li>2008 - An ISP in Pakistan <a href="https://www.wired.com/2008/02/pakistans-accid/">accidentally</a> announces YouTube’s IP routes to the world while attempting to blackhole the video service within its own network.</li>
<li>2017 - Russian ISP leaks 36 prefixes for payments services owned by Mastercard, Visa, and major banks.</li>
<li>2018 - BGP hijack of Amazon DNS to <a href="https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/">steal crypto currency</a>.</li>
</ul>
<p>That’s just a partial list! Each route leak or hijack exposes a lack of route filtering by the network that peers with or transits the offending network.</p><p>RPKI comes into the picture because the existing IRR system lacks any form of cryptographic signing for its data. In fact, today the IRR databases contain plenty of invalid data (both stale data and typo’ed data). There's very little control over the creation of invalid data.</p><p>Implementing RPKI is just the first step in better BGP route security because RPKI only secures the route origin; it doesn't secure the path. (Sadly, the same is true for IRR data.) When we want to secure the path, we are going to need something else, but that comes later.</p><h3 id="the-rpki-tl-dr">The RPKI TL;DR</h3><p>BGP routing isn’t secure. Its main hope, RPKI, uses a certificate system that’s akin to secure web browsing (or at least its early days). While secure web browsing has moved on and is far more secure and is somewhat the <a href="https://blog.cloudflare.com/today-chrome-takes-another-step-forward-in-addressing-the-design-flaw-that-is-an-unencrypted-web/">default</a> these days, the state of BGP route validation has not moved forward. To secure BGP routing, all networks would need to embrace RPKI (and more). Cloudflare proposes to plot a course to improve BGP routing security globally by setting an example and implementing best practices, installing operationally excellent software and promoting its RPKI effort worldwide. RPKI is one of our focuses in 2018 and beyond!</p><h3 id="the-simplest-introduction-to-bgp-possible">The simplest introduction to BGP possible</h3><p>BGP isn’t simple. BGP on the global Internet doubly so. This fact should not deter either the casual reader or the seasoned network engineer. What is important is to draw the line around what is worth knowing and to discard all the minor items that make up the very complex world of BGP networking. In fact, operating a BGP-enabled network connected to a telco or ISP isn’t that complicated. It turns out that in the world of BGP, security is an afterthought.</p><p>Let’s begin.</p><p>I’m going to pick a hypothetical example: the configuration of a single university within a country that operates an NREN (National Research &amp; Education Network) for all its universities. This is not uncommon. The university in this case is connected via a single telecommunications link and (using BGP terminology) has a single upstream. The NREN provides all the connectivity to the local and global Internet for its country's universities, along with connectivity to other NRENs in other countries.</p><p>We start with some basics. BGP is about numbers. First off is a unique number called the Autonomous System Number or ASN. This number comes from a range of numbers that are managed by the RIRs (Regional Internet Registries). For example, Cloudflare has the AS number 13335 allocated for its network. ASNs were originally 16-bit numbers, but are now 32-bit numbers (because the internet grew to the point of running out of the 65,536 or 2^16 initial allocation). For our university, we will use 65099 as our example ASN. This is from the reserved block of ASNs and used here for documentation reasons only.</p><p>The second number is the IP addresses allocated to the university. Most readers are familiar with IP addresses; however, in the BGP world we use IP blocks called CIDRs (Classless Inter-Domain Routing). This is a range of IP addresses that are sequential and aligned on binary boundaries. 
Within Cloudflare, we have quite a few IP blocks allocated by the RIRs. For our example, we will assume the university has two blocks allocated: 10.0.0.0/8 and 2001:db8::/32. Both of these are private or documentation addresses, and later on you’ll see them show up again in a different manner when we talk about filtering.</p><p>This is enough for us to get this university ready to connect to the NREN. Or maybe not.</p><h3 id="ready-to-connect">Ready to connect</h3><p>Hold on a second - there’s paperwork to fill in. Not actual paper, but close enough. While the internet is built on the concept of <a href="https://www.ietf.org/blog/2013/05/permissionless-innovation/">permissionless innovation</a>, there are still good practices that need to be adhered to.</p><p>Before you can announce a route via your BGP-speaking router, you need to set up either an IRR route object or an RPKI ROA (or both).</p><h3 id="internet-routing-registries">Internet Routing Registries</h3><p>The IRR (Internet Routing Registries) is used to record a route that will be announced on the Internet and associate it with the ASN that will announce it. In this example we will use the private or documentation ranges of 10.0.0.0/8 and 2001:db8::/32 along with ASN 65099. The simplest IRR routing record looks like this:</p><pre><code>route: 10.0.0.0/8
origin: AS65099
</code></pre>
<p>In reality, we need a lot more to make it fully functional, and we need a place to upload this routing record. You could use your RIR to host your IRR data, or you could use global services like <a href="https://radb.net/">RADB</a> or <a href="https://altdb.net/">ALTDB</a>. You can also use your transit provider in some cases. Once you have an account set up on one of these services, you will be ready to upload this routing record (how you upload it is very specific to the IRR chosen).</p><pre><code>route: 10.0.0.0/8
descr: University of Blogging
descr: Anytown, USA
origin: AS65099
mnt-by: MNT-UNIVERSITY
notify: person@example.com
changed: person@example.com 20180101
source: RADB
</code></pre>
<p>That last line reflects where you store your IRR routing record.</p><h3 id="irr-for-your-asn">IRR for your ASN</h3><p>Just like your IP network blocks, it’s also good to place a record for your ASN in the IRR. When your networking gets more complex, this record will be genuinely needed. It doesn’t hurt to add it now.</p><pre><code>aut-num: AS65099
as-name: UNIVERSITY-OF-BLOGGING-AS
descr: University of Blogging
descr: Anytown, USA
mnt-by: MNT-UNIVERSITY
notify: person@example.com
changed: person@example.com 20180101
source: RADB
</code></pre>
<p>You can check for their existence using the classic command-line whois client (or the RADB website).</p><p>One last item needs to be completed, but not by you.</p><p>Your ASN needs to be placed in the as-set of your upstream ISP (or service provider). The entry there gives the rest of the global Internet an indication that your ASN is allowed to be routed via your upstream (the NREN in this case). If all goes well, something like this will show up in the IRRs.</p><pre><code>as-set: AS-NREN
descr: NREN of country XX
members: ...
members: AS65099
members: ...
mnt-by: MNT-NREN
notify: person@example.edu
changed: person@example.edu 20180101
source: RADB
</code></pre>
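<p>As mentioned earlier, the classic command-line whois client lets you confirm these objects once they have been published. A minimal check against RADB might look like this (the <code>-h</code> flag selects which whois server to query; the object names match the examples above):</p><pre><code>whois -h whois.radb.net 10.0.0.0/8
whois -h whois.radb.net AS65099
whois -h whois.radb.net AS-NREN
</code></pre><p>Each query should return the corresponding route, aut-num or as-set object if the upload was successful.</p>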
<p>The members section of this as-set provides the list of ASNs that are announced via the upstream (the NREN). We have not defined the upstream's ASN yet, so let's pretend it is AS65001 (this ASN is also from the documentation range).</p><h3 id="getting-the-university-online">Getting the university online</h3><p>BGP (like everything in networking) needs some configuration. This configuration would exist on a network router at the edge of your network, or whatever device is being used to connect the local network to the upstream (the NREN). We are using a very simple router config here to show the minimum configuration needed. Your configuration language could be different.</p><pre><code>router bgp 65099
neighbor 192.168.0.2 remote-as 65001
neighbor 192.168.0.2 prefix-list as65001-listen in
neighbor 192.168.0.2 route-map as65001-listen in
</code></pre>
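<p>The session above references a prefix-list and a route-map that would also need to exist. A deliberately minimal sketch, in the same Cisco-style syntax (the policy itself is illustrative only: drop our own address space if it is ever announced back at us, and accept everything else down to a /24):</p><pre><code>ip prefix-list as65001-listen seq 5 deny 10.0.0.0/8 le 32
ip prefix-list as65001-listen seq 10 permit 0.0.0.0/0 le 24
!
route-map as65001-listen permit 10
 set local-preference 100
</code></pre>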
<p>This is a very trivial example (it’s missing a complete filter configuration that’s normally required). The key point is that the router doesn’t contain any code or language regarding the IRR entries shown above. That’s because the IRR entries are out-of-band. They exist outside of the BGP protocol. In other words, it takes more than just configuring a BGP session in order to actually connect to the global Internet.</p><p>The key filtering comes into play on the upstream (the NREN in this example). It’s the job of that network to confirm everything heard from its customer. </p><h3 id="rpki-vs-irr-why-is-it-so-important">RPKI vs IRR - why is it so important?</h3><p>Two global databases are being discussed today. IRR &amp; RPKI. While IRR is clearly in use today; it’s not the primary focus herein. However, it’s the de-facto bridging option for route filtering today.</p><p>As stated above, Internet Routing Registries (IRRs) have a very loose security model. This has been known for a long time. Records exist within IRRs that are both clearly wrong and/or are clearly missing. There’s no cryptographic signing of records. There are multiple suppliers of IRR data; some better than others. IRR still has some proponents that want to clean up its operational data (including the author of this blog). Efforts like <a href="https://github.com/irrdnet/irrd4/">IRRD4</a> (by Job Snijders @ NTT) could help clean-up IRR usage. IRR is not the main focus herein.</p><p>Resource Public Key Infrastructure (RPKI) is a cryptographic method of signing records that associate a route with an originating AS number. Presently the five RIRs (AFRINIC, APNIC, ARIN, LACNIC &amp; RIPE) provide a method for members to take an IP/ASN pair and sign a ROA (Route Origin Authorization) record. The ROA record is what we need to focus on.</p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/roas@3x-2.png" class="kg-image" alt="RPKI - The required cryptographic upgrade to BGP routing"></figure><p>Once a route is signed; it can propagate to anyone that wants to use the data to filter routing or monitor this data as ROAs are public. A ROA is a digitally signed object that makes use of <a href="https://tools.ietf.org/html/rfc3852">RFC3852</a> Cryptographic Message Syntax (CMS) as a standard encapsulation format. In fact ROAs are X.509 certificates as defined in <a href="https://tools.ietf.org/html/rfc5280">RFC5280</a> (Internet X.509 Public Key Infrastructure Certificate) and <a href="https://tools.ietf.org/html/rfc3779">RFC3779</a> (X.509 Extensions for IP Addresses and AS Identifiers).</p><p>As the ROA is a digitally signed object, it provides a means of verifying that an IP address block holder has authorized an AS (Autonomous System) to originate routes to that one or more prefixes within the address block. The RPKI system provides an attestation method for BGP routing.</p><blockquote>define attestation:<br> ... the action of bearing witness<br> ... something which bears witness, confirms or authenticates</blockquote><p>The existence of routing information (an IP block plus the matching ASN) within a valid certificate (i.e. something that can be validated against the RIRs authoritative data cryptographically) is the missing part of the BGP security system and something that the IRR system can't provide. 
With it, you really know who should be doing what with a BGP route.</p><h3 id="where-are-the-certificates-if-they-are-not-in-the-bgp-protocol">Where are the certificates if they are not in the BGP protocol?</h3><p>Good question. As we said above, the routing databases are outside of the BGP protocol. Both IRR and RPKI use third-party entities to hold the database information. The difference is that with RPKI the same entity that allocated or assigned a numeric resource (like an IP address or ASN) also holds the CA (Certificate Authority) used to validate the ROA records.</p><p>In the RPKI world, CAs are called TAs, or Trust Anchors. However, if you are familiar with the web security model, then you are familiar with what a TA is. </p><h3 id="who-could-operate-a-ta">Who could operate a TA?</h3><p>Today the five RIRs are the TAs for RPKI. This makes sense. Only the RIRs know who is the owner of IP space (and ASNs). The present-day RPKI systems operate in conjunction with existing RIR login credentials. Once you can log in to a portal and control your IP allocations and ASN allocations, you can also create, edit, modify, and delete RPKI data in the form of ROAs. This is the basis of how RPKI separates itself from the IRR. You can only sign your own resources. You can’t just randomly create data. If you lose your RIR allocation, then you lose the RPKI data.<br>From a policy point of view, there are some interesting issues that become apparent pretty quickly. First off, an ISP with an allocation needs to keep its RIR membership up to date (i.e. pay its dues). Second, it needs to be aware that the RIR and the ISP could be legal entities based in different countries, and hence international law plays a role in any dispute between the ISP and the RIR, or in fact any third party that gets involved in an IP address dispute. This has been a concern within the RIPE (Europe, the Middle East and parts of Central Asia) region as RIPE is based in The Netherlands. Similarly, ARIN (North America and parts of the Caribbean) is a US entity.</p><h3 id="which-rir-for-which-ip-address">Which RIR for which IP address?</h3><p>Presently, because of the large number of IP address transfers occurring between some RIR regions, the RIRs changed their TA root certificates so that each RIR includes every available IP address (0.0.0.0/0 &amp; ::/0) and every available AS number (0-4,294,967,295). IP numeric space and ASN numeric space are well defined as follows:</p><pre><code>IPv4: 0.0.0.0 - 255.255.255.255
IPv6: 0000:0000:0000:0000:0000:0000:0000:0000 - ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff
ASN: 1 - 4,294,967,295 (AS 0 is unused)
</code></pre>
<p><a href="https://www.iana.org/">IANA</a> (Internet Assigned Numbers Authority) holds the master list for this space and divvies it up among the five RIRs as allocations or assignments. The IPv4 and IPv6 assignments can be seen <a href="https://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml">here</a> and <a href="https://www.iana.org/assignments/ipv6-unicast-address-assignments/ipv6-unicast-address-assignments.xhtml">here</a>. ASNs can be found <a href="https://www.iana.org/assignments/as-numbers/as-numbers.xhtml">here</a>. For example, here’s an abbreviated overview of how IPv6 space is allocated to the various RIRs.</p><pre><code>Prefix Designation Date WHOIS Status
2001:0000::/23 IANA 1999-07-01 whois.iana.org ALLOCATED
2001:0200::/23 APNIC 1999-07-01 whois.apnic.net ALLOCATED
2001:0400::/23 ARIN 1999-07-01 whois.arin.net ALLOCATED
...
2001:1200::/23 LACNIC 2002-11-01 whois.lacnic.net ALLOCATED
...
2001:4200::/23 AFRINIC 2004-06-01 whois.afrinic.net ALLOCATED
...
2002:0000::/16 6to4 2001-02-01 ALLOCATED
...
2a00:0000::/12 RIPE NCC 2006-10-03 whois.ripe.net ALLOCATED
2c00:0000::/12 AFRINIC 2006-10-03 whois.afrinic.net ALLOCATED
...
</code></pre>
<p>As stated above, each RIR holds a root key (a TA, or Trust Anchor) that gives it the ability to create signed records below its root. Below that TA there is a certificate that covers the exact space allocated or assigned to the specific RIR. This allows the TA to be somewhat static (or stable) and the RIR to update the underlying records as needed.</p><h3 id="who-is-implementing-rpki-today">Who is implementing RPKI today?</h3><p>Sadly, not enough people or networks. While each RIR supports RPKI for its members, the toolset for successfully operating a network with RPKI-enabled route filtering is still very limited.</p><p>It turns out that IXPs (Internet Exchange Points) have started to realize that filtering using RPKI is a valid option for their route-servers.</p><p>In addition, a handful of networks are also participating in both signing IP routes and verifying IP routes via RPKI. This isn’t quite enough to secure the global Internet yet.</p><p>Then there's the Dutch!</p><p>In early September, the NLNOG technical meeting featured a non-trivial number of RPKI-related talks. It seems that local Dutch operators and software developers are taking RPKI seriously and it’s possible that The Netherlands may contain some of the more forward-thinking RPKI networks around. Read more <a href="https://nlnog.net/nlnog-day-2018/">here</a>.</p><h3 id="mutually-agreed-norms-for-routing-security-manrs-">Mutually Agreed Norms for Routing Security (MANRS)</h3><p>The Internet Society (Cloudflare is a strong supporter of this organization) has pushed an initiative called <a href="https://www.internetsociety.org/issues/manrs/">MANRS</a> (Mutually Agreed Norms for Routing Security) in order to convince the network operator community to implement routing security. It focuses on Filtering, Anti-spoofing, Coordination, and Global Validation. The Internet Society is doing a good job of educating networks on the importance of better routing security. While they do educate networks about various aspects of running a healthy BGP environment, it's not an effort that creates any of the required new technologies. MANRS simply promotes best practices, which is a good start and something Cloudflare can collaborate on. That all said, we think it’s simply too polite an effort, as it doesn’t have enough teeth to quickly change how networks behave.</p><p>Cloudflare also wants to move the BGP community further along the RPKI path. Our operational efforts can, and should, coexist with The Internet Society’s MANRS initiative; however, we're focusing on operationally viable solutions that help move the global network community much further along.</p><h3 id="how-is-rpki-deployed-in-a-real-operational-network">How is RPKI deployed in a real operational network</h3><p>As network operators don’t want to run cryptographic software on the control plane of a router (or even have RPKI data anywhere near the control plane), the normal deployment is to pair routers with a server.</p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/RPKI-diagram-@3x-3.png" class="kg-image" alt="RPKI - The required cryptographic upgrade to BGP routing"></figure><p><br>The server runs all the RPKI code (including the crypto processing of the TA, the certificate tree, and the ROAs). When the router sees a new route, it sends a simple message across a communications path (that includes the origin AS plus the IP route). 
The server, running a validator, responds with a yes/no answer that drives the filtering of that BGP route. This lightweight protocol is defined in <a href="https://tools.ietf.org/html/rfc6810">RFC6810</a>, then updated later to include some BGPsec support in <a href="https://tools.ietf.org/html/rfc8210">RFC8210</a> (The Resource Public Key Infrastructure (RPKI) to Router Protocol). This lightweight protocol is nicknamed “RTR”.</p><p>Present implementations include https://github.com/rtrlib/rtrlib (in ‘C’) and NIST’s package https://www.nist.gov/services-resources/software/bgp-secure-routing-extension-bgp-srx-prototype which is based on quagga; hence not usable in production.</p><p>Operationally, neither are fully usable within production environments.<br>The RIPE validator https://github.com/RIPE-NCC/rpki-validator-3 (written in Java) can produce filter sets similar to IRR tools and seems to be the most prevalent tool for the limited number of RPKI setups found in networks today. There's recently a software release from NLnet Labs research group which is Rust-based. Their RPKI validator is called <a href="https://github.com/NLnetLabs/routinator">Routinator 3000</a>.</p><p>The industry still needs some more operationally-focused software!</p><h3 id="can-everyone-participate-in-rpki-routing-filtering">Can everyone participate in RPKI routing filtering?</h3><p>Yes. No. Maybe. Ask your lawyers.</p><p>For many years there’s been a solid discussion about the role of the RIRs as holders of the private key of the CA at the top of their tree. Five trees. IANA was meant to run a single root above them (similar to how DNSSEC works with one key held at the DNS root - or dot); but that didn’t happen for many reasons including the fact that IANA/ICANN was essentially reporting to the US government back when this was all being setup. The RIR setup has stuck and at this point no-one expects IANA to ever hold a single root certificate, plus it’s all historic at this point and not worth rehashing here.</p><p>This is not a major operational issue; however it does have some slight consequences. While having five roots could be considered a messy setup, it actually matches the web space CA model.</p><p>Some RIR regions have special issues. <a href="https://arin.net/">ARIN</a> (in North America and portions of the Caribbean) has a TA and ROAs; but wants full indemnification should the data be wrong or used incorrectly. In the <a href="https://ripe.net/">RIPE</a> region (Europe, ME &amp; Russia), the members voted down full support for RPKI because they didn’t want to have a Dutch entity (RIPE NCC) hold a certificate for a non-Dutch entity and have a Dutch LEA letter shutdown a network by forcing that certificate to be invalidated. Read their respective terms of service:</p><ul>
<li>ARIN at <a href="https://www.arin.net/resources/rpki/tal.html">https://www.arin.net/resources/rpki/tal.html</a> &amp; <a href="https://www.arin.net/resources/rpki/rpa.pdf">https://www.arin.net/resources/rpki/rpa.pdf</a></li>
<li>RIPE at <a href="https://www.ripe.net/manage-ips-and-asns/resource-management/certification/legal/ripe-ncc-certification-repository-terms-and-conditions">https://www.ripe.net/manage-ips-and-asns/resource-management/certification/legal/ripe-ncc-certification-repository-terms-and-conditions</a></li>
</ul>
<p>The legal issues aren’t the focus of this blog entry; but it will be obvious later when implementing RPKI as-to-why the legal issues become an impediment to successful global RPKI deployment.</p><h3 id="irr-legacy-or-bridging-solution">IRR - legacy or bridging solution?</h3><p>Everyone assumes that IRR will ultimately go away; however, that’s a long long way out. There’s efforts underway to make IRR data cleaner, and in some cases, to (finally) link the underlying RPKI &amp; IRR data together. They are very similar data; but with different security models.</p><p>This blog post was written with RPKI as the go-forward methodology in mind and hence does not need to address all the subtle issues around IRR brokenness. It would be a whole fresh blog post to address the legacy issues within IRR. That said, it’s clear that RPKI isn’t today a complete substitute for all the IRR data (and RPSL/RPSLng data) that exists today. The good news is that there’s work within the IETF and drafts in-flight to cover that. RPKI is a good protocol to base route filtering on and Cloudflare will be rolling out full support for RPKI enabled filtering and route announcements within its global network.</p><p>If you look back at the examples above of the university and its NREN, then realize that in the RPKI world the same information is being stored; however, the validity and attestation of the data increases n-fold once RPKI becomes the universal mechanism of choice.</p><p>Cloudflare wants to see this happen and will push for RPKI to become mainstream within the BGP world. We want to squash the existence of BGP route leaks and hijacks forever!</p><h3 id="next-steps">Next steps</h3><p>Read the <a href="https://blog.cloudflare.com/rpki-details/">RPKI and BGP: securing our part of the Internet</a> blog entry to follow what we are doing on the technical side for Cloudflare’s RPKI implementation.</p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week-1-1-2.png" class="kg-image" alt="RPKI - The required cryptographic upgrade to BGP routing"></figure>]]></content:encoded></item><item><title><![CDATA[Expanding DNSSEC Adoption]]></title><description><![CDATA[Cloudflare first started talking about DNSSEC in 2014 and at the time, Nick Sullivan wrote: “DNSSEC is a valuable tool for improving the trust and integrity of DNS, the backbone of the modern Internet.”]]></description><link>https://blog.cloudflare.com/automatically-provision-and-maintain-dnssec/</link><guid isPermaLink="false">5b9c2722c24d3800bf438bc6</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[DNSSEC]]></category><category><![CDATA[DNS]]></category><category><![CDATA[Product News]]></category><category><![CDATA[Reliability]]></category><dc:creator><![CDATA[Sergi Isasi]]></dc:creator><pubDate>Tue, 18 Sep 2018 13:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/DNSSEC-1.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/DNSSEC.png" class="kg-image" alt="Expanding DNSSEC Adoption"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/DNSSEC-1.png" alt="Expanding DNSSEC Adoption"><p>Cloudflare first started 
talking about <a href="https://www.cloudflare.com/dns/dnssec/universal-dnssec/">DNSSEC</a> in <a href="https://blog.cloudflare.com/dnssec-an-introduction/">2014</a> and at the time, <a href="https://twitter.com/grittygrease">Nick Sullivan</a> wrote: “DNSSEC is a valuable tool for improving the trust and integrity of DNS, the backbone of the modern Internet.”</p><p>Over the past four years, it has become an even more critical part of securing the internet. While <a href="https://blog.cloudflare.com/chrome-not-secure-for-http/">HTTPS</a> has gone a long way in preventing user sessions from being hijacked and maliciously (or innocuously) redirected, not all internet traffic is HTTPS. A safer Internet should secure every possible layer between a user and the origin they are intending to visit.</p><p>As a quick refresher, DNSSEC allows a user, application, or recursive resolver to trust that the answer to their DNS query is what the domain owner intends it to be. Put another way: DNSSEC proves authenticity and integrity (though not confidentiality) of a response from the authoritative nameserver. Doing so makes it much harder for a bad actor to inject malicious DNS records into the resolution path through <a href="https://blog.cloudflare.com/bgp-leaks-and-crypto-currencies/">BGP Leaks</a> and cache poisoning. Trust in DNS matters even more when a domain is publishing <a href="https://blog.cloudflare.com/additional-record-types-available-with-cloudflare-dns/">record types</a> that are used to declare trust for other systems. As a specific example, DNSSEC is helpful for preventing malicious actors from obtaining fraudulent certificates for a domain. <a href="https://blog.powerdns.com/2018/09/10/spoofing-dns-with-fragments/">Research</a> has shown how DNS responses can be spoofed for domain validation.</p><p>This week we are announcing our full support for CDS and CDNSKEY from <a href="https://datatracker.ietf.org/doc/rfc8078/">RFC 8078</a>. Put plainly: this will allow for setting up of DNSSEC without requiring the user to login to their registrar to upload a DS record. Cloudflare customers on supported registries will be able to enable DNSSEC with the click of one button in the Cloudflare dashboard.</p><h3 id="validation-by-resolvers">Validation by Resolvers</h3><p>DNSSEC’s largest problem has been adoption. The number of DNS queries validated by recursive resolvers for DNSSEC has remained flat. Worldwide, less than 14% of DNS requests have DNSSEC validated by the resolver according to our friends at <a href="https://stats.labs.apnic.net/dnssec/XA?c=XA&amp;x=1&amp;g=1&amp;r=1&amp;w=7&amp;g=0">APNIC</a>. The blame here falls on the shoulders of the default DNS providers that most devices and users receive from DHCP via their ISP or network provider. Data shows that some countries do considerably better: Sweden, for example, has over <a href="https://stats.labs.apnic.net/dnssec/XE?o=cXAw7x1g1r1">80% of their requests validated</a>, showing that the default DNS resolvers in those countries validate the responses as they should. APNIC also has a fun <a href="https://stats.labs.apnic.net/dnssec">interactive map</a> so you can see how well your country does.</p><p>So what can we do? To ensure your resolver supports DNSSEC, visit <a href="http://brokendnssec.net/">brokendnssec.net</a> in your browser. If the page <strong>loads,</strong> you are not protected by a DNSSEC validating resolver and should <a href="https://1.1.1.1/#setup-instructions">switch your resolver</a>. 
However, in order to really move the needle across the internet, Cloudflare encourages network providers to either turn on the validation of DNSSEC in their software or switch to publicly available resolvers that validate DNSSEC by default. Of course we have <a href="https://one.one.one.one">a favourite</a>, but there are other fine choices as well. </p><h3 id="signing-of-zones">Signing of Zones</h3><p>Validation handles the user side, but another problem has been the signing of the zones themselves. Initially, there was concern about adoption at the TLD level given that TLD support is a requirement for DNSSEC to work. This is now largely a non-issue with over 90% of TLDs signed with DS records in the root zone, as of <a href="http://stats.research.icann.org/dns/tld_report/">2018-08-27</a>.</p><p>It’s a different story when it comes to the individual domains themselves. Per <a href="https://usgv6-deploymon.antd.nist.gov/cgi-bin/generate-com">NIST data</a>, a woefully low 3% of the Fortune 1000 sign their primary domains. Some of this is due to apathy by the domain owners. However, some large DNS operators do not yet support the option at all, requiring domain owners who want to protect their users to move to another provider altogether. If you are on a service that does not support DNSSEC, we encourage you to switch to one that does and let them know that was the reason for the switch. Other large operators, such as GoDaddy, charge for DNSSEC. Our stance here is clear: DNSSEC should be available and included at all DNS operators for free.</p><h3 id="the-ds-parent-issue">The DS Parent Issue</h3><p>In December of 2017, APNIC wrote about <a href="https://blog.apnic.net/2017/12/06/dnssec-deployment-remains-low/">why DNSSEC deployment remains so low</a> and that remains largely true today. One key point was that the number of domain owners who attempt DNSSEC activation but do not complete it is very high. Using Cloudflare as an example, APNIC measured that 40% of those who enabled DNSSEC in the Cloudflare Dash (evidenced by the presence of a DNSKEY record) were actually successful in serving a DS key from the registry. Current data over a recent 90 day period is slightly better: we are seeing just over 50% of all zones which attempted to enable DNSSEC were able to complete the process with the registry (Note: these domains still resolve, they are just still not secured). Of our most popular TLDs, .be and .nl have success rates of over 70%, but these numbers are still not where we would want them to be in an ideal world. The graph below shows the specific rates for the most popular TLDs (most popular from left to right).</p><p></p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://blog.cloudflare.com/content/images/2018/09/Graph.png" class="kg-image" alt="Expanding DNSSEC Adoption"></figure><p><br><br>This end result is likely not surprising to anyone who has tried to add a DS record to their registrar. Locating the part of the registrar UI that houses DNSSEC can be problematic, as can the UI of adding the record itself. Additional factors such as varying degrees of technical knowledge amongst users and simply having to manage multiple logins and roles can also explain the lack of completion in the process. Finally, varying levels of DNSSEC compatibility amongst registrars may prevent even knowledgeable users from creating DS records in the parent.</p><p>As an example, at Cloudflare, we took a minimalist UX approach for adding DS records for delegated child domains. 
A novice user may not understand the fields and requirements for the DS record:<br></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/pasted-image-0.png" class="kg-image" alt="Expanding DNSSEC Adoption"></figure><h3 id="cds-and-cdnskey"><br><br>CDS and CDNSKEY</h3><p>As mentioned in the aforementioned APNIC blog, Cloudflare is supportive of <a href="https://datatracker.ietf.org/doc/rfc8078/">RFC 8078</a> and the CDS and CDNSKEY records. This should come as no surprise given that our own <a href="https://twitter.com/OGudm">Olafur Gudmundsson</a> is a co-author of the RFC. CDS and CDNSKEY are records that mirror the DS and DNSKEY record types but are designated to signal the parent/registrar that the child domain wishes to enable DNSSEC and have a DS record presented by the registry. We have been pushing for automated solutions in this space for <a href="https://blog.cloudflare.com/updating-the-dns-registration-model-to-keep-pace-with-todays-internet/">years</a> and are encouraging the industry to move with us. </p><p>Today, we are announcing General Availability and full support for CDS and CDNSKEY records for all Cloudflare managed domains that enable DNSSEC in the Cloudflare dash.</p><h3 id="how-it-works">How It Works</h3><p>Cloudflare will publish CDS and CDNSKEY records for all domains who enable DNSSEC. Parent registries should scan the nameservers of the domains under their purview and check for these rrsets. The presence of a CDS key for a domain delegated to Cloudflare indicates that a verified Cloudflare user has enabled DNSSEC within their dash and that the parent operator (a registrar or the registry itself) should take the CDS record content and create the requisite DS record to start signing the domain. TLDs .ch and .cz already support this automated method through Cloudflare and any other DNS operators that choose to support RFC8078. The registrar <a href="https://www.gandi.net/">Gandi</a> and a number of TLDs have indicated support in the near future.<br><br></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Flow.png" class="kg-image" alt="Expanding DNSSEC Adoption"></figure><p><br>Cloudflare also supports CDS0 for the removal of the DS record in the case that the user transfers their domain off of Cloudflare or otherwise disables DNSSEC.</p><h3 id="best-practices-for-parent-operators">Best Practices for Parent Operators</h3><p>Below are a number of suggested procedures that parent registries may take to provide for the best experience for our users: </p><ul><li><em>Scan Selection</em> - Parent Operators should only scan their child domains who have nameservers pointed at Cloudflare (or other DNS operators who adopt RFC8078). Cloudflare nameservers are indicated *.ns.cloudflare.com.</li><li><em>Scan Regularly</em> - Parent Operators should scan at regular intervals for the presence and change of CDS records. A scan every 12 hours should be sufficient, though faster is better. </li><li><em>Notify Domain Contacts</em> - Parent Operators should notify their designated contacts through known channels (such as email and/or SMS) for a given child domain upon detection of a new CDS record and an impending change of their DS record. 
The Parent Operator may also wish to provide a standard delay (24 hours) before changing the DS record to allow the domain contact to cancel or otherwise change the operation.</li><li><em>Verify Success</em> - Parent Operators must ensure that the domain continues to resolve after being signed. Should the domain fail to resolve immediately after changing the DS record, the Parent Operator must fall back to the previous functional state and should notify designated contacts.</li></ul><h3 id="what-does-this-all-mean-and-what-s-next">What Does This All Mean and What’s Next?</h3><p>For Cloudflare customers, this means an easier implementation of DNSSEC once your registry/registrar supports CDS and CDNSKEY. Customers can also enable DNSSEC for free on Cloudflare and manually enter the DS to the parent. To check your domain’s DNSSEC status, <a href="http://dnsviz.net/d/cloudflare.com/dnssec/">DNSViz (example cloudflare.com</a>) has one of the most standards compliant tools online.</p><p>For registries and registrars, we are taking this step with the hope that more providers support RFC8078 and help increase the global adoption of technology that helps end users be less vulnerable to DNS attacks on the internet.</p><p>For other DNS operators, we encourage you to join us in supporting this method as the more major DNS operators that publish CDS and CDNSKEY, the more likely it will be that the registries will start looking for and use them.</p><p>Cloudflare will continue pushing down this path and has plans to create and open source additional tools to help registries and operators push and consume records. If this sounds interesting to you, we are <a href="https://www.cloudflare.com/careers/">hiring</a>.</p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week-1-1-1.png" class="kg-image" alt="Expanding DNSSEC Adoption"></figure>]]></content:encoded></item><item><title><![CDATA[End-to-End Integrity with IPFS]]></title><description><![CDATA[This post describes how to use Cloudflare’s IPFS gateway to set up a website which is end-to-end secure, while maintaining the performance and reliability benefits of being served from Cloudflare’s edge network.]]></description><link>https://blog.cloudflare.com/e2e-integrity/</link><guid isPermaLink="false">5b9beb04c24d3800bf438b95</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[IPFS]]></category><category><![CDATA[Universal SSL]]></category><category><![CDATA[SSL]]></category><category><![CDATA[DNSSEC]]></category><dc:creator><![CDATA[Brendan McMillion]]></dc:creator><pubDate>Mon, 17 Sep 2018 13:02:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/Screen-Shot-2018-09-14-at-1.33.24-PM.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cloudflare.com/content/images/2018/09/Screen-Shot-2018-09-14-at-1.33.24-PM.png" alt="End-to-End Integrity with IPFS"><p>This post describes how to use Cloudflare's IPFS gateway to set up a website which is end-to-end secure, while maintaining the performance and reliability benefits of being served from Cloudflare’s edge network. 
If you'd rather read an introduction to the concepts behind IPFS first, you can find that in <a href="https://blog.cloudflare.com/distributed-web-gateway/">our announcement</a>. Alternatively, you could skip straight to the <a href="https://developers.cloudflare.com/distributed-web/">developer docs</a> to learn how to set up your own website.</p>
<p>By 'end-to-end security', I mean that neither the site owner nor users have to trust Cloudflare to serve the correct documents, like they do now. This is similar to how using HTTPS means you don't have to trust your ISP to not modify or inspect traffic.</p>
<p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/ipfs-blog-post-image-1-copy@3.5x--1-.png" class="kg-image" alt="End-to-End Integrity with IPFS"></figure><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/ipfs-blog-post-image-2-copy@3.5x--1-.png" class="kg-image" alt="End-to-End Integrity with IPFS"></figure><p></p><h3 id="cnamesetupwithuniversalssl">CNAME Setup with Universal SSL</h3>
<p>The first step is to choose a domain name for your website. Websites should be given their own domain name, rather than served directly from the gateway by root hash, so that they are considered a distinct origin by the browser. This is primarily to prevent cache poisoning, but there are several functional advantages as well. It gives websites their own instance of localStorage and their own cookie jar which are sandboxed from inspection and manipulation by malicious third-party documents. It also lets them run Service Workers without conflict, and request special permissions of the user like access to the webcam or GPS. But most importantly, having a domain name makes a website easier to identify and remember.</p>
<p>Now that you've chosen a domain, rather than using it as-is, you’ll need to add &quot;ipfs-sec&quot; as the left-most subdomain. So for example, you'd use &quot;ipfs-sec.example.com&quot; instead of just &quot;example.com&quot;. The ipfs-sec subdomain is special because it signals to the user and to their browser that your website is capable of being served with end-to-end integrity.</p>
<p>In addition to that, ipfs-sec domains require <a href="https://blog.cloudflare.com/dnssec-an-introduction/">DNSSEC</a> to be properly set up to prevent spoofing. Unlike with standard HTTPS, where DNS spoofing can't usually result in a man-in-the-middle attack, with IPFS it can, because the root hash of the website is stored in DNS. Many registrars make enabling DNSSEC as easy as the push of a button, though some don't support it at all.</p>
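<p>If you are not sure whether DNSSEC is actually live for your zone yet, a quick check with <code>dig</code> will tell you (example.com is a placeholder; a non-empty answer means the parent zone is publishing a DS record for your domain):</p><pre><code>dig +short DS example.com
</code></pre>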
<p>With the ipfs-sec domain, you can now follow the <a href="https://developers.cloudflare.com/distributed-web/ipfs-gateway/connecting-website/">developer documentation</a> on how to serve a generic static website from IPFS. Note that you'll need to use a CNAME setup and retain control of your DNS, rather than the easier method of just signing up for Cloudflare. This helps maintain a proper separation between the party managing the DNSSEC signing keys and the party serving content to end-users. Keep in mind that CNAME setups tend to be problematic and get into cases that are difficult to debug, which is why we reserve them for technically sophisticated customers.</p>
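<p>To make the moving pieces concrete, here is a rough sketch of the DNS records such a setup typically ends up with. The hash, key tag and digest are placeholders, and the exact record names and CNAME target are whatever the developer documentation specifies; the point is only to show the shape of the configuration:</p><pre><code>; in your own (DNSSEC-signed) zone
ipfs-sec.example.com.          IN CNAME cloudflare-ipfs.com.
_dnslink.ipfs-sec.example.com. IN TXT   "dnslink=/ipfs/QmYourSiteRootHash"

; published at your registrar, in the parent zone
example.com.                   IN DS    12345 13 2 0123456789abcdef...
</code></pre>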
<p>You should now be able to access your website over HTTP and HTTPS, backed by our gateway.</p>
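<p>A quick smoke test from the command line (the hostname is the placeholder from earlier):</p><pre><code>curl -I https://ipfs-sec.example.com/
</code></pre><p>A 200 response here means the gateway is resolving your domain and serving your content.</p>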
<h3 id="verifyingwhatthegatewayserves">Verifying what the Gateway Serves</h3>
<p>HTTPS helps make sure that nobody between the user and Cloudflare's edge network has tampered with the connection, but it does nothing to make sure that Cloudflare actually serves the content the user asked for. To solve this, we built two connected projects: a modified gateway service and a browser extension.</p>
<p>First, we <a href="https://github.com/cloudflare/go-ipfs">forked the go-ipfs repository</a> and gave it the ability to offer cryptographic proofs that it was serving content honestly, which it will do whenever it sees requests with the HTTP header <code>X-Ipfs-Secure-Gateway: 1</code>. The simplest case for this is when users request a single file from the gateway by its hash -- all the gateway has to do is serve the content and any metadata that might be necessary to re-compute the given hash.</p>
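<p>For example, a client that wants proofs alongside the content could request it like this (the hash is a placeholder for whatever content is being fetched):</p><pre><code>curl -H "X-Ipfs-Secure-Gateway: 1" https://cloudflare-ipfs.com/ipfs/QmSomeContentHash
</code></pre>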
<p>A more complicated case is when users request a file from a directory. Luckily, directories in IPFS are just files containing a mapping from file name to the hash of the file, and very large directories can be transparently split up into several smaller files, structured into a search tree called a <a href="https://idea.popcount.org/2012-07-25-introduction-to-hamt/">Hash Array Mapped Trie (HAMT)</a>. To convince the client that the gateway is serving the contents of the correct file, the gateway first serves the file corresponding to the directory, or every node in the search path if the directory is a HAMT. The client can hash this file (or search tree node), check that it equals the hash of the directory they asked for, and look up the hash of the file they want from within the directory's contents. The gateway then serves the contents of the requested file, which the client can now verify because it knows the expected hash.</p>
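<p>In pseudocode-ish JavaScript, the flat-directory version of that check looks like the sketch below. The <code>fetchBlock</code>, <code>hashOf</code>, and <code>parseDirectory</code> helpers are hypothetical stand-ins supplied by the caller (real IPFS directories are DAG-PB or HAMT nodes, and the HAMT search-path walk is elided), but the shape of the verification is the same:</p><pre><code>// Conceptual sketch of a verified "file out of a directory" lookup. The three
// helpers passed in (fetchBlock, hashOf, parseDirectory) are hypothetical
// stand-ins: real IPFS directories are DAG-PB or HAMT nodes, not simple maps.
async function verifiedLookup({ fetchBlock, hashOf, parseDirectory }, dirHash, fileName) {
  const dirBlock = await fetchBlock(dirHash);            // the directory node itself
  if ((await hashOf(dirBlock)) !== dirHash) {
    throw new Error('directory does not match its hash');
  }

  const entries = parseDirectory(dirBlock);              // map: file name -&gt; child hash
  const fileHash = entries[fileName];
  if (!fileHash) throw new Error('no such file in this directory');

  const fileBlock = await fetchBlock(fileHash);          // now fetch the file itself
  if ((await hashOf(fileBlock)) !== fileHash) {
    throw new Error('file does not match its hash');
  }
  return fileBlock;                                      // verified end to end
}
</code></pre>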
<p>Finally, the most complicated case by far is when the client wants to access content by domain name. It's complicated because the protocol for authenticating DNS, called DNSSEC, has very few client-side implementations. DNSSEC is also not widely deployed, even though some registrars make it even easier than setting up HTTPS. In the end, we wrote our own simple DNSSEC-validating resolver that's capable of outputting a cryptographically convincing proof that it performed a given lookup correctly.</p>
<p>It works the same way as certificate validation in HTTPS: we start at the bottom, with a signature from some authority claiming to be example.com over the DNS records they want us to serve. We then look up a delegation (DS record) from an authority claiming to be .com, which says &quot;example.com is the authority with these public keys&quot; and which is in turn signed by the .com authority's private key. And finally, we look up a delegation from the root authority, ICANN (whose public keys we already have), attesting to the public keys used by the .com authority. All of these lookups bundled together form an authenticated chain starting at ICANN and ending at the exact records we want to serve. These constitute the proof.</p>
<p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/IPFS-tech-post-@3.5x.png" class="kg-image" alt="End-to-End Integrity with IPFS"><figcaption><em>Chain of trust in DNSSEC.</em></figcaption></figure><br><p>The second project we built out was a browser extension that requests these proofs from IPFS gateways and ipfs-sec domains, and is capable of verifying them. The extension uses the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest">webRequest API</a> to sit between the browser's network stack and its rendering engine, preventing any unexpected data from being shown to the user or unexpected code from being executed. The code for the browser extension is <a href="https://github.com/cloudflare/ipfs-ext">available on Github</a>, and can be installed through <a href="https://addons.mozilla.org/en-US/firefox/addon/cloudflare-ipfs-validator/">Firefox's add-on store</a>. We’re excited to add support for Chrome as well, but that can’t move forward until <a href="https://bugs.chromium.org/p/chromium/issues/detail?id=487422">this ticket</a> in their bug tracker is addressed.</p>
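<p>To give a sense of what sitting between the network stack and the rendering engine looks like, here is a rough sketch built on Firefox's <code>webRequest.filterResponseData</code> stream filter. The <code>verify</code> function is a hypothetical placeholder for the proof check, and the real extension also adds the <code>X-Ipfs-Secure-Gateway</code> request header, which is elided here; this is a simplification, not the extension's actual code:</p><pre><code>// Rough sketch (Firefox WebExtension): hold back the response body until it
// has been verified. `verify` is a hypothetical placeholder for the proof check.
browser.webRequest.onBeforeRequest.addListener(
  details =&gt; {
    const filter = browser.webRequest.filterResponseData(details.requestId);
    const chunks = [];

    filter.ondata = event =&gt; chunks.push(new Uint8Array(event.data));
    filter.onstop = async () =&gt; {
      const total = chunks.reduce((n, c) =&gt; n + c.length, 0);
      const body = new Uint8Array(total);
      let offset = 0;
      for (const c of chunks) { body.set(c, offset); offset += c.length; }

      if (await verify(details.url, body)) {
        filter.write(body);        // only verified bytes reach the renderer
      }                            // otherwise the page simply gets nothing
      filter.close();
    };
  },
  { urls: ['https://cloudflare-ipfs.com/*'] },
  ['blocking']
);
</code></pre>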
<p>On the other hand, if a user doesn't have the extension installed, the gateway won't see the <code>X-Ipfs-Secure-Gateway</code> header and will serve the page like a normal website, without any proofs. This provides a graceful upgrade path to using IPFS, either through our extension that uses a third-party gateway or perhaps another browser extension that runs a proper IPFS node in-browser.</p>
<h3 id="exampleapplication">Example Application</h3>
<p>My favorite website on IPFS so far has been the <a href="https://cloudflare-ipfs.com/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/">mirror of English Wikipedia</a> put up by <a href="https://ipfs.io/blog/24-uncensorable-wikipedia/">the IPFS team at Protocol Labs</a>. It's fast, fun to play with, and above all has practical utility. One problem that stands out though, is that the mirror has no search feature so you either have to know the URL of the page you want to see or try to find it through Google. The <a href="https://ipfs.io/ipfs/QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX/wiki/Anasayfa.html">Turkish-language mirror</a> has in-app search but it requires a call to a dynamic API on the same host, and doesn't work through Cloudflare's gateway because we only serve static content.</p>
<p>I wanted to provide an example of the kinds of secure, performant applications that are possible with IPFS, and this made building a search engine seem like a prime candidate. Rather than steal Protocol Labs' idea of 'Wikipedia on IPFS', we decided to take the <a href="http://www.kiwix.org/">Kiwix</a> archives of all the different StackExchange websites and build a distributed search engine on top of that. You can play with the finished product here: <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com">ipfs-sec.stackexchange.cloudflare-ipfs.com</a>.</p>
<p>The way it's built is actually really simple, at least as far as search engines go: We build an inverted index and publish it with the rest of each StackExchange, along with a JavaScript client that can read the index and quickly identify documents that are relevant to a user's query. Building the index takes two passes over the data:</p>
<ol>
<li>The first pass decides what words/tokens we want to allow users to search by. Tokens shouldn't be too popular (like the top 100 words in English), because then the list of all documents containing that token is going to be huge and it's not going to improve the search results anyway. They also shouldn't be too rare (like a timestamp with sub-second precision), because then the index will be full of meaningless tokens that occur in only one document each. You can get a good estimate of the K most frequent tokens, using only constant space, with the really simple space-saving algorithm from <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.114.9563&amp;rep=rep1&amp;type=pdf">this paper</a> (sketched just after this list).</li>
<li>Now that the first pass has given us the tokens we want to track, the second pass through the data actually builds the inverted index. That is, it builds a map from every token to the list of documents that contain that token, called a postings list. When a client wants to find only documents that contain some set of words/tokens, they download the list for each individual token and intersect them. It sounds less efficient than it is -- in reality, the postings lists are negligibly small (&lt;30kb) even in the worst case. And the browser can 'pipeline' the requests for the postings lists (meaning, send them all off at once), which makes getting a response to several requests about as fast as getting a response to one.</li>
</ol>
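<p>Here is a minimal sketch of that first pass, following the space-saving idea from the paper: keep at most <code>k</code> counters, and when an unseen token arrives while the table is full, evict the current minimum and inherit its count. (A real implementation would use the paper's stream-summary structure rather than the linear scan below, and would then drop the too-popular and too-rare tokens.)</p><pre><code>// Minimal sketch of the space-saving algorithm: approximate counts for the
// k most frequent tokens using only k counters.
function frequentTokens(tokenStream, k) {
  const counts = new Map();                     // token -&gt; approximate count
  for (const token of tokenStream) {
    if (counts.has(token)) {
      counts.set(token, counts.get(token) + 1);
    } else if (counts.size &lt; k) {
      counts.set(token, 1);
    } else {
      // Table is full: evict the current minimum and take over its count.
      let minToken = null;
      let minCount = Infinity;
      for (const [t, c] of counts) {
        if (c &lt; minCount) { minCount = c; minToken = t; }
      }
      counts.delete(minToken);
      counts.set(token, minCount + 1);          // counts may over-estimate, never under
    }
  }
  return counts;
}
</code></pre>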
<p>We also store some simple statistics in each postings list to help rank the results. Essentially, documents that contain a query token more often are ranked higher than those that contain it less often. And among the tokens in a query, tokens that occur in fewer documents have a stronger effect on ranking than tokens that occur in many documents. That's why, when I search for <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/crypto/search.html?q=AES+SIV">&quot;AES SIV&quot;</a>, the first result that comes back is:</p>
<ul>
<li><a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/crypto/A/question/54413.html">&quot;Why is SIV a thing if MAC-and-encrypt is not the most secure way to go?&quot;</a></li>
</ul>
<p>while the following is only the fourth result, even though it has a greater number of total hits for the query terms than the first result:</p>
<ul>
<li><a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/crypto/A/question/31835.html">&quot;Why is AES-SIV not used, but AESKW, AKW1?&quot;</a></li>
</ul>
<p>(AES is a very popular and frequently discussed encryption algorithm, while SIV is a lesser-known way of using AES.)</p>
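<p>The ranking heuristic described above is essentially a TF-IDF-style score. As a rough sketch, and not necessarily the exact formula our client uses: weight each token by how rare it is across the corpus, multiply by how often it appears in each document, and sum per document.</p><pre><code>// Rough TF-IDF-style scoring sketch. postingsForQuery maps each query token to
// its postings list: an array of { doc, hits } entries. totalDocs is the corpus size.
function rankResults(postingsForQuery, totalDocs) {
  const scores = new Map();                            // doc -&gt; accumulated score
  for (const postings of postingsForQuery.values()) {
    const idf = Math.log(totalDocs / postings.length); // rare tokens weigh more
    for (const { doc, hits } of postings) {
      scores.set(doc, (scores.get(doc) || 0) + hits * idf);
    }
  }
  return Array.from(scores.entries()).sort((a, b) =&gt; b[1] - a[1]); // best first
}
</code></pre>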
<p>But this is what really makes it special: because the search index is stored in IPFS, the user can convince themselves that no results have been modified, re-arranged, or omitted without having to download the entire corpus. There's one small trick to making this statement hold true: All requests made by the search client must succeed, and if they don't, it outputs an error and no search results.</p>
<p>To understand why this is necessary, think about the search client when it first gets the user's query. It has to tokenize the query and decide which postings lists to download, where not all words in the user's query may be indexed. A naive solution is to try to download the postings list for every word unconditionally, and interpret a non-200 HTTP status code as &quot;this postings list must not exist&quot;. In this case, a network adversary could block the search client from being able to access postings lists that lead to undesirable results, causing the client to output misleading search results either through omission or re-arranging.</p>
<p>What we do instead is store the dictionary of every indexed token in a file in the root of the index. The client can download the dictionary once, cache it, and use it for every search afterwards. This way, the search client can consult the dictionary to find out which requests should succeed and only send those.</p>
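<p>Putting those pieces together, the client's query path looks roughly like the sketch below. The index layout assumed here (a <code>dictionary.json</code> plus one JSON postings file per token) is a simplification of the real index, but the important properties are the ones just described: consult the dictionary first, fire all the postings requests at once, and fail closed if any of them doesn't succeed.</p><pre><code>// Sketch of the client's query path. The file layout (dictionary.json and
// postings/&lt;token&gt;.json) is a hypothetical simplification of the real index.
async function search(indexBase, query) {
  const words = await (await fetch(indexBase + '/dictionary.json')).json();
  const dictionary = new Set(words);

  // Only ask for postings lists the dictionary says exist.
  const tokens = query.toLowerCase().split(/\W+/).filter(t =&gt; dictionary.has(t));

  // Send every request at once, and treat any failure as a hard error so an
  // attacker can't silently hide results by blocking individual lists.
  const lists = await Promise.all(tokens.map(async token =&gt; {
    const res = await fetch(indexBase + '/postings/' + token + '.json');
    if (!res.ok) throw new Error('missing postings list for ' + token);
    return new Set(await res.json());          // document ids containing the token
  }));

  // Intersect the lists: keep documents that contain every query token.
  if (lists.length === 0) return [];
  let result = Array.from(lists[0]);
  for (const list of lists.slice(1)) {
    result = result.filter(doc =&gt; list.has(doc));
  }
  return result;
}
</code></pre>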
<h3 id="fromhere">From Here</h3>
<p>We were incredibly excited when we realized the new avenues and types of applications that combining IPFS with Cloudflare opens up. Of course, our IPFS gateway and the browser extension we built will need time to mature into a secure and reliable platform. But the ability to securely deliver web pages through third-party hosting providers and CDNs is incredibly powerful, and it's something cryptographers and internet security professionals have been working towards for a long time. As much fun as we had building it, we're even more excited to see what you build with it.</p>
<p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/CRYPTO-WEEK-banner-plus-logo@2x.png" class="kg-image" alt="End-to-End Integrity with IPFS"></figure><p></p>]]></content:encoded></item><item><title><![CDATA[Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway]]></title><description><![CDATA[Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer.]]></description><link>https://blog.cloudflare.com/distributed-web-gateway/</link><guid isPermaLink="false">5b9c08e3c24d3800bf438bb6</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[IPFS]]></category><category><![CDATA[Universal SSL]]></category><category><![CDATA[SSL]]></category><category><![CDATA[DNSSEC]]></category><category><![CDATA[HTTPS]]></category><dc:creator><![CDATA[Andy Parker]]></dc:creator><pubDate>Mon, 17 Sep 2018 13:01:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/ipfs-gateway-header@3.5x-2.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/ipfs-gateway-header@3.5x-1-1.png" class="kg-image" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/ipfs-gateway-header@3.5x-2.png" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"><p>Today we’re excited to introduce Cloudflare’s IPFS Gateway, an easy way to access content from the InterPlanetary File System (IPFS) that doesn’t require installing and running any special software on your computer. We hope that our gateway, hosted at <a href="https://cloudflare-ipfs.com">cloudflare-ipfs.com</a>, will serve as the platform for many new highly-reliable and security-enhanced web applications. The IPFS Gateway is the first product to be released as part of our <a href="https://www.cloudflare.com/distributed-web-gateway">Distributed Web Gateway</a> project, which will eventually encompass all of our efforts to support new distributed web technologies.</p><p>This post will provide a brief introduction to IPFS. We’ve also written an accompanying blog post <a href="https://blog.cloudflare.com/e2e-integrity">describing what we’ve built</a> on top of our gateway, as well as <a href="https://developers.cloudflare.com/distributed-web/">documentation</a> on how to serve your own content through our gateway with your own custom hostname.</p><h3 id="quick-primer-on-ipfs">Quick Primer on IPFS</h3><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/spaaaace-ipfs@3.5x-1.png" class="kg-image" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"></figure><p></p><p>Usually, when you access a website from your browser, your browser tracks down the origin server (or servers) that are the ultimate, centralized repository for the website’s content. It then sends a request from your computer to that origin server, wherever it is in the world, and that server sends the content back to your computer. 
This system has served the Internet well for decades, but there’s a pretty big downside: centralization makes it impossible to keep content online any longer than the origin servers that host it. If that origin server is hacked or taken out by a natural disaster, the content is unavailable. If the site owner decides to take it down, the content is gone. In short, mirroring is not a first-class concept in most platforms (<a href="https://www.cloudflare.com/always-online/">Cloudflare’s Always Online</a> is a notable exception).</p><p>The InterPlanetary File System aims to change that. IPFS is a peer-to-peer file system composed of thousands of computers around the world, each of which stores files on behalf of the network. These files can be anything: cat pictures, 3D models, or even entire websites. Over 5,000,000,000 files have been uploaded to <a href="https://cloudflare-ipfs.com/ipfs/QmWimYyZHzChb35EYojGduWHBdhf9SD5NHqf8MjZ4n3Qrr/Filecoin-Primer.7-25.pdf">IPFS already</a>.</p><h3 id="ipfs-vs-traditional-web">IPFS vs. Traditional Web </h3><p>There are two key differences between IPFS and the web as we think of it today. </p><p>The first is that with IPFS anyone can cache and serve any content—for free. Right now, with the traditional web, most people typically rely on big hosting providers in remote locations to store content and serve it to the rest of the web. If you want to set up a website, you have to pay one of these major services to do this for you. With IPFS, anyone can sign up their computer to be a node in the system and start serving data. It doesn’t matter if you’re working on a Raspberry Pi or running the world’s biggest server. You can still be a productive node in the system.</p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Decentralized-Network-1.png" class="kg-image" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"></figure><p></p><p>The second key difference is that data is content-addressed, rather than location-addressed. That’s a bit of a subtle difference, but the ramifications are substantial, so it’s worth breaking down.</p><p>Currently, when you open your browser and navigate to example.com, you’re telling the browser “fetch me the data stored at example.com’s IP address” (this happens to be 93.184.216.34). That IP address marks where the content you want is stored in the network. You then send a request to the server at that IP address for the “example.com” content and the server sends back the relevant information. So at the most basic level, you tell the network where to look and the network sends back what it found.</p><p>IPFS turns that on its head.</p><p>With IPFS, every single block of data stored in the system is addressed by a cryptographic hash of its contents, i.e., a long string of letters and numbers that is unique to that block. When you want a piece of data in IPFS, you request it by its hash. So rather than asking the network “get me the content stored at 93.184.216.34,” you ask “get me the content that has a hash value of <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>.” (<code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code> happens to be the hash of a .txt file containing the string “I’m trying out IPFS”).</p>
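<p>To put the two models side by side, here is roughly what the difference looks like from a script's point of view, using our gateway at cloudflare-ipfs.com as the HTTPS entry point into IPFS; the URLs are just illustrative:</p><pre><code>// Location-addressed: "give me whatever example.com happens to serve right now"
// versus content-addressed: "give me the bytes with this hash, from anywhere".
(async () =&gt; {
  const byLocation = await fetch('https://example.com/cat.jpg');

  const byContent = await fetch(
    'https://cloudflare-ipfs.com/ipfs/QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy'
  );
})();
</code></pre>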
<h3 id="how-is-this-so-different">How is this so different?</h3><p>Remember that with IPFS, you tell the network what to look for, and the network figures out where to look.</p><h3 id="why-does-this-matter">Why does this matter?</h3><p>First off, it makes the network more resilient. The content with a hash of <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code> could be stored on dozens of nodes, so if one node that was caching that content goes down, the network will just look for the content on another node.</p>
<p>Second, it introduces an automatic level of security. Let’s say you know the hash value of a file you want. So you ask the network, “get me the file with hash <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>” (the example.txt file from above). The network responds and sends the data. When you receive all the data, you can rehash it. If the data was changed at all in transit, the hash value you get will be different from the hash you asked for. You can think of the hash as a unique fingerprint for the file. If you’re sent back a different file than you were expecting to receive, it’s going to have a different fingerprint. This means that the system has a built-in way of knowing whether or not content has been tampered with.</p>
<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/ipfs-blog-post-image-1-copy@3.5x.png" class="kg-image" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"></figure><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/ipfs-blog-post-image-2-copy@3.5x.png" class="kg-image" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"></figure><h3 id="a-note-on-ipfs-addresses-and-cryptographic-hashes"><br>A Note on IPFS Addresses and Cryptographic Hashes</h3><p>Since we’ve spent some time going over why this content-addressed system is so special, it’s worth talking a little bit about how the IPFS addresses are built. Every address in IPFS is a <a href="https://github.com/multiformats/multihash">multihash</a>, which means that the address combines information about both the hashing algorithm used and the hash output into one string. IPFS multihashes have three distinct parts: the first byte of the mulithash indicates which hashing algorithm has been used to produce the hash; the second byte indicates the length of the hash; and the remaining bytes are the value output by the hash function. By default, IPFS uses the <a href="https://en.wikipedia.org/wiki/SHA-2">SHA-256</a> algorithm, which produces a 32-byte hash. This is represented by the string “Qm” in <a href="https://en.wikipedia.org/wiki/Base58">Base58</a> (the default encoding for IPFS addresses), which is why all the example IPFS addresses in this post are of the form “Qm…”.</p><p>While SHA-256 is the standard algorithm used today, this multihash format allows the IPFS protocol to support addresses produced by other hashing algorithms. This allows the IPFS network to move to a different algorithm, should the world discover flaws with SHA-256 sometime in the future. If someone hashed a file with another algorithm, the address of that file would start some characters other than “Qm”. </p><p>The good news is that, at least for now, SHA-256 is believed to have a number of qualities that make it a strong cryptographic hashing algorithm. The most important of these is that SHA-256 is collision resistant. A collision occurs when there are two different files that produce the same hash when run through the SHA-256 algorithm. To understand why it’s important to prevent collisions, consider this short scenario. Imagine some IPFS user, Alice, uploads a file with some hash, and another user, Bob, uploads a different file that happens to produce the exact same hash. If this happened, there would be two different files in the network with the exact same address. So if some third person, Carol, sent out an IPFS request for the content at that address, she wouldn't necessarily know whether she was going to receive Bob’s file or Alice’s file.</p><p>SHA-256 makes collisions extremely unlikely. Because SHA-256 computes a 256-bit hash, there are 2^256 possible IPFS addresses that the algorithm could produce. Hence, the chance that there are two files in IPFS that produce a collision is low. Very low. 
If you’re interested in more details, the <a href="https://en.wikipedia.org/wiki/Birthday_attack#Mathematics">birthday attack</a> Wikipedia page has a cool table showing exactly how unlikely collisions are, given a sufficiently strong hashing algorithm.</p><h3 id="how-exactly-do-you-access-content-on-ipfs">How exactly do you access content on IPFS?</h3><p>Now that we’ve walked through all the details of what IPFS is, you’re probably wondering how to use it. There are a number of ways to access content that’s been stored in the IPFS network, but we’re going to address two popular ones here. The first way is to download IPFS onto your computer. This turns your machine into a node of the IPFS network, and it’s the best way to interact with the network if you want to get down in the weeds. If you’re interested in playing around with IPFS, the Go implementation can be downloaded <a href="https://ipfs.io/docs/install/">here</a>.</p><p>But what if you want access to content that’s stored on IPFS without the hassle of operating a node locally on your machine? That’s where IPFS gateways come into play. IPFS gateways are third-party nodes that fetch content from the IPFS network and serve it to you over HTTPS. To use a gateway, you don’t need to download any software or type any code. You simply open up a browser and type in the gateway’s name and the hash of the content you’re looking for, and the gateway will serve the content in your browser.</p><p>Say you know you want to access the example.txt file from before, which has the hash <code>QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy</code>, and there’s a public gateway that is accessible at <code>https://example-gateway.com</code></p>
<p>To access that content, all you need to do is open a browser and type</p><pre><code>https://example-gateway.com/ipfs/QmXnnyufdzAWL5CqZ2RnSNgPbvCc1ALT73s6epPrRnZ1Xy
</code></pre>
<p>and you'll get back the data stored at that hash. The combination of the /ipfs/ prefix and the hash is referred to as the file path. You always need to provide a full file path to access content stored in IPFS.</p><p></p><h3 id="what-can-you-do-with-cloudflare-s-gateway">What can you do with Cloudflare’s Gateway?</h3><p>At the most basic level, you can access any of the billions of files stored on IPFS from your browser. But that’s not the only cool thing you can do. Using Cloudflare’s gateway, you can also build a website that’s hosted entirely on IPFS, but still available to your users at a custom domain name. Plus, we’ll issue any website connected to our gateway a free SSL certificate, ensuring that each website connected to Cloudflare's gateway is secure from snooping and manipulation. For more on that, check out the <a href="https://developers.cloudflare.com/distributed-web/">Distributed Web Gateway developer docs</a>.</p><p>A fun example we’ve put together uses the <a href="http://www.kiwix.org/">Kiwix</a> archives of all the different StackExchange websites to build a distributed search engine on top of them, using only IPFS. Check it out <a href="https://ipfs-sec.stackexchange.cloudflare-ipfs.com/">here</a>.</p><h3 id="dealing-with-abuse">Dealing with abuse</h3><p>IPFS is a peer-to-peer network, so there is the possibility of users sharing abusive content. This is not something we support or condone. However, just like how Cloudflare works with more traditional customers, Cloudflare’s IPFS gateway is simply a cache in front of IPFS. Cloudflare does not have the ability to modify or remove content from the IPFS network. If any abusive content is found that is served by the Cloudflare IPFS gateway, you can use the standard abuse reporting mechanism described <a href="https://www.cloudflare.com/abuse/">here</a>.</p><h3 id="embracing-a-distributed-future">Embracing a distributed future</h3><p>IPFS is only one of a family of technologies that are embracing a new, decentralized vision of the web. Cloudflare is excited about the possibilities introduced by these new technologies and we see our gateway as a tool to help bridge the gap between the traditional web and the new generation of distributed web technologies headlined by IPFS. By enabling everyday people to explore IPFS content in their browser, we make the ecosystem stronger and support its growth. Just like when Cloudflare launched back in 2010 and changed the game for web properties by providing the security, performance, and availability that was previously only available to the Internet giants, we think the IPFS gateway will provide the same boost to content on the distributed web.</p><p>Dieter Shirley, CTO of Dapper Labs and Co-founder of CryptoKitties, said the following:</p><blockquote>We’ve wanted to store CryptoKitty art on IPFS since we launched, but the tech just wasn’t ready yet. Cloudflare’s announcement turns IPFS from a promising experiment into a robust tool for commercial deployment. Great stuff!</blockquote><p>The IPFS gateway is exciting, but it’s not the end of the road. There are other equally interesting distributed web technologies that could benefit from Cloudflare’s massive global network and we’re currently exploring these possibilities. 
If you’re interested in helping build a better internet with Cloudflare, <strong><a href="https://www.cloudflare.com/careers/">we’re hiring!</a> </strong></p><p><em><a href="https://blog.cloudflare.com/subscribe/" rel="noopener noreferer">Subscribe to the blog</a> for daily updates on our announcements.</em></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week-1-1.png" class="kg-image" alt="Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway"></figure>]]></content:encoded></item><item><title><![CDATA[Welcome to Crypto Week]]></title><description><![CDATA[The Internet is an amazing invention. We marvel at how it connects people, connects ideas, and makes the world smaller. But the Internet isn’t perfect. It was put together piecemeal through publicly funded research, private investment, and organic growth that has left us with an imperfect tapestry.]]></description><link>https://blog.cloudflare.com/crypto-week-2018/</link><guid isPermaLink="false">5b9bfe66c24d3800bf438b9a</guid><category><![CDATA[Crypto Week]]></category><category><![CDATA[Crypto]]></category><category><![CDATA[Security]]></category><category><![CDATA[IPFS]]></category><category><![CDATA[HTTPS]]></category><category><![CDATA[Tor]]></category><category><![CDATA[DNS]]></category><category><![CDATA[Reliability]]></category><dc:creator><![CDATA[Nick Sullivan]]></dc:creator><pubDate>Mon, 17 Sep 2018 13:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week-2018.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/crypto-week_v02.gif" class="kg-image" alt="Welcome to Crypto Week"></figure><img src="https://blog.cloudflare.com/content/images/2018/09/Crypto-Week-2018.png" alt="Welcome to Crypto Week"><p>The Internet is an amazing invention. We marvel at how it connects people, connects ideas, and makes the world smaller. But the Internet isn’t perfect. It was put together piecemeal through publicly funded research, private investment, and organic growth that has left us with an imperfect tapestry. It’s also evolving. People are constantly developing creative applications and finding new uses for existing Internet technology. Issues like privacy and security that were afterthoughts in the early days of the Internet are now supremely important. People are being tracked and monetized, websites and web services are being attacked in interesting new ways, and the fundamental system of trust the Internet is built on is showing signs of age. The Internet needs an upgrade, and one of the tools that can make things better is cryptography.</p><p>Every day this week, Cloudflare will be announcing support for a new technology that uses cryptography to make the Internet better (hint: <a href="https://blog.cloudflare.com/subscribe/">subscribe to the blog</a> to make sure you don't miss any of the news). Everything we are announcing this week is free to use and provides a meaningful step towards supporting a new capability or structural reinforcement. So why are we doing this? Because it’s good for the users and good for the Internet. Welcome to Crypto Week! 
</p><h3 id="day-1-distributed-web-gateway">Day 1: Distributed Web Gateway</h3><ul><li><a href="https://blog.cloudflare.com/distributed-web-gateway/">Cloudflare goes InterPlanetary - Introducing Cloudflare’s IPFS Gateway</a> </li><li><a href="https://blog.cloudflare.com/e2e-integrity/">End-to-End Integrity with IPFS</a></li></ul><h3 id="day-2-dnssec">Day 2: DNSSEC</h3><ul><li><a href="https://blog.cloudflare.com/automatically-provision-and-maintain-dnssec/">Expanding DNSSEC Adoption</a></li></ul><h3 id="day-3-rpki">Day 3: RPKI</h3><ul><li><a href="https://blog.cloudflare.com/rpki/">RPKI - The required cryptographic upgrade to BGP routing</a></li><li><a href="https://blog.cloudflare.com/rpki-details/">RPKI and BGP: our path to securing Internet Routing</a></li></ul><h3 id="day-4-onion-routing">Day 4: Onion Routing</h3><ul><li><a href="https://blog.cloudflare.com/cloudflare-onion-service/">Introducing the Cloudflare Onion Service</a></li></ul><h3 id="day-5-roughtime">Day 5: Roughtime</h3><ul><li><a href="https://blog.cloudflare.com/roughtime/">Roughtime: Securing Time with Digital Signatures</a></li></ul><h2 id="a-more-trustworthy-internet">A more trustworthy Internet</h2><p>Everything we do online depends on a relationship between users, services, and networks that is supported by some sort of trust mechanism. These relationships can be physical (I plug my router into yours), contractual (I paid a registrar for this domain name), or reliant on a trusted third party (I sent a message to my friend on iMessage via Apple). The simple act of visiting a website involves hundreds of trust relationships, some explicit and some implicit. The sheer size of the Internet and number of parties involved make trust online incredibly complex. Cryptography is a tool that can be used to encode and enforce, and most importantly scale these trust relationships.</p><p>To illustrate this, let’s break down what happens when you visit a website. But before we can do this, we need to know the jargon.</p><ul><li><strong>Autonomous Systems (100 thousand or so active)</strong>: An AS corresponds to a network provider connected to the Internet. Each has a unique Autonomous System Number (ASN).</li><li><strong>IP ranges (1 million or so)</strong>: Each AS is assigned a set of numbers called IP addresses. Each of these IP addresses can be used by the AS to identify a computer on its network when connecting to other networks on the Internet. These addresses are assigned by the Regional Internet Registries (RIR), of which there are 5. Data sent from one IP address to another hops from one AS to another based on a “route” that is determined by a protocol called BGP.</li><li><strong>Domain names (&gt;1 billion)</strong>: Domain names are the human-readable names that correspond to Internet services (like “cloudflare.com” or “mail.google.com”). These Internet services are accessed via the Internet by connecting to their IP address, which can be obtained from their domain name via the Domain Name System (DNS).</li><li><strong>Content (infinite)</strong>: The main use case of the Internet is to enable the transfer of specific pieces of data from one point on the network to another. This data can be of any form or type.</li></ul><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/name-to-asn-@3.5x.png" class="kg-image" alt="Welcome to Crypto Week"></figure><p></p><p>When you type a website such as blog.cloudflare.com into your browser, a number of things happen. 
First, a (recursive) DNS service is contacted to get the IP address of the site. This DNS server is configured by your ISP when you connect to the Internet, or it can be a public service such as 1.1.1.1 or 8.8.8.8. A query to the DNS service travels from network to network along a path determined by BGP announcements. If the recursive DNS server does not know the answer to the query, then it contacts the appropriate authoritative DNS services, starting with a root DNS server, down to a top level domain server (such as com or org), down to the DNS server that is authoritative for the domain. Once the DNS query has been answered, the browser sends an HTTP request to its IP address (traversing a sequence of networks), and in response, the server sends the content of the website.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/colorful-crypto-overview--copy-3@3.5x.png" class="kg-image" alt="Welcome to Crypto Week"></figure><p></p><p>So what’s the problem with this picture? For one, every DNS query and every network hop needs to be trusted in order to trust the content of the site. Any DNS query could be modified, a network could advertise an IP that belongs to another network, and any machine along the path could modify the content. When the Internet was small, there were mechanisms to combat this sort of subterfuge. Network operators had a personal relationship with each other and could punish bad behavior, but given the number of networks in existence (<a href="https://www.cidr-report.org/as2.0/autnums.html">almost 400,000 as of this week</a>), this is becoming difficult to scale.</p><p>Cryptography is a tool that can encode these trust relationships and make the whole system reliant on hard math rather than physical handshakes and hopes.</p><h3 id="building-a-taller-tower-of-turtles">Building a taller tower of turtles<br></h3><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/turtles.jpg" class="kg-image" alt="Welcome to Crypto Week"><figcaption><a href="https://www.flickr.com/photos/wwarby/2499825928 ">Attribution 2.0 Generic (CC BY 2.0)</a></figcaption></figure><p>The two main tools that cryptography provides to help solve this problem are cryptographic hashes and digital signatures.</p><p>A hash function is a way to take any piece of data and transform it into a fixed-length string of data, called a digest or hash. A hash function is considered cryptographically strong if it is computationally infeasible (read: very hard) to find two inputs that result in the same digest, and if changing even one bit of the input results in a completely different digest. The most popular hash function that is considered secure is SHA-256, which has 256-bit outputs. For example, the SHA-256 hash of the word “crypto” is</p><p><code>DA2F073E06F78938166F247273729DFE465BF7E46105C13CE7CC651047BF0CA4</code></p>
<p>And the SHA-256 hash of “crypt0” is</p><p><code>7BA359D3742595F38347A0409331FF3C8F3C91FF855CA277CB8F1A3A0C0829C4</code></p>
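<p>If you want to reproduce these digests yourself, a couple of lines of JavaScript (here using Node's built-in <code>crypto</code> module) will do it:</p><pre><code>// Reproduce the digests above with Node's built-in crypto module.
const { createHash } = require('crypto');

for (const word of ['crypto', 'crypt0']) {
  const digest = createHash('sha256').update(word).digest('hex').toUpperCase();
  console.log(word, digest);
}
</code></pre>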
<p>The other main tool is digital signatures. A digital signature is a value that can only be computed by someone with a private key, but can be verified by anyone with the corresponding public key. Digital signatures are a way for a private key holder to “sign,” or attest to the authenticity of, a specific message in a way that anyone can validate.</p><p>These two tools can be combined to solidify the trust relationships on the Internet. By giving private keys to the trusted parties who are responsible for defining the relationships between ASs, IPs, domain names and content, you can create chains of trust that can be publicly verified. Rather than hope and pray, these relationships can be validated in real time at scale.</p><p>Let’s take our webpage loading example and see where digital signatures can be applied.</p><p><strong>Routing</strong>. Time-bound delegation of trust is defined through a system called the RPKI. RPKI defines an object called a Resource Certificate, an attestation that states that a given IP range belongs to a specific ASN for this period of time, digitally signed by the RIR responsible for assigning the IP range. Networks share routes via BGP, and if a route is advertised for an IP that does not conform to the Resource Certificate, the network can choose not to accept it.</p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/roas@3x.png" class="kg-image" alt="Welcome to Crypto Week"></figure><p><br><strong>DNS.</strong> Adding cryptographic assurance to routing is powerful, but if a network adversary can change the content of the data (such as the DNS responses), then the system is still at risk. DNSSEC is a system built to provide a trusted link between names and IP addresses. The root of trust in DNSSEC is the DNS root key, which is managed with an <a href="https://www.iana.org/dnssec/ceremonies">elaborate signing ceremony</a>.</p><p><strong>HTTPS</strong>. When you connect to a site, not only do you want it to be coming from the right host, you also want the content to be private. The Web PKI is a system that issues certificates to sites, allowing you to bind the domain name to a time-bounded private key. Because there are many CAs, additional accountability systems like <a href="https://blog.cloudflare.com/introducing-certificate-transparency-and-nimbus/">certificate transparency</a> need to be involved to help keep the system in check.<br></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/connection-to-asn@3.5x.png" class="kg-image" alt="Welcome to Crypto Week"></figure><p><br>This cryptographic scaffolding turns the Internet into an encoded system of trust. With these systems in place, Internet users no longer need to trust every network and party involved in this diagram; they only need to trust the RIRs, DNSSEC and the CAs (and know the correct time).</p><p>This week we’ll be making some announcements that help strengthen this system of accountability. </p><h2 id="privacy-and-integrity-with-friends">Privacy and integrity with friends</h2><p>The Internet is great because it connects us to each other, but the details of how it connects us are important. The technical choices made when the Internet was designed come with some interesting human implications.</p><p>One implication is <strong>trackability</strong>. Your IP address is contained on every packet you send over the Internet. This acts as a unique identifier for anyone (corporations, governments, etc.) 
to track what you do online. Furthermore, if you connect to a server, that server’s identity is sent in plaintext on the request <strong>even over HTTPS</strong>, revealing your browsing patterns to any intermediary who cares to look.</p><p>Another implication is <strong>malleability</strong>. Resources on the Internet are defined by <em>where</em> they are, not <em>what</em> they are. If you want to go to CNN or BBC, then you connect to the server for cnn.com or bbc.co.uk and validate the certificate to make sure it’s the right site. But once you’ve made the connection, there’s no good way to know that the actual content is what you expect it to be. If the server is hacked, it could send you anything, including dangerous malicious code. HTTPS is a secure pipe, but there’s no inherent way to make sure what gets sent through the pipe is what you expect.</p><p>Trackability and malleability are not inherent features of interconnectedness. It is possible to design networks that don’t have these downsides. It is also possible to build new networks with better characteristics on top of the existing Internet. The key ingredient is cryptography.</p><h3 id="tracking-resilient-networking">Tracking-resilient networking</h3><p>One of the networks built on top of the Internet that provides good privacy properties is Tor. The Tor network is run by a group of users who allow their computers to be used to route traffic for other members of the network. Using cryptography, it is possible to route traffic from one place to another without points along the path knowing both the source and the destination at the same time. This is called <a href="https://en.wikipedia.org/wiki/Onion_routing">onion routing</a> because it involves multiple layers of encryption, like an onion. Traffic coming out of the Tor network is “anonymous” because it could have come from anyone connected to the network. Everyone just blends in, making it hard to track individuals.</p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/Tor-Onion-Cloudflare.png" class="kg-image" alt="Welcome to Crypto Week"></figure><p>Similarly, web services can use onion routing to serve content inside the Tor network without revealing their location to visitors. Instead of using a hostname to identify their network location, so-called onion services use a cryptographic public key as their address. There are hundreds of onion services in use, including the one <a href="https://blog.cloudflare.com/welcome-hidden-resolver/">we use for 1.1.1.1</a> or the one in <a href="https://en.wikipedia.org/wiki/Facebookcorewwwi.onion">use by Facebook</a>.</p><p>Troubles occur at the boundary between the Tor network and the rest of the Internet. This is especially true for users attempting to access services that rely on abuse prevention mechanisms based on reputation. Since Tor is used by both privacy-conscious users and malicious bots, connections from both get lumped together and, as the expression goes, one bad apple ruins the bunch. This unfortunately exposes legitimate visitors to anti-abuse mechanisms like CAPTCHAs. Tools like <a href="https://blog.cloudflare.com/cloudflare-supports-privacy-pass/">Privacy Pass</a> help reduce this burden but don’t eliminate it completely. 
This week we’ll be announcing a new way to improve this situation.</p><h3 id="bringing-integrity-to-content-delivery">Bringing integrity to content delivery</h3><p>Let’s revisit the issue of malleability: the fact that you can’t always trust the other side of a connection to send you the content you expect. There are technologies that allow users to ensure the integrity of content without trusting the server. One such technology is a feature of HTML called <a href="https://www.w3.org/TR/SRI/">Subresource Integrity (SRI)</a>. SRI allows a webpage with sub-resources (like a script or stylesheet) to embed a unique cryptographic hash into the page so that when the sub-resource is loaded, it is checked to see that it matches the expected value. This protects the site from loading unexpected scripts from third parties, <a href="https://blog.cloudflare.com/an-introduction-to-javascript-based-ddos/">a known attack vector</a>.</p><p>Another idea is to flip this on its head: what if instead of fetching a piece of content from a specific location on the network, you asked the network to find a piece of content that matches a given hash? By assigning resources based on their actual content rather than by location, it’s possible to create a network in which you can fetch content from anywhere on the network and still know it’s authentic. This idea is called <em>content addressing</em> and there are networks built on top of the Internet that use it. These content addressed networks, based on protocols such as <a href="https://ipfs.io/">IPFS</a> and <a href="https://datproject.org/">DAT</a>, are blazing a trail for a new trend in Internet applications called the Distributed Web. With Distributed Web applications, malleability is no longer an issue, opening up a new set of possibilities.</p><h3 id="combining-strengths">Combining strengths</h3><p>Networks based on cryptographic principles, like Tor and IPFS, have one major downside compared to networks based on names: usability. Humans are exceptionally bad at remembering or distinguishing between cryptographically-relevant numbers. Take, for instance, the New York Times’ onion address:</p><p><code>https://www.nytimes3xbfgragh.onion/</code></p>
<p>This could easily be confused with similar-looking onion addresses, such as</p><p><code>https://www.nytimes3xfkdbgfg.onion/</code></p>
<p>which may be controlled by a malicious actor.</p><p>Content addressed networks are even worse from the perspective of regular people. For example, there is a snapshot of the Turkish version of Wikipedia on IPFS with the hash:</p><p><code>QmT5NvUtoM5nWFfrQdVrFtvGfKFmG7AHE8P34isapyhCxX</code></p>
<p>Try typing this hash into your browser without making a mistake.</p><p>These naming issues are things Cloudflare is perfectly positioned to help solve.<br>First, by putting the hash address of an IPFS site in the DNS (and adding DNSSEC for trust) you can give your site a traditional hostname while maintaining a chain of trust.</p><p>Second, by enabling browsers to use a traditional DNS name to access the web through onion services, you can provide safer access to your site for Tor users, with the added benefit of being better able to distinguish between bots and humans.<br>With Cloudflare as the glue, it is possible to connect both standard Internet and Tor users to websites and services on both the traditional web and the distributed web.</p><p></p><figure class="kg-card kg-image-card"><img src="https://blog.cloudflare.com/content/images/2018/09/bowtie-diagram-crypto-week-2018-v02_medium-1.gif" class="kg-image" alt="Welcome to Crypto Week"></figure><p></p><p>This is the promise of Crypto Week: using cryptographic guarantees to make a stronger, more trustworthy and more private internet without sacrificing usability.</p><h2 id="happy-crypto-week">Happy Crypto Week</h2><p>In conclusion, we’re working on many cutting-edge technologies based on cryptography and applying them to make the Internet better. The first announcement today is the launch of Cloudflare's <a href="https://blog.cloudflare.com/distributed-web-gateway/">Distributed Web Gateway</a> and <a href="https://blog.cloudflare.com/e2e-integrity/">browser extension</a>. Keep tabs on the Cloudflare blog for exciting updates as the week progresses. </p><p>I’m very proud of the team’s work on Crypto Week, which was made possible by a dedicated team, including several brilliant interns. If this type of work is interesting to you, Cloudflare is hiring for the <a href="https://boards.greenhouse.io/cloudflare/jobs/634967?gh_jid=634967">crypto team</a> and <a href="https://www.cloudflare.com/careers/">others</a>!</p>]]></content:encoded></item><item><title><![CDATA[JAMstack podcast episode: Listen to Cloudflare's Kenton Varda speak about originless code]]></title><description><![CDATA[JAMstack Radio is a show all about the JAMstack, a new way to build fast & secure apps or websites. In the most recent episode, the host, Brian Douglas, met with our own Kenton Varda to discuss some of the infinite uses for running code at the edge.]]></description><link>https://blog.cloudflare.com/jamstack-podcast-with-kenton-varda/</link><guid isPermaLink="false">5b9ac145c24d3800bf438b65</guid><category><![CDATA[Workers]]></category><category><![CDATA[Serverless]]></category><category><![CDATA[JavaScript]]></category><category><![CDATA[Developers]]></category><dc:creator><![CDATA[Andrew Fitch]]></dc:creator><pubDate>Sat, 15 Sep 2018 15:00:00 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/maxresdefault.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cloudflare.com/content/images/2018/09/maxresdefault.jpg" alt="JAMstack podcast episode: Listen to Cloudflare's Kenton Varda speak about originless code"><p><a href="https://www.heavybit.com/library/podcasts/jamstack-radio/">JAMstack Radio</a> is a show all about the JAMstack, a new way to build fast &amp; secure apps or websites. 
In the most recent episode, the host, <a href="https://twitter.com/bdougieyo">Brian Douglas</a>, met with <a href="https://twitter.com/KentonVarda">Kenton Varda</a>, tech lead for <a href="https://developers.cloudflare.com/workers/">Cloudflare Workers</a> and author of <a href="https://sandstorm.io/">Sandstorm.io</a> to discuss some of the infinite uses for running code at the edge.</p><p>Listen to what Kenton had to say about serverless technology in this twenty two minute podcast here:</p><p><a href="https://www.heavybit.com/library/podcasts/jamstack-radio/ep-31-originless-code-with-cloudflares-kenton-varda/"><img src="https://blog.cloudflare.com/content/images/2018/09/Screen-Shot-2018-09-13-at-1.30.17-PM-1.png" alt="JAMstack podcast episode: Listen to Cloudflare's Kenton Varda speak about originless code"></a></p>
<p>Here's the transcript of the podcast as well:</p><p><strong><strong>Brian Douglas: </strong></strong>Welcome to another installment of JAMstack Radio. In the room I've got Kenton Varda from Cloudflare.</p><p><strong><strong>Kenton Varda: </strong></strong>Thanks for having me.</p><p><strong><strong>Brian: </strong></strong>Thanks for coming all the way across San Francisco to chat with me in person. I'm curious who Kenton is, but I'm also curious what Cloudflare is. Can you answer both questions? Let's start with, "Who is Kenton?"</p><p><strong><strong>Kenton: </strong></strong>I'm an engineer. I'm the architect of Cloudflare Workers. In a past life I worked for Google for several years. I was once known as the "protocol buffers guy," I was the one who open sourced that. And I founded a company called Sandstorm that was later acquired by Cloudflare.</p><p><strong><strong>Brian: </strong></strong>I'm familiar. I remember Sandstorm. Well, I remember the name and I vaguely remember that the acquisition happened. Interesting. You founded Sandstorm, you said?</p><p><strong><strong>Kenton: </strong></strong>Yep. Jade Wang and I.</p><p><strong><strong>Brian: </strong></strong>OK, yeah. I know Jade. I met Jade not too long ago. But that's a lot of inside baseball. How about Cloudflare? Now you're a part of Cloudflare. What is that thing?</p><p><strong><strong>Kenton: </strong></strong>We have computers in 151 locations today and rapidly expanding, thousands of locations in the future. And we let you take that and put that network in front of your website in order to provide a few things. One is it's a large HTTP cache. We can cache your static content at the "edge." We call it the edge. The locations close to the end user, so that they can receive that content quickly.</p><p>We have a web application firewall which blocks malicious traffic, we have DDoS protection. We have absorbed the largest DDoS attacks in the world without any trouble and a whole bunch of other features. There's a long list of things that are implemented as a proxy in these locations before the requests go to your "origin server," as we call it.</p><p><strong><strong>Brian: </strong></strong>OK, cool. And you guys have all these locations, do you guys own servers? Are you guys building these things out?</p><p><strong><strong>Kenton: </strong></strong>Yes. We build the hardware and we send them to a variety of different types of locations. Sometimes it's ISPs that want to have our machines there, so they can serve their customers faster and use less bandwidth upstream. Sometimes it's data centers. It's a variety.</p><p><strong><strong>Brian: </strong></strong>Cool. So, I'm curious. Your background is in a lot of infrastructure, too? Coming from Sandstorm, and now Cloudflare?</p><p><strong><strong>Kenton: </strong></strong>Yep. And at Google. I've always done a little infrastructure. Search infrastructure at Google, access control infrastructure, key management. Lots of things.</p><p><strong><strong>Brian: </strong></strong>Which makes sense on why you mentioned you're now "principal architect."</p><p><strong><strong>Kenton: </strong></strong>We don't have titles.</p><p><strong><strong>Brian: </strong></strong>You don't have titles? OK. I was going to ask if there is a Vice Principal Architect at all.</p><p><strong><strong>Kenton: </strong></strong>I've taken to calling myself the architect of Cloudflare Workers, descriptively. 
When Sandstorm was acquired by Cloudflare in March of 2017, we came in and I was told, "We'd like to find a way to let people run code on our servers securely and quickly. But we don't know how to do it. What do you think?" And so I started that project, and built it out, and exactly a year after I joined we launched it on March 13 of this year.</p><p><strong><strong>Brian: </strong></strong>Nice! Congratulations. This is the exact reason why I had you come on. Because Cloudflare Workers is something that I was aware of in the alpha or beta phase when it first was mentioned. I played around with the trial.</p><p>I want you to explain Cloudflare Workers, but also before you do that I want to explain what I did. Which is very trivial. Workers sit on the edge, and I made a Worker to change the word "cloud" to "butt."</p><p><strong><strong>Kenton: </strong></strong>Classic.</p><p><strong><strong>Brian: </strong></strong>It's very classic because the site that I was testing it on was Cloudflare.com. Whatever it replaced was pretty hilarious. I showed everybody in Slack, and then I moved on and never thought of it, until recently. Could you explain what Cloudflare Workers are?</p><p><strong><strong>Kenton: </strong></strong>While you were working with the preview, which lets you see what your Worker would do to any random site. But normally you'd run these on your own site. Cloudflare Worker is a piece of JavaScript that you write that can receive HTTP requests that are destined for your domain, but receives them on Cloudflare servers at the edge, close to the end user.</p><p>It can run arbitrary code there. It can forward the request on to your origin, or it can decide to respond directly, or you can even make a variety of outbound requests to third-party APIs and do whatever you want.</p><blockquote>HTTP in, HTTP out, arbitrary code in between. </blockquote><p><strong><strong>Brian: </strong></strong>Are there limitations to the JavaScript? Because when you say you could run JavaScript on the edge or on Cloudflare servers, this sounds dangerous. But you also prefaced this too, as well as the security aspect of it. People want to have that. How do you solve that problem?</p><p><strong><strong>Kenton: </strong></strong>Right. This is the reason why it is JavaScript. We have a lot of customers and they all want to run their code in every location of ours. We need to make sure that we can run lots and lots of different scripts but not allow them to interfere with each other. Each one has to be securely sandboxed.</p><p>There's a lot of technologies out there for doing that. But the one that has received by far the most scrutiny, and the most real-world battle testing over the years, would be the <a href="https://developers.google.com/v8/">V8 JavaScript engine</a> from Google Chrome. We took that and embedded it in a new server environment written in C++ from scratch.</p><p>We didn't use Node.js because Node.js is not a sandbox. Not intended for this scenario. So we built something new. The JavaScript runs in a normal JavaScript sandbox and it is limited to an API that only lets it receive HTTP requests, and send HTTP requests to the internet. It does not allow it to see the local file system or interfere with anything else that might be running on that machine.</p><p><strong><strong>Brian: </strong></strong>OK. Can we talk about use cases for Cloudflare Workers? What would somebody besides somebody like myself who spent all that time writing a joke app. Or Worker, rather. 
What are some use cases for running JavaScript on the edge?</p><p><strong><strong>Kenton: </strong></strong>Well, it's arbitrary code. There are infinite use cases. But I can tell you some common ones.</p><blockquote> Some people just need to do some silly rewrite of some headers because it's easier to push something to Cloudflare than it is to update their own origin servers. </blockquote><p>When you write the script and you submit it through the Cloudflare UI, it is deployed globally in 30 seconds. That's it. Boom, it's done. So that's an easy way to get things done, but it's the less interesting use case.</p><p>More interesting is that you can do things like route requests. Say you're hosting your website out of S3 or Google Cloud Storage. You can write a Worker that fetches the content from there and then serves it as your website, and not actually have an origin server.</p><p>Another thing people like to do is optimize their usage of Cloudflare's cache. Historically an HTTP cache is a very fixed-function thing. You can't serve cached content but also have it be personalized. So say you're on a news site, and people have to log in because it's paid content, and then you want to display the site to them but at the top you want to say, "Hi. You're logged in as..." whoever.</p><p>Your content on a news site is very cacheable. But all of a sudden it can't be cached anymore because you're personalizing it. Well, you can do that personalization in a Worker after it's already come out of cache at the edge, and therefore serve your site much faster and use much less bandwidth.</p><p>But going beyond that, we've had people do HTML template rendering directly at the edge based on API requests. That will save a lot of bandwidth.</p><p><strong><strong>Brian: </strong></strong>That's a common use case for Apache servers where you'd take the JavaScript cookie and check to see who you are, where you came from and maybe even your location. And then be able to decide what to render based on the user. It sounds like something super complicated that was used very heavily with servers, and now you can just do it on Cloudflare's side.</p><p><strong><strong>Kenton: </strong></strong>Or A/B testing. That's another thing that doesn't play well with caches, because you're serving different people different content for the same URL. You can implement that in a Worker now and you can take advantage of the cache.</p><p><strong><strong>Brian: </strong></strong>Before we started recording I mentioned I had seen a talk at Apollo from the product manager for the team, whose name escapes me--</p><p><strong><strong>Kenton: </strong></strong>Jonathan Bruce.</p><p><strong><strong>Brian: </strong></strong>Jonathan, yes. He explained and he went through a couple of use case examples, and I saw A/B testing was one of them as well. It's nice to see a lot of this work move away from the servers. Not that they're trivial, but it sounds like an easier approach.</p><p><strong><strong>Kenton: </strong></strong>Yes. Speaking of Apollo, over time we're seeing these use cases get more and more complicated. People started out doing very simple things. But Apollo is a great case. They've taken Apollo GraphQL, they call it Apollo Server, it's a gateway for GraphQL.</p><blockquote> Your GraphQL queries go in, and then it federates out to your REST endpoints behind that. </blockquote><p>They've managed to run the whole thing in a Worker.
Which means that can now run on Cloudflare's "edge" and take advantage of the cache, whereas previously GraphQL queries generally weren't cacheable, because they're all POST requests and often each one's a little bit different and not canonicalized. And now you can fix that with code running at the edge.</p><p><strong><strong>Brian: </strong></strong>Sorry to zoom back, because I have a lot of experience with Apollo and GraphQL as well. Is that what Apollo is doing personally? Or is this the preferred way for them to cache GraphQL queries when they're using Apollo Server?</p><p><strong><strong>Kenton: </strong></strong>They are working on a version of Apollo Server that runs at the edge.</p><p><strong><strong>Brian: </strong></strong>OK.</p><p><strong><strong>Kenton: </strong></strong>It's not released yet. But soon.</p><p><strong><strong>Brian: </strong></strong>I need to have them on for a follow-up conversation. They were on this podcast quite a few episodes ago, so I definitely want to have them talk more about what they're doing on the server side, which is really cool.</p><p>You mentioned Google and you mentioned your experience, so it sounds like you've been working on the web and within servers for a while. I'm curious if we could take time to zoom out. You're working on Cloudflare Workers. What's your thought on where the web's going moving forward?</p><p>The reason we do this podcast, JAMstack Radio, which is JavaScript, APIs and Markup, is because I personally think there's a shift of a lot of the processing and a lot of the work moving towards the front end.</p><p>I would consider having Cloudflare own Workers as something I don't have to worry about, so I don't have to deal with it; it's an API. Do you see a shift of a lot of major companies using something like a Worker instead of running their own servers going forward?</p><p><strong><strong>Kenton: </strong></strong>Serverless has been a popular term lately.</p><p><strong><strong>Brian: </strong></strong>Very popular.</p><p><strong><strong>Kenton: </strong></strong>People have realized that it's a lot of effort to maintain a server that's not really doing anything useful. You would prefer to just be writing the code that's specific to your application, and not thinking about, "How do I initialize my server?" or, "What dependencies do I need to build in here?" And there has been this shift towards something called serverless.</p><p>My colleague at Cloudflare, Zack Bloom, likes to talk about going a step beyond that to what we call "originless." In serverless, like with Amazon Lambda, you still choose a location in the world where your function is running.</p><p>You're no longer managing individual servers, but you still have an origin. Typically us-east-1, in Virginia. What I would like to see is, you don't think about where your code runs at all.</p><blockquote>You write code and it just runs everywhere. That's what we call "originless," and that's what Cloudflare does. </blockquote><p>Because when you deploy code to Cloudflare you do not choose which of our 151 locations it runs in, it is deployed to all of them and it'll run in whichever one receives the request from the user. Whichever one is closest to the user.</p><p>And it's not just about being close to the user, but also if you have code that interacts heavily with a particular API. Say the Stripe API, or the Twilio API.
It would be great if that code could automatically run next to the servers that are implementing that API without you having to think about that.</p><p>I should not decide that my servers are going to run in Virginia, when my servers are talking to people whose locations I don't even know. So that's where I'd like to see it. People have been talking a lot about "edge" compute lately. We consider Workers to be an "edge" compute platform.</p><p>But it's funny, because Peter Levine at Andreessen Horowitz said, "The cloud is dead. The new thing is edge compute." But it seems to me that this idea that your code just runs everywhere is what the cloud was always supposed to be in the first place. That's what the metaphor meant. It's not in a specific place, it's everywhere. To me, we're finally getting there.</p><p><strong><strong>Brian: </strong></strong>"Originless." I'm not sure if it's going to pick up as much steam as serverless. I think that one's got really good feet inside the marketing jaws of tech. But I like "originless," I like the fact that you can ship code and not have to worry about it. Me personally, I'm a tinkerer.</p><p>I've shipped a lot of JavaScript as of late, in the last couple of years. I don't want to have to deal with the problems and the headaches of trying to manage my own thing. Just for example, I just cloned a project that happened to have MySQL as a dependency. I had to brew install that thing and for whatever reason it still didn't work. Dependencies just weren't jiving together. It was a Ruby project, so they weren't jiving together.</p><blockquote>But I shouldn't have to think about this as someone coming in four years later down the road trying to commit to this project. I just want to ship code. </blockquote><p>I like the fact that I don't have to worry about things like caching and managing my headers and stuff like that. If I can tap into all that tech talent at Cloudflare, and the people who are building these cool projects to handle that for me, and pay a small fee, hopefully.</p><p>Actually, that's a good question. Cloudflare Workers, is it an add-on feature once I have a Cloudflare account? How do I get access to this?</p><p><strong><strong>Kenton: </strong></strong>It's available to all Cloudflare accounts. The pricing is 50 cents per million requests handled, with a minimum of $5 per month. You pay $5, you get your first 10 million, then 50 cents per million after that.</p><p><strong><strong>Brian: </strong></strong>OK. I'm really excited about the idea of Cloudflare Workers. There are a lot of ideas that could be built with it. Are there any getting started guides or tutorials people can get their feet wet with? With Cloudflare Workers?</p><p><strong><strong>Kenton: </strong></strong>If you go to Developers.Cloudflare.com, or if you just go to CloudflareWorkers.com, there's the preview service. The "fiddle," we call it. It's kind of like JSFiddle. You write some code, that's a Worker, and then you see in real time its effect on any web page that you choose. You can go there and try it out. You don't need a Cloudflare account and you don't need to sign in.</p><p><strong><strong>Brian: </strong></strong>Oh, very cool. Awesome. I'm going to hopefully tinker with that again and build something a little nicer than what I already touched with the Workers. Excited to try that out. Curious, is there anything else Cloudflare is working on in the upcoming future?
I know you're probably super focused on Cloudflare Workers so you don't really have the whole roadmap.</p><p><strong><strong>Kenton: </strong></strong>Well, we're working on lots of things. My next goal with Workers is to introduce some storage. There's not a whole lot specific that I can say about that yet, but the challenge is interesting because we have a network of, as I said, over 150 locations today. In the next few years we expect to exponentially grow that.</p><p>We expect to have machines in every cell tower, more or less. And we want a storage system that can actually take advantage of that. A storage system where each user's data, if you've built a service on Cloudflare and you store data for users that they interact with, each user's data should live at the Cloudflare location that's closest to that user so that they can interact with it with minimal latency.</p><p>But there aren't a lot of storage technologies out there that can scale to hundreds of nodes, much less thousands of nodes, automatically today. It's a new and interesting challenge that I'm working on.</p><p><strong><strong>Brian: </strong></strong>Cool. Exciting. I'm probably going to keep an eye out on the Cloudflare blog; hopefully you guys are keeping that up to date. I look forward to whenever that gets launched or previewed. I'm going to transition us to JAM picks. I think we had a really awesome conversation about Cloudflare Workers.</p><p>These are going to be JAM picks, anything that keeps you jamming, keeps you going. Music picks, we've had a lot of those in the past. Food and tech picks as well. But I will go first.</p><p>My pick is Pinterest. Which sounds very weird to say out loud. We had Zach on here talking about Pinterest and how they're trying to shift towards more of a male demographic, which, listeners, if you didn't know, I identify as male. I've been using Pinterest mainly because I'm expecting a child.</p><p>I'm not picking a bunch of baby stuff and putting it on a board. I'm picking a lot of recipes. I find that on Pinterest, if I type in something I have in my cabinet I can get a bunch of recipes for one ingredient, and it's been super useful. Because I'm going to have some leave that I'm going to be taking.</p><p>So I want to be Mr. Mom, hopefully. I'm going to try to achieve that status and do a lot of cooking. I've been setting myself up to do a lot of Pinterest boarding. I'm not even sure if that's a thing, if that's what they call it.</p><p>My other pick is going to be meal planning. I really like cooking. I work from home a lot. I really like leveraging the idea of cooking. On top of that, I'm going to wrap in one more pick. I'm definitely going to be trying out Cloudflare Workers. I've been using Dropbox Paper, so I have a list of all the side project ideas I want to ship. I have some coding goals that I have during that time.</p><p>So those are my three picks. Kenton, hopefully I stalled long enough that you have decided on the things that you are jamming on.</p><p><strong><strong>Kenton: </strong></strong>I commute up here from Palo Alto on Caltrain every day, which means I get a lot of time to play video games on my Nintendo Switch.</p><p><strong><strong>Brian: </strong></strong>Oh, nice.</p><p><strong><strong>Kenton: </strong></strong>And one of my favorites lately is an indie game called Celeste. It's what I would call an agility platformer. Lots of jumping off walls, boosting and 2D side scrolling.
It is a lot of fun.</p><p><strong><strong>Brian: </strong></strong>I have follow-up questions about that. How long have you had your Switch?</p><p><strong><strong>Kenton: </strong></strong>Since sometime last year. Probably about a year ago.</p><p><strong><strong>Brian: </strong></strong>OK. So you're early on the bandwagon. No, actually you were probably a year into it. I'm curious, how do you enjoy the controller? Do you always play it connected? Or do you separate the controller?</p><p><strong><strong>Kenton: </strong></strong>Yeah, I play it connected because I'm on the train, so I have to hold it.</p><p><strong><strong>Brian: </strong></strong>I just think those little things that come off, those little joypad joystick things are just a little too small for my taste.</p><p><strong><strong>Kenton: </strong></strong>I do have that problem. My hands get sore, especially from this game Celeste. I had a callus on my thumb when I finished playing it.</p><p><strong><strong>Brian: </strong></strong>That's hard core.</p><p><strong><strong>Kenton: </strong></strong>It's intense.</p><p><strong><strong>Brian: </strong></strong>Either it's hard core or you have a super long commute.</p><p><strong><strong>Kenton: </strong></strong>It's about 45 minutes. And then I'm going to say I just got back from vacation last week, where I flew to my hometown of Minneapolis, and I just have to talk up the amazing park system and bike trail system there. Because all I did all week was just bike around. There are hundreds of miles of paved, dedicated bike trails. You don't have to go on streets, and it's just amazing and beautiful in the summer.</p><p><strong><strong>Brian: </strong></strong>Nice. I didn't know that about Minneapolis. I know the whole, is it like, "10,000 lakes?"</p><p><strong><strong>Kenton: </strong></strong>"Land of 10,000 Lakes" is Minnesota. It's probably more like 100,000 lakes. There are a lot of lakes.</p><p><strong><strong>Brian: </strong></strong>And at least a few bike trails.</p><p><strong><strong>Kenton: </strong></strong>Yes.</p><p><strong><strong>Brian: </strong></strong>Awesome. Well, Kenton, thanks for coming on to talk about Cloudflare Workers and the awesome city of Minneapolis. Listeners, keep spreading the jam.</p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Cache API for Cloudflare Workers is now in Beta!]]></title><description><![CDATA[In October of last year we announced the launch of Cloudflare Workers. Workers allow you to run JavaScript from 150+ of Cloudflare’s data centers. This means that from the moment a request hits the Cloudflare network, you have full control over its destiny. 
]]></description><link>https://blog.cloudflare.com/cache-api-for-cloudflare-workers-is-now-in-beta/</link><guid isPermaLink="false">5b72fd982ff3c700bfd98af4</guid><category><![CDATA[Workers]]></category><category><![CDATA[Serverless]]></category><category><![CDATA[Beta]]></category><category><![CDATA[Cache]]></category><category><![CDATA[API]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Rita Kozlov]]></dc:creator><pubDate>Fri, 14 Sep 2018 14:26:51 GMT</pubDate><media:content url="https://blog.cloudflare.com/content/images/2018/09/Screen-Shot-2018-09-14-at-10.30.08-AM.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.cloudflare.com/content/images/2018/09/Screen-Shot-2018-09-14-at-10.30.08-AM.png" alt="Cache API for Cloudflare Workers is now in Beta!"><p>In October of last year we announced the launch of Cloudflare Workers. Workers allows you to run JavaScript from 150+ of Cloudflare’s data centers. This means that from the moment a request hits the Cloudflare network, you have full control over its destiny. One of the benefits of using Workers in combination with Cloudflare’s cache is that Workers allow you to have programmatic, and thus very granular control over the Cloudflare cache. </p><p>You can choose what to cache, how long to cache it for, the source it should be cached from, and you can even modify the cached result after it is retrieved from the cache. </p><p><br>We have seen many of our existing customers use Workers to enhance their usage of the Cloudflare cache, and we have seen many new customers join Cloudflare to take advantage of these unique benefits. </p><h2 id="-re-introducing-the-cache-api">(Re-)Introducing the Cache API</h2><p>You can always have more control, so today we are announcing support for the Cache API! As some of you may know, Cloudflare Workers are built against the existing Service Worker APIs. One of the reasons we originally chose to model Cloudflare Workers after Service Workers was due to the existing familiarity and audience of Service Workers, as well as documentation. </p><p>We’ve received overwhelming feedback and evidence from customers that there are many uses for supporting an implementation modeled after the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Cache">Service Workers Cache API</a>. Today we are opening up a beta to offer our customers the ability to explicitly read and write items in our cache from within their Workers. The capability to do this will allow them to implement virtually any cache semantics they might need.</p><h2 id="so-what-can-you-do-with-the-cache-api">So what can you do with the Cache API?</h2><p></p><h4 id="cache-worker-output">Cache Worker output</h4><p>Workers allow you to fully customize and manipulate a response before it is sent back to the user. Whether you are modifying the response from your origin, or assembling a response based on calls to multiple APIs, you can use the Cache API to cache the output and serve it directly on future similar requests.</p><pre><code class="language-javascript">async function handleRequest(event) {
  let cache = caches.default
  // Check whether this request has already been cached at the edge.
  let response = await cache.match(event.request)
  if (!response) {
    // Not cached yet: do the expensive work (placeholder from the original example),
    // then store the result without delaying the response to the client.
    response = doSuperComputationallyHeavyThing()
    event.waitUntil(cache.put(event.request, response.clone()))
  }
  return response
}
</code></pre>
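<p>The handleRequest snippets in this post assume the usual Workers fetch event registration, shown explicitly in the Cache-Tag example further down. As a minimal sketch, the wiring looks like this:</p><pre><code class="language-javascript">addEventListener('fetch', event =&gt; {
  // Pass the whole FetchEvent (not just the Request) so the handler can call event.waitUntil().
  event.respondWith(handleRequest(event))
})
</code></pre>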
<p></p><h4 id="cache-post-requests">Cache POST requests</h4><p>Cloudflare ordinarily doesn’t cache POST requests because they can change state on a customer’s origin. However, some APIs and frameworks like GraphQL make every call a POST request, including those that do not change state. For these APIs it’s important to enable caching to speed things up.</p><pre><code class="language-javascript">async function handleRequest(event) {
  let cache = caches.default
  // Look the POST request up in the cache.
  let response = await cache.match(event.request)
  if (!response) {
    // On a miss, go to the origin and cache successful responses for next time.
    response = await fetch(event.request)
    if (response.ok) {
      event.waitUntil(cache.put(event.request, response.clone()))
    }
  }
  return response
}
</code></pre>
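<p>One refinement worth noting: the example above uses the POST request itself as the cache key, so requests whose bodies differ may need to be distinguished explicitly. A minimal sketch of one way to do that is to hash the body into a synthetic GET key; the cacheKeyForPost helper and its bodyHash URL format below are illustrative, not part of the API:</p><pre><code class="language-javascript">async function cacheKeyForPost(request) {
  // Hash the POST body so different GraphQL queries map to different cache entries.
  const body = await request.clone().text()
  const digest = await crypto.subtle.digest('SHA-256', new TextEncoder().encode(body))
  const hash = [...new Uint8Array(digest)]
    .map(b =&gt; b.toString(16).padStart(2, '0'))
    .join('')
  // Use a GET request keyed on the URL plus the body hash as the cache key.
  return new Request(request.url + '?bodyHash=' + hash, { method: 'GET' })
}
</code></pre><p>The key returned here would then be passed to cache.match() and cache.put() in place of event.request.</p>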
<p></p><h4 id="set-cache-tag-headers-from-a-worker-enterprise-only-">Set Cache-Tag headers from a Worker (Enterprise only)</h4><p>One of the ways to purge assets within the Cloudflare cache is using <a href="https://support.cloudflare.com/hc/en-us/articles/206596608-How-to-Purge-Cache-Using-Cache-Tags-Enterprise-only-">Cache-Tags</a>. Cache-Tags allow you to group assets by category, version, etc and purge them all at once using a single API call. Cache-Tags were traditionally set using an origin Cache-Tag header. Some backends, however, don’t allow you control over the response headers that are sent, which makes it challenging to set Cache-Tags at the origin. With the Cache API, you can set Cache-Tags directly from a Worker, without having to modify any code at your origin.</p><pre><code class="language-javascript">addEventListener('fetch', event =&gt; {
  event.respondWith(handleRequest(event))
})

/**
 * Fetch a request, add a Cache-Tag header to the response, and cache it
 * @param {FetchEvent} event
 */
async function handleRequest(event) {
  let request = event.request
  let cache = caches.default
  let response = await cache.match(request)
  if (!response) {
    response = await fetch(request)
    if (response.ok) {
      // Re-construct the response so its headers are mutable, then tag it.
      response = new Response(response.body, response)
      response.headers.append('Cache-Tag', 'apple')
      event.waitUntil(cache.put(request, response.clone()))
    }
  }
  return response
}
</code></pre>
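<p>Once responses carry a Cache-Tag header, they can later be invalidated as a group through Cloudflare’s purge-by-tag API call, covered in the Cache-Tags article linked above. A rough sketch of that call from JavaScript, with the zone ID and credentials as placeholders:</p><pre><code class="language-javascript">// Purge every cached asset tagged 'apple' (zone ID and credentials are placeholders).
async function purgeByTag(tag) {
  const response = await fetch('https://api.cloudflare.com/client/v4/zones/YOUR_ZONE_ID/purge_cache', {
    method: 'POST',
    headers: {
      'X-Auth-Email': 'you@example.com',
      'X-Auth-Key': 'YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ tags: [tag] })
  })
  return response.json()
}
</code></pre>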
<p>These are just simple examples to get started, and we’ll be publishing many more in the coming weeks. We’re excited to see what everyone builds with the Cache API!</p><h2 id="how-to-get-access">How to get access</h2><p>We are super excited for you to start playing with the Cache API. <strong>You can find <a href="https://developers.cloudflare.com/workers/reference/cache-api/">documentation here</a>, and feel free to start using the APIs.</strong></p><p>We want to hear about all the cool ways you are using this. We also want to hear if you are having trouble or running into any issues.</p><p>Please feel free to contact us at <a href="mailto:cacheapibeta@cloudflare.com">cacheapibeta@cloudflare.com</a></p>]]></content:encoded></item></channel></rss>