You write:

import http from 'node:http';
http.createServer((req, res) => res.end('world')).listen(3000);

You curl:

curl -i http://localhost:3000/hello

And it “just works.” But between your res.end('world') and curl’s HTTP/1.1 200 OK, a lot happens. This post tells the whole story, with runnable examples and Mermaid diagrams placed exactly where readers usually get confused.

0) Prologue | The Two Sockets You Didn’t Know You Owned

When you call listen(3000), the kernel gives your process one listening socket (one FD bound to port 3000). When a client connects, the kernel creates one accepted socket per connection (a new FD each). HTTP/1.1 keep-alive then lets many requests flow sequentially on the same FD.

sequenceDiagram
  autonumber
  participant You as Your Node Process
  participant K as Kernel
  participant C as Client
  You->>K: socket()+bind(3000)+listen()
  Note over You,K: Listening FD (e.g., #3) bound to :3000
  C->>K: TCP SYN to :3000
  K->>You: listening FD #3 becomes readable
  You->>K: accept()
  Note over You,K: New FD (e.g., #14) for this connection
  loop keep-alive
    C->>You: HTTP request over FD #14
    You->>C: HTTP response over FD #14
  end
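
You can watch the second kind of socket from JS: the server emits a 'connection' event once per accepted socket, not per request. A minimal sketch (the connections counter is just for the demo):

import http from 'node:http';

let connections = 0;
const server = http.createServer((req, res) => res.end('world\n'));
server.on('connection', socket => {
  // One accepted socket per TCP connection, however many requests follow
  console.log(`connection #${++connections} from port ${socket.remotePort}`);
});
server.listen(3000);

With keep-alive, several requests in one curl invocation should log a single connection.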

1) Handshakes | TCP (and TLS) Before Node Exists

TCP does a 3-way handshake (SYN → SYN-ACK → ACK). For HTTPS, TLS negotiates keys and certificates afterwards. Node isn’t involved yet: the kernel establishes the connection and allocates per-socket state.

sequenceDiagram
  autonumber
  participant Client
  participant ServerKernel as Server Kernel (TCP/IP)
  Client->>ServerKernel: SYN
  ServerKernel->>Client: SYN-ACK
  Client->>ServerKernel: ACK (ESTABLISHED)
  alt HTTPS
    Client->>ServerKernel: TLS ClientHello
    ServerKernel->>Client: TLS ServerHello + cert
    Note over Client,ServerKernel: Keys derived → HTTP bytes are now encrypted records
  end

Why this matters: by the time Node sees anything, the socket exists in the kernel with recv/send buffers and an associated FD your process will use.
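
You can watch the handshake happen before Node hears anything (interface name varies: lo on Linux, lo0 on macOS):

sudo tcpdump -i lo 'tcp port 3000'
# Flags [S]  → SYN
# Flags [S.] → SYN-ACK
# Flags [.]  → ACK (ESTABLISHED before accept() ever sees it)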

2) FDs and Buffers | The Real Interfaces You Program Against

What is an FD?

A File Descriptor is an integer handle in your process pointing to a kernel object (here: a TCP socket).

  • Listening FD: 1 per port (listen(3000))
  • Accepted FDs: 1 per connection (not per request)

Check them:

lsof -i :3000
# node ... 3u IPv4 ... TCP *:3000 (LISTEN)
# node ... 14u IPv4 ... TCP 127.0.0.1:3000->127.0.0.1:52144 (ESTABLISHED)

Where do your bytes live?

Each socket (FD) has two kernel buffers:

  • Receive buffer → bytes in from NIC land here
  • Send buffer → your writes wait here until the peer ACKs

Node copies from/to these buffers when it read()s/write()s the FD. In JS, you see Buffer chunks (user-space copies of what was in the kernel receive buffer).

flowchart LR
  NIC[Network Interface Card] --> RCVQ[Kernel Receive Buffer]
  RCVQ -->|read copies| BUF[Node Buffer JS]
  BUF --> HANDLER[Your JS Handler]
  HANDLER -->|write copies| SNDQ[Kernel Send Buffer]
  SNDQ --> NIC
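
A quick way to see those user-space copies (raw net here so HTTP parsing doesn’t get in the way; port 3001 is arbitrary):

import net from 'node:net';

// Each 'data' chunk is a user-space copy of whatever sat in the kernel
// receive buffer at read time; chunk boundaries are arbitrary.
net.createServer(socket => {
  socket.on('data', chunk => {
    console.log(Buffer.isBuffer(chunk), chunk.length); // true, <some length>
  });
}).listen(3001);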

Backpressure 101: if the send buffer fills, res.write() returns false, and you wait for ‘drain’ before writing more. If the receive buffer fills (you’re not reading), the TCP window shrinks and the client naturally slows down.

3) libuv | One Thread, Thousands of Sockets

Node doesn’t poll sockets in JS. libuv (a C library) asks the OS: “Tell me when FD #14 is readable (recv buf has bytes) or writable (send buf has room).” On Linux that’s epoll, on macOS kqueue, on Windows IOCP.

flowchart LR
  subgraph OS [OS kernel]
    E["epoll / kqueue / IOCP"]
  end

  subgraph UV [libuv Event Loop]
    Loop["Event Loop Core"]
    W1["Watch FD #3 - LISTEN"]
    W2["Watch FD #14 - ESTABLISHED"]
  end

  subgraph APP [Node.js Layer]
    JS["JS callbacks scheduled"]
  end

  E -->|"FD readable/writable"| Loop
  W1 -->|"accept() ready"| Loop
  W2 -->|"read()/write() ready"| Loop
  Loop --> JS

Effect: One Node thread multiplexes many sockets because the OS wakes it only when an FD changes state.
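
You can feel the single-thread model by pinning that thread; a sketch with a made-up /block route:

import http from 'node:http';

http.createServer((req, res) => {
  if (req.url === '/block') {
    const end = Date.now() + 5_000;
    while (Date.now() < end) {} // busy-wait: the one thread is stuck here
  }
  res.end('ok\n');
}).listen(3000);

While /block spins, every other connection waits: the loop cannot get back to epoll/kqueue/IOCP until your callback returns.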

4) Parsing HTTP | From Bytes to req/res

When libuv reports the FD readable, Node reads the bytes and feeds them to llhttp (the native HTTP parser). Once the start-line and headers are complete, Node constructs:

  • IncomingMessage (req, a Readable for the body)
  • ServerResponse (res, a Writable for your reply)

flowchart LR
  BYTES["Bytes from FD #14"] --> LLHTTP[llhttp parser]
  LLHTTP --> HDRS[Headers Map]
  LLHTTP --> REQ["IncomingMessage (Readable)"]
  LLHTTP --> RES["ServerResponse (Writable)"]

Why streams? Bodies can be arbitrarily large. Streams map to the transport and let backpressure flow naturally.
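
You can observe that boundary yourself: the request callback fires as soon as the headers are parsed, while the body streams in afterwards.

import http from 'node:http';

http.createServer((req, res) => {
  // Fires once llhttp has the start-line + headers...
  console.log(req.method, req.url, req.headers['content-type']);
  // ...while the body may still be in flight; req streams it
  req.on('data', chunk => console.log('body chunk:', chunk.length));
  req.on('end', () => res.end('got it\n'));
}).listen(3000);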

5) Running Example | Minimal GET & POST with Correct Pressure

import http from 'node:http';

const server = http.createServer(async (req, res) => {
  if (req.method === 'GET' && req.url === '/hello') {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    return res.end('world\n');
  }

  if (req.method === 'POST' && req.url === '/echo') {
    // Stream the body; don't assume size
    let body = '';
    for await (const chunk of req) body += chunk;
    res.writeHead(200, { 'Content-Type': 'application/json' });
    return res.end(JSON.stringify({ youSent: body }));
  }

  res.writeHead(404).end();
});

server.listen(3000);

Try it:

curl -i http://localhost:3000/hello
curl -i -X POST http://localhost:3000/echo -H 'content-type: application/json' -d '{"x":1}'

6) Backpressure | The Moment Everyone Trips

When you stream out and the client is slow, res.write() eventually returns false. You must pause until ‘drain’.

function onceDrain(w) { return new Promise(r => w.once('drain', r)); }

async function streamLines(res, lines) {
  for (const line of lines) {
    if (!res.write(line + '\n')) await onceDrain(res);
  }
  res.end();
}

sequenceDiagram
  autonumber
  participant JS as Your JS
  participant RES as res (Writable)
  participant K as Kernel Send Buffer

  JS->>RES: write("chunk 1")
  RES->>K: copy to send buffer

  JS->>RES: write("chunk 2") returns false
  Note over JS,RES: Buffer full → backpressure

  RES-->>JS: 'drain' event later

  JS->>RES: write("chunk 3")
  RES->>K: copy to send buffer
  JS->>RES: res.end()

Rule: Treat res.write()’s boolean as a contract. Ignore it → risk OOM under slow clients.
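
In practice you rarely hand-roll that loop: stream.pipeline honors the same contract for you. A sketch (./big.bin is a placeholder file):

import http from 'node:http';
import fs from 'node:fs';
import { pipeline } from 'node:stream/promises';

http.createServer(async (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
  try {
    // pipeline pauses the file stream whenever res signals backpressure
    await pipeline(fs.createReadStream('./big.bin'), res);
  } catch {
    res.destroy(); // client went away or the read failed
  }
}).listen(3000);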

7) Timeouts | Guardrails Against Abusive or Broken Clients

Node gives you independent timeouts (set them deliberately):

import http from 'node:http';

const handler = (req, res) => res.end('ok\n'); // any request handler works here

const server = http.createServer({
  requestTimeout: 120_000,   // full headers+body must arrive within 120s
  headersTimeout: 60_000,    // headers alone within 60s (slowloris protection)
  keepAliveTimeout: 10_000,  // idle keep-alive sockets closed after 10s
}, handler).listen(3000);

How they play out during a slowloris header trick:

sequenceDiagram
  autonumber
  participant Client
  participant Node as Node HTTP
  Client->>Node: Opens TCP, starts sending headers very slowly
  Note over Node: headersTimeout ticking (e.g., 60s)
  Client-->>Node: Still not done...
  Node->>Client: 408 Request Timeout, close
  Note over Node: requestTimeout applies if headers ok but body stalls

Cheat sheet:

  • headersTimeout → header phase only
  • requestTimeout → full request (headers + body)
  • keepAliveTimeout → idle time between requests on same FD
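
These are also writable properties on the server, so you can adjust them after creation:

import http from 'node:http';

const handler = (req, res) => res.end('ok\n');
const server = http.createServer(handler);

// Same knobs as the constructor options above
server.headersTimeout = 60_000;
server.requestTimeout = 120_000;
server.keepAliveTimeout = 10_000;

server.listen(3000);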

8) Keep-Alive | Many Requests, One FD

HTTP/1.1 defaults to keep-alive. Multiple requests reuse the same connection sequentially:

sequenceDiagram
  autonumber
  participant C as Client
  participant FD as "Socket FD #14"
  
  C->>FD: "GET /a"
  FD->>C: "200 /a"
  
  C->>FD: "GET /b"
  FD->>C: "200 /b"
  
  C->>FD: "POST /c"
  FD->>C: "201 /c"
  
  Note over C,FD: Same FD handles multiple requests  
  Note over C,FD: Connection closed on inactivity

Implication: You do not get a new FD per request; you get one per connection. Plan capacity (and FD limits) accordingly.
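
One way to watch reuse from the client side is to give curl several URLs in one invocation and run it verbosely:

curl -v http://localhost:3000/a http://localhost:3000/b
# Before the second request, curl reports re-using the existing connection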

9) Limits You Will Hit in Production

  • FD cap (ulimit -n): defaults like 1024 are common. Raise it to tens of thousands for busy servers (see the commands after this list); otherwise you’ll see EMFILE (too many open files).
  • Kernel buffer sizes: defaults are okay for most APIs; tune only with evidence.
  • Nagle/Delayed ACK: Node HTTP sets noDelay: true by default (good latency for small writes). If you emit tons of tiny chunks, batch in app code.
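
Checking and raising the FD cap for the current shell (the hard limit bounds how far you can go):

ulimit -n        # show the current soft limit
ulimit -n 65535  # raise it for this shell, up to the hard limit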

10) End-to-End Recap

flowchart TB
  A[NIC] --> B["Kernel TCP (recv/send bufs)"]
  B --> C["Listening FD :3000"]
  C -->|"accept()"| D["Accepted FD per connection"]
  D --> E["libuv readiness (epoll/kqueue/IOCP)"]
  E --> F["llhttp parses headers/body"]
  F --> G["req (Readable)"]
  F --> H["res (Writable)"]
  G --> I["Your JS logic"]
  I --> H
  H --> B
  B --> A

Mental model to keep: You are not pushing bytes “to the network.” You are cooperating with buffers (kernel and JS) while flow control (TCP + Node backpressure) keeps the system stable.