Summary:  
If you're using snappy-java, a popular Java library for fast data compression and decompression, you need to read this. A critical bug in versions before 1.1.10.1 lets attackers crash your Java app by sending a maliciously crafted compressed file. The root cause? An unchecked chunk-length field that can trigger runtime exceptions, or even make your program run out of memory and die.

Let’s break down how CVE-2023-34455 works, look at the buggy code, and show you how to stay safe.

What’s Snappy-java, and Why Does This Matter?

Snappy-java brings Google's super-fast Snappy compression to Java projects. Widely used tools such as Apache Kafka and Apache Spark pull it in for fast data storage and networking.

But before version 1.1.10.1, snappy-java trusted a value from the input file (the “chunk length”) without checking if it made sense. This is a classic trust-the-user mistake in handling binary formats, and it can be disastrous.

The Vulnerability: Unchecked Chunk Size

The problematic code sits inside the hasNextChunk() method of SnappyInputStream.java. Here’s a simplified snippet:

// Read 4 bytes to get the chunk length (attacker-controlled)
int chunkSize = readChunkSize(inputStream);

// If 'compressed' is not yet initialized
if (compressed == null) {
    compressed = new byte[chunkSize];  // Uh-oh! Allocated straight from untrusted input
}

What’s wrong here?

- No sanity check on chunkSize. The input controls it, so it can be arbitrarily large or even negative.
- If an attacker crafts an input file with a chunk size of 0xFFFFFFFF (which Java reads as the signed int -1), the JVM throws a NegativeArraySizeException (see the quick sketch below).
- Worse, if the file sets the chunk size to a *huge* positive number like 0x7FFFFFFF (2,147,483,647), that line tries to allocate about 2 GB of memory in one shot. This usually ends in a fatal OutOfMemoryError, crashing your process.
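
Why does 0xFFFFFFFF come out as -1? Java ints are signed 32-bit values, so once the top bit is set the same bits mean a negative number. A quick jshell-style check:

// The same 32 bits, read as a signed Java int vs. as an unsigned value
int asSignedInt = (int) 0xFFFFFFFFL;   // -1 (top bit set => negative in two's complement)
long asUnsigned = 0xFFFFFFFFL;         // 4294967295
int hugePositive = 0x7FFFFFFF;         // 2147483647 == Integer.MAX_VALUE
System.out.println(asSignedInt + ", " + asUnsigned + ", " + hugePositive);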

Exploitation Demo

To exploit this issue, you’d just need to craft a Snappy-compressed file with a bogus chunk size in its header, then hand it to any Java app using a vulnerable snappy-java.
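
The snippets below simulate the vulnerable read path without pulling in the library itself. They call a small readChunkSize() helper; this is a hypothetical stand-in for snappy-java's internal chunk-length read (the exact byte order doesn't matter for the attack, any four bytes that decode to a negative or huge int will do). A jshell-friendly sketch, with the imports the demos need:

import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper: reads the 4-byte chunk length as a signed int,
// standing in for snappy-java's internal read logic.
int readChunkSize(InputStream in) throws IOException {
    return new DataInputStream(in).readInt();
}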

Example Snippet Showing the Problem

// Malicious input: chunk size set to 0xFFFFFFFF (-1); the huge-positive case follows below
byte[] chunkHeader = {-1, -1, -1, -1}; // 0xFFFFFFFF = -1
InputStream fakeStream = new ByteArrayInputStream(chunkHeader);

try {
    int chunkSize = readChunkSize(fakeStream); // Reads -1
    byte[] compressed = new byte[chunkSize]; // Throws NegativeArraySizeException!
} catch (NegativeArraySizeException ex) {
    System.out.println("Your app just crashed! " + ex);
}

Or with 0x7FFFFFFF:

byte[] hugeHeader = {127, -1, -1, -1}; // 0x7FFFFFFF = 2,147,483,647
InputStream hugeStream = new ByteArrayInputStream(hugeHeader);

try {
    int chunkSize = readChunkSize(hugeStream); // Reads 2,147,483,647
    byte[] compressed = new byte[chunkSize]; // Tries to allocate ~2 GB!
} catch (OutOfMemoryError ex) {
    System.out.println("Out of memory! " + ex);
}

References

- Xerial snappy-java Issue #442 (Upstream Report)
- Security Advisory and Fix (GitHub PR #443)
- snappy-java 1.1.10.1 Release Notes
- CVE-2023-34455 at NIST NVD

The Fix

The maintainers fixed this issue in snappy-java 1.1.10.1 by adding a sanity check on the chunk size:

if (chunkSize < 0 || chunkSize > MAX_CHUNK_SIZE) {
    throw new IOException("Corrupted input stream. Invalid chunk size: " + chunkSize);
}
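
With that check in place, a bogus length is rejected up front as an ordinary, catchable IOException instead of surfacing as a NegativeArraySizeException or a runaway 2 GB allocation.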

- Upgrade immediately to snappy-java 1.1.10.1 or later.
- If you maintain a Java app that lets users upload or process compressed files, check your dependencies and redeploy with the fixed version.

In Plain English

Using old snappy-java? Attackers can send you a malicious file that makes your Java program crash — no shell access needed, no code execution required. Just a bad file is enough. Web services, batch processors, anything that touches compressed data is at risk.
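
For a concrete picture, here's a minimal sketch of the kind of code path that's exposed (the handler name and buffer size are made up for illustration). With a vulnerable snappy-java on the classpath and attacker-controlled bytes coming in, the crafted chunk length blows up inside the read loop:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import org.xerial.snappy.SnappyInputStream;

// Hypothetical upload handler: decompresses whatever the client sent.
byte[] decompressUpload(InputStream untrustedUpload) throws IOException {
    try (SnappyInputStream snappy = new SnappyInputStream(untrustedUpload);
         ByteArrayOutputStream out = new ByteArrayOutputStream()) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = snappy.read(buf)) != -1) { // a crafted chunk length fails here on vulnerable versions
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}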

Timeline

Published on: 06/15/2023 18:15:09 UTC
Last modified on: 08/18/2023 14:15:23 UTC