CVE-2024-27035 - Linux Kernel f2fs "Compress" Vulnerability Explained With Exploit Walkthrough
CVE-2024-27035 is a vulnerability in the F2FS file system's compression code in the Linux kernel. The issue was found in how the "checkpoint" process handles compressed data blocks: if a compressed data block is not persisted to storage together with the metadata during a checkpoint, then after a sudden power-off and the recovery that follows (SPOR, f2fs's sudden power-off recovery) the affected data may come back corrupted.
This post breaks down the issue, walks through code samples, and shows how the bug can be triggered in practice, keeping things simple enough for anyone to follow.
Where Does The Bug Live?
F2FS (Flash-Friendly File System) is favored for SSDs and other flash storage. It supports transparent file compression to save space. When you enable compression, F2FS groups consecutive pages of a file's data into a cluster, compresses the cluster, and writes it to disk.
The key part is how F2FS guarantees that all compressed data is written out and safe after a "checkpoint" — that's a sync point ensuring all recent operations survive a crash or sudden power loss.
The bug? F2FS sometimes didn't persist compressed data blocks to disk alongside the filesystem's metadata. So, if your machine crashed or the power cut out during just the wrong window, you could lose the affected data blocks or find them corrupted afterwards.
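Before looking at the fix, it helps to see how a file ends up with compressed clusters in the first place. Below is a minimal userspace sketch (mine, not from the kernel sources) that marks a file for f2fs compression, the same thing chattr +c does, then writes compressible data. It assumes an f2fs mount with compression enabled (e.g. -o compress_algorithm=lz4), and it sets the flag while the file is still empty, which f2fs requires.

// compress_file.c -- mark a file compressed on f2fs, then write data.
// Illustrative sketch; assumes an f2fs mount with compression enabled.
#include <fcntl.h>
#include <linux/fs.h>     /* FS_IOC_GETFLAGS, FS_IOC_SETFLAGS, FS_COMPR_FL */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <new-file-on-f2fs>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Set the per-file compression flag (what `chattr +c` does).
     * f2fs only allows toggling this while the file is empty. */
    int flags = 0;
    if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) { perror("FS_IOC_GETFLAGS"); return 1; }
    flags |= FS_COMPR_FL;
    if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) { perror("FS_IOC_SETFLAGS"); return 1; }

    /* Highly compressible data, so f2fs actually forms compressed clusters. */
    char buf[4096];
    memset(buf, 'A', sizeof(buf));
    for (int i = 0; i < 256; i++) {   /* 1 MiB total */
        if (write(fd, buf, sizeof(buf)) != sizeof(buf)) { perror("write"); return 1; }
    }

    close(fd);
    return 0;
}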
The Patch: What Changed?
Here's a simplified sketch of the core idea behind the upstream fix (condensed pseudocode in kernel style, not the literal patch):
// Before: compressed pages might be left dirty after a checkpoint
// After: explicitly guarantee persisting compressed pages by checkpoint
if (IS_COMPRESSED_CLUSTER(cluster)) {
    /* flush all compressed blocks */
    list_for_each_entry(page, &cluster->compressed_list, list) {
        if (PageDirty(page)) {
            lock_page(page);
            block_write_full_page(page, get_block, wbc);
            unlock_page(page);
        }
    }
}
With this patch, the kernel ensures that any dirty (modified and not yet written) compressed blocks are flushed — that is, written out — during a checkpoint.
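To make the crash window concrete, here is a hedged userspace sketch of the risky sequence: dirty part of a compressed file, then trigger a checkpoint via syncfs(2) (f2fs writes a checkpoint from its sync_fs handler). The path is illustrative, and the file is assumed to already have compression enabled as shown earlier.

// dirty_then_checkpoint.c -- exercise the window CVE-2024-27035 closes.
// Sketch only: overwrites part of a compressed file, then forces a
// checkpoint with syncfs(2).
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Illustrative path; any compressed file on the f2fs mount works. */
    int fd = open("/mnt/f2fs/bigfile", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Overwrite the middle of a cluster so it must be re-compressed. */
    char buf[1024];
    memset(buf, 0x5a, sizeof(buf));
    if (pwrite(fd, buf, sizeof(buf), 64 * 1024) != sizeof(buf)) {
        perror("pwrite");
        return 1;
    }

    /* Trigger a checkpoint. On a patched kernel this also flushes the
     * dirty compressed pages; on a vulnerable one it may not. */
    if (syncfs(fd) < 0) { perror("syncfs"); return 1; }

    /* <-- A power cut here is the window: metadata is on disk, but the
     *     re-compressed data blocks might not be. */
    close(fd);
    return 0;
}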
How Could Attackers Exploit This?
While this bug mainly causes data loss/corruption rather than remote code execution, its impact can be severe, especially for servers, databases, or any critical workload running on F2FS with compression enabled.
1. Create a Test Environment
# assumes a loop device is already attached (losetup) and /mnt/f2fs exists
mkfs.f2fs -O extra_attr,compression /dev/loop0
mount -o compress_algorithm=lz4 /dev/loop0 /mnt/f2fs
2. Write a Test File
dd if=/dev/urandom of=/mnt/f2fs/bigfile bs=1M count=10
sync
3. Edit The File Repeatedly To Trigger Re-Compression, Without Syncing Between Writes
for i in {1..10}; do
  dd if=/dev/urandom of=/mnt/f2fs/bigfile bs=1K seek=$((RANDOM % 10000)) count=1 conv=notrunc
done
cp /mnt/f2fs/bigfile /root/bigfile.ref   # reference copy, kept on a different filesystem
4. Keep The Metadata Synced, Leave Data Unflushed
The window is this: during a checkpoint, the metadata (directory entries, inodes, node blocks) is persisted, but on an unpatched kernel the dirty compressed data pages can be skipped. Now force a checkpoint (on f2fs, a plain sync issues one):
sync
On a patched kernel this also flushes the compressed pages; on a vulnerable kernel, they may linger unwritten in memory.
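If you want to force a checkpoint more surgically than a global sync, f2fs also exposes a checkpoint ioctl. A minimal sketch, assuming F2FS_IOC_WRITE_CHECKPOINT is _IO(0xf5, 7) as in the kernel's include/uapi/linux/f2fs.h (the call requires root):

// force_checkpoint.c -- ask f2fs for a checkpoint via its ioctl.
// Assumption: F2FS_IOC_WRITE_CHECKPOINT = _IO(0xf5, 7), matching the
// kernel's f2fs headers; requires CAP_SYS_ADMIN.
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#ifndef F2FS_IOC_WRITE_CHECKPOINT
#define F2FS_IOC_WRITE_CHECKPOINT _IO(0xf5, 7)
#endif

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/mnt/f2fs"; /* any file/dir on the mount */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, F2FS_IOC_WRITE_CHECKPOINT) < 0) {
        perror("F2FS_IOC_WRITE_CHECKPOINT");
        return 1;
    }
    puts("checkpoint written");
    close(fd);
    return 0;
}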
5. Simulate Sudden Power-Off
At this moment, hard-reset the machine (kill the VM or pull the plug).
6. Check The File After Reboot
cmp /mnt/f2fs/bigfile /root/bigfile.ref   # reference copy from step 3
# or simply read the file and watch for I/O errors
You'll likely see mismatches or I/O errors: the physical blocks for the compressed region were never written, yet according to the metadata the file "should" be fine.
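If keeping a full reference copy around is inconvenient, a checksum recorded before the hard reset works just as well (md5sum is fine; below is a dependency-free sketch using FNV-1a). Run it before the crash, save the output off the f2fs volume, and re-run after reboot:

// fhash.c -- 64-bit FNV-1a hash of a file, for before/after comparison.
#include <stdint.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    unsigned char buf[65536];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
        for (size_t i = 0; i < n; i++) {
            h ^= buf[i];
            h *= 0x100000001b3ULL;        /* FNV prime */
        }
    fclose(f);
    printf("%016llx\n", (unsigned long long)h);
    return 0;
}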
Who's At Risk?
- Any Linux system using f2fs compression (common with recent Android handsets and modern Linux laptops or embedded systems).
References
- Upstream Linux Patch
- CVE Record: https://nvd.nist.gov/vuln/detail/CVE-2024-27035
- f2fs Development Discussions
How To Protect Yourself
- Update your kernel! The fix for CVE-2024-27035 is present in Linux 6.8 and has been backported to the common stable trees. Don't use F2FS compression on critical filesystems unless you're running a patched kernel.
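As a coarse first check, you can compare the running kernel against the 6.8 cutoff mentioned above; a sketch using uname(2). Keep in mind that distro and stable kernels backport fixes, so an older version string alone doesn't prove a host is vulnerable:

// kver_check.c -- coarse kernel-version check for the 6.8 cutoff.
// Caveat: stable/distro kernels backport fixes, so an older version
// string does not prove a host is vulnerable.
#include <stdio.h>
#include <sys/utsname.h>

int main(void)
{
    struct utsname u;
    if (uname(&u) < 0) { perror("uname"); return 1; }

    int major = 0, minor = 0;
    if (sscanf(u.release, "%d.%d", &major, &minor) != 2) {
        fprintf(stderr, "unparsable release: %s\n", u.release);
        return 1;
    }

    int patched = (major > 6) || (major == 6 && minor >= 8);
    printf("kernel %s: %s\n", u.release,
           patched ? "at/after 6.8 (mainline fix present)"
                   : "before 6.8 (check your distro's backports)");
    return !patched;
}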
Simple Summary
CVE-2024-27035 is a data loss and corruption vulnerability for F2FS users who have enabled compression. If you experience a crash or sudden power loss at the wrong time, files can be partially or wholly corrupted, even though filesystem metadata claims the data is there. The fix ensures the kernel flushes all dirty compressed pages during a checkpoint operation.
Timeline
Published on: 05/01/2024 13:15:49 UTC
Last modified on: 05/04/2025 09:02:46 UTC