*Last updated: June 2024 – Exclusive breakdown by AIsec Writer*

What is CVE-2024-56585?

CVE-2024-56585 is a vulnerability fixed in the Linux kernel for systems using the LoongArch processor architecture. The bug caused a kernel panic due to a "sleeping in atomic context" error on kernels built with real-time preemption (PREEMPT_RT).

It primarily affects systems with LoongArch CPUs and the PREEMPT_RT kernel, and was triggered during TLB (Translation Lookaside Buffer) handler setup, specifically in the interaction of memory allocation and real-time (RT) spinlocks.

In Plain English: Why Did This Happen?

- Background: Modern kernels sometimes need to allocate memory at boot (even early on, with CPUs and TLBs not fully initialized).
- The Bug: A prior fix (commit bab1c299f3945ffe79) switched the memory-allocation flag to GFP_ATOMIC so the allocator would not sleep during this early setup.
- The Problem: Even with this, when running a PREEMPT_RT kernel (used for real-time workloads), ordinary spinlocks turn into RT spinlocks—which can sleep! Taking one of these locks (as the memory allocator does internally) while in atomic context means a potential sleep where sleeping is forbidden: boom, kernel bug.

Here's a taste of the bug from the logs:

BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48
in_atomic(): 1, irqs_disabled(): 1, non_block: , pid: , name: swapper/1
preempt_count: 1, expected: 
...
Call Trace:
[<9000000005924778>] show_stack+0x38/0x180
[<90000000071519c4>] dump_stack_lvl+0x94/0xe4
[<900000000599b880>] __might_resched+0x1a/0x260
[<90000000071675cc>] rt_spin_lock+0x4c/0x140

For LoongArch, during boot, the TLB code was calling alloc_pages_node() with GFP_ATOMIC.

- But: PREEMPT_RT kernels replace basic spinlocks with RT (real-time) spinlocks (rt_spin_lock). These can sleep if they can't acquire the lock right away.
- Code running in atomic context (preemption or interrupts disabled) must never sleep. The allocator taking an RT spinlock there triggers "sleeping function called from invalid context" — a fatal bug.

The Patch

The Fix:
Instead of just switching to GFP_ATOMIC, the Linux kernel disables the NUMA optimization entirely when built with PREEMPT_RT for LoongArch. This means it no longer tries to be clever about what NUMA node to use for TLB memory during setup on real-time kernels: it just gets the memory globally, eliminating the chance of RT spinlocks and sleeping.

Patch Snippet

// Before:
#ifdef CONFIG_NUMA
    pages = alloc_pages_node(cpu_to_node(cpu), GFP_ATOMIC, order);
#else
    pages = alloc_pages(GFP_ATOMIC, order);
#endif

// After (simplified):
#if defined(CONFIG_NUMA) && !defined(CONFIG_PREEMPT_RT)
    pages = alloc_pages_node(cpu_to_node(cpu), GFP_ATOMIC, order);
#else
    pages = alloc_pages(GFP_ATOMIC, order);
#endif

*This forces non-NUMA allocation under PREEMPT_RT, sidestepping the RT lock issue.*

Full patch:
- LoongArch: Fix sleeping in atomic context for PREEMPT_RT

Can CVE-2024-56585 Be Exploited?

Short Answer: Not really, at least not in a "remote exploitation" sense.

- Reality: The main risk here is system instability, denial of service, or failing to boot on affected architectures with PREEMPT_RT enabled.
- Attackers: Would need *local access* and a way to push the system through the affected CPU-initialization code path — the crash appears on a boot-time kernel thread (swapper/1), not in response to remote input.


If you don’t use LoongArch or PREEMPT_RT, you’re not vulnerable. Mainstream distros rarely use LoongArch or enable PREEMPT_RT by default.

References

- Upstream Linux Patch Commit
- LWN.net Kernel Coverage
- Linux PREEMPT_RT Wiki
- LoongArch Architecture Details (Wikipedia)

For 99.9% of Linux users, this doesn't matter. But if:

- You run a LoongArch-based Linux machine with PREEMPT_RT (e.g., for industrial or ultra-low-latency workloads)
- You use a kernel between v6.12-rc7 and v6.12 mainline without this commit

Then you MUST update your kernel to avoid panics during boot or CPU initialization. There's no privilege escalation here, just the risk of crashes and downtime.

Not sure? If you don’t know the words LoongArch or PREEMPT_RT, you’re almost certainly safe.


Timeline

Published on: 12/27/2024 15:15:17 UTC
Last modified on: 05/04/2025 09:59:03 UTC