kube-audit-rest is a popular tool for logging mutation and creation requests in Kubernetes environments, helping DevOps teams track changes for compliance and troubleshooting. But a critical vulnerability, now tracked as CVE-2025-24884, was recently discovered. Here’s a deep dive into what happened, how the bug works, and what you need to do to stay protected.

What Happened?

When deployed with the "full-elastic-stack" example configuration, kube-audit-rest was logging the previous values of Kubernetes secrets into its audit messages. That means that if someone updated a secret, the log would *include the old values*, potentially exposing sensitive data such as passwords or private keys to anyone with access to the logs.

Where’s the Risk?

- kube-audit-rest acts as a REST endpoint that captures Kubernetes AdmissionReview events for object creation and mutation (a sketch of such an event follows this list).
- It’s often paired with Vector to ship those audit logs to external storage (such as Elasticsearch, S3, or Kafka).
- If your audit logs end up in Elasticsearch, Kibana, or another downstream tool, anyone with read access to those logs may be able to read old secret values.
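
For UPDATE operations, the API server sends both the new object and the previous one in the same AdmissionReview, which is why prior secret values can end up in the audit stream at all. Here is a heavily abridged sketch of such an event; the field names follow the admission.k8s.io/v1 AdmissionReview API, and the secret values are made up:

```python
import json

# Abridged AdmissionReview for an UPDATE of a Secret (values are made up).
# The API server includes both the new object and the previous one.
admission_review = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
        "operation": "UPDATE",
        "object": {                  # the new Secret being written
            "kind": "Secret",
            "metadata": {"name": "db-secret"},
            "data": {"password": "bmV3X3NlY3JldA=="},      # "new_secret"
        },
        "oldObject": {               # the previous Secret, this is what leaked
            "kind": "Secret",
            "metadata": {"name": "db-secret"},
            "data": {"password": "bXlfcGFzdF9zZWNyZXQ="},  # "my_past_secret"
        },
    },
}

# Logging this event verbatim ships both the new and the old secret values.
print(json.dumps(admission_review, indent=2))
```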

Vulnerable Configuration

If you followed the kube-audit-rest "full-elastic-stack" example, you might have ended up with this:

```yaml
# Example from vector.yaml
transforms:
  k8s_secrets:
    type: filter
    condition: .object.kind == "Secret"
    # THIS SENT FULL 'object' DATA, WITH OLD SECRET VALUES!
```

Every time a Kubernetes secret was created or mutated, the full object, including both the current and the previous values, was sent to the logs. The *problem*: Secret data is only base64-encoded, not encrypted, and decoding it is trivial.

Anyone who can query the audit index can pull those events back out, for example:

```bash
GET /kube-audit-logs/_search
{
  "query": {
    "match": { "object.kind": "Secret" }
  }
}
```

If the logs are accessible, extracting old secret values takes only a few lines of Python:

```python
import base64
import json

# Example log message pulled from Elasticsearch
log_msg = '''
{
  "object": {
    "kind": "Secret",
    "metadata": { "name": "db-secret" },
    "data": {
      "password": "bXlfcGFzdF9zZWNyZXQ="
    }
  }
}
'''

data = json.loads(log_msg)
b64_pw = data["object"]["data"]["password"]
print(base64.b64decode(b64_pw).decode())  # Prints "my_past_secret"
```

This could easily be automated for bulk credential harvesting against overlooked Elasticsearch indices.
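
A minimal sketch of what that automation might look like, using plain HTTP against the audit index. The endpoint URL and index name are assumptions carried over from the example above, and the same query is what a defender would run to find out what has already leaked:

```python
import base64

import requests

# Assumptions: Elasticsearch endpoint and index name from the example above.
ES_URL = "http://elasticsearch:9200"
INDEX = "kube-audit-logs"

# Pull every logged Secret event.
resp = requests.post(
    f"{ES_URL}/{INDEX}/_search",
    json={"size": 1000, "query": {"match": {"object.kind": "Secret"}}},
    timeout=30,
)
resp.raise_for_status()

for hit in resp.json()["hits"]["hits"]:
    obj = hit["_source"].get("object", {})
    name = obj.get("metadata", {}).get("name", "<unknown>")
    for key, value in obj.get("data", {}).items():
        # Secret values are only base64-encoded, so decoding is trivial.
        print(name, key, base64.b64decode(value).decode(errors="replace"))
```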

Root Cause

The root issue was *overly permissive logging*: the full Kubernetes Secret object, including its previous values, was forwarded to external systems instead of redacting the data field, or at least the old-value payload.

This is a classic case of sensitive data leaking through a secondary channel: audit logs, meant for change tracking, became an unintentional source of secrets.
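
Whatever shipper you use, the safer pattern is to drop or mask Secret payloads before events leave the cluster. Here is a minimal sketch of that idea in Python; in a real pipeline the equivalent logic would live in your Vector transform (or whichever log shipper you run), and the event shape follows the examples above:

```python
REDACTED = "***REDACTED***"

def redact_secret_data(event: dict) -> dict:
    """Mask Secret payloads in an audit event before it is shipped anywhere."""
    for field in ("object", "oldObject"):
        obj = event.get(field)
        if isinstance(obj, dict) and obj.get("kind") == "Secret":
            data = obj.get("data")
            if isinstance(data, dict):
                # Keep the key names for auditability, drop the values.
                obj["data"] = {key: REDACTED for key in data}
    return event

# Example: the old password is masked before the event is logged.
event = {"oldObject": {"kind": "Secret", "data": {"password": "bXlfcGFzdF9zZWNyZXQ="}}}
print(redact_secret_data(event))
```

Keeping the key names while masking the values preserves the audit trail (you can still see which keys changed) without exposing the secret material itself.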

Fix: v1.0.16 and Beyond

According to the official changelog, version 1.0.16 redacts secret values from the audit logs: when a Secret is logged, its sensitive data is now stripped, greatly reducing the risk. To remediate:

1. Upgrade kube-audit-rest to v1.0.16 or later:

   ```yaml
   image: ghcr.io/flant/kube-audit-rest:v1.0.16
   ```

2. Validate your Vector/external logging config — make sure you aren't sending full Kubernetes objects unless you absolutely need to.
3. Audit your existing Elastic/audit logs for previously-leaked secrets and rotate any secrets you find.

References

- GitHub Repo: kube-audit-rest
- CVE-2025-24884 on NVD
- Full Elastic Stack Example (vector config)
- Version 1.0.16 Release Notes

Final Advice

Kubernetes audit tools are powerful, but they can become a risk in their own right if misconfigured: treat audit logs as sensitive data, restrict who can read them, and make sure secret material is redacted before it leaves the cluster.

Timeline

Published on: 01/29/2025 21:15:21 UTC