Reporting

View and audit Kyverno policy results with reports.

Kyverno policy reports are Kubernetes resources that provide information about policy results, including violations. Kyverno creates a policy report for each Namespace and a single cluster-level report for cluster-scoped resources.

Result entries are added to reports whenever a resource is created that violates one or more rules and the applicable rule sets validationFailureAction=audit. In enforce mode, by contrast, the resource is blocked immediately upon creation, so no entry is created since no offending resource exists. If a created resource violates multiple rules, the reports will contain one entry per violated rule for that resource. Likewise, when a resource is deleted, its entries are removed from the report.

There are two types of reports created and updated by Kyverno: a ClusterPolicyReport (for cluster-scoped resources) and a PolicyReport (for Namespaced resources). The contents of these reports are determined by the scope of the violating resources, not by where the rule is stored. For example, if a rule validates Ingress resources, then because Ingress is a Namespaced resource, any violations will show up in a PolicyReport co-located in the same Namespace as the offending resource itself, regardless of whether that rule was written in a Policy or a ClusterPolicy.
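
For example, assuming an Ingress-validating rule has flagged an Ingress in a Namespace named apps (a hypothetical Namespace used here purely for illustration), the resulting entry would be found in that Namespace's report:

kubectl get policyreport -n apps   # "apps" is a hypothetical Namespace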

Kyverno uses the policy report schema published by the Kubernetes Policy WG, which proposes a common policy report format across Kubernetes tools.
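
To see which report resource types are registered in your cluster (the exact API group and version depend on your Kyverno release), you can list them with:

kubectl api-resources | grep -i policyreport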

Viewing policy report summaries

You can view a summary of the Namespaced policy reports using the following command:

kubectl get policyreport -A

For example, here are the policy reports for a small test cluster (polr is the shortname for policyreports):

$ kubectl get polr -A
NAMESPACE     NAME                  PASS   FAIL   WARN   ERROR   SKIP   AGE
default       polr-ns-default       338    2      0      0       0      28h
flux-system   polr-ns-flux-system   135    5      0      0       0      28h

Similarly, you can view the cluster-wide report using:

kubectl get clusterpolicyreport

Viewing policy violations

Since the report provides information on all rule and resource execution, finding policy violations requires an additional filter.

Here is a command to view policy violations for the default Namespace:

kubectl describe polr polr-ns-default | grep "Result: \+fail" -B10

Running this in the test cluster shows two Pods whose containers do not set runAsNonRoot: true.

$ kubectl describe polr polr-ns-default | grep "Result: \+fail" -B10
  Message:        validation error: Running as root is not allowed. The fields spec.securityContext.runAsNonRoot, spec.containers[*].securityContext.runAsNonRoot, and spec.initContainers[*].securityContext.runAsNonRoot must be `true`. Rule check-containers[0] failed at path /spec/securityContext/runAsNonRoot/. Rule check-containers[1] failed at path /spec/containers/0/securityContext/.
  Policy:         require-run-as-non-root
  Resources:
    API Version:  v1
    Kind:         Pod
    Name:         add-capabilities-init-containers
    Namespace:    default
    UID:          1caec743-faed-4d5a-90f7-5f4630febd58
  Rule:           check-containers
  Scored:         true
  Result:         fail
--
  Message:        validation error: Running as root is not allowed. The fields spec.securityContext.runAsNonRoot, spec.containers[*].securityContext.runAsNonRoot, and spec.initContainers[*].securityContext.runAsNonRoot must be `true`. Rule check-containers[0] failed at path /spec/securityContext/runAsNonRoot/. Rule check-containers[1] failed at path /spec/containers/0/securityContext/.
  Policy:         require-run-as-non-root
  Resources:
    API Version:  v1
    Kind:         Pod
    Name:         sysctls
    Namespace:    default
    UID:          b98bdfb7-10e0-467f-a51c-ac8b75dc2e95
  Rule:           check-containers
  Scored:         true
  Result:         fail

To view all Namespaced violations in a cluster, use:

kubectl describe polr -A | grep -i "Result: \+fail" -B10
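
As an alternative sketch, if jq is installed, the failed results can be extracted as structured JSON rather than relying on grep context lines:

kubectl get polr -A -o json | jq '.items[].results[]? | select(.status == "fail")'   # requires jq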

Example: Trigger a PolicyReport

By default, a PolicyReport object exists in every Namespace regardless of whether any Kyverno Policy objects also exist in that Namespace. The PolicyReport itself is empty (i.e., without any results) until there are Kubernetes resources which trigger the report. A resource will appear in the report if it produces any of the pass, fail, warn, skip, or error results.
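
Before any matching resources exist, inspecting a Namespace's report shows all summary counts at zero and no results (a sketch; report names follow the polr-ns-<Namespace> convention seen earlier):

kubectl get polr -n default -o yaml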

As an example, take the default Namespace, which currently has no Pods present.

$ kubectl get pods -n default
No resources found in default namespace.

A single Kyverno ClusterPolicy exists with a single rule which ensures Pods cannot mount Secrets as environment variables.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: secrets-not-from-env-vars
spec:
  background: true
  validationFailureAction: audit
  rules:
  - name: secrets-not-from-env-vars
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Secrets must be mounted as volumes, not as environment variables."
      pattern:
        spec:
          containers:
          - name: "*"
            =(env):
            - =(valueFrom):
                X(secretKeyRef): "null"
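
Assuming the manifest above is saved to a file named secrets-not-from-env-vars.yaml (an illustrative filename), apply it with:

kubectl apply -f secrets-not-from-env-vars.yaml   # illustrative filename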

Creating a Pod in this Namespace which does not use any Secrets (and thereby does not violate the secrets-not-from-env-vars rule in the ClusterPolicy) will generate the first entry in the PolicyReport, listed as a PASS.

$ kubectl run busybox --image busybox:1.28 -- sleep 9999
pod/busybox created

$ kubectl get po
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          66s

$ kubectl get polr -o wide
NAME              KIND   NAME   PASS   FAIL   WARN   ERROR   SKIP   AGE
polr-ns-default                 1      0      0      0       0      28h

Inspect the PolicyReport in the default Namespace to view its contents. Notice that the busybox Pod is listed as having passed.

$ kubectl get polr polr-ns-default -o yaml

<snipped>
results:
- message: validation rule 'secrets-not-from-env-vars' passed.
  policy: secrets-not-from-env-vars
  resources:
  - apiVersion: v1
    kind: Pod
    name: busybox
    namespace: default
    uid: 7b71dc2a-e945-4100-b392-7c137b6f17d5
  rule: secrets-not-from-env-vars
  scored: true
  status: pass
summary:
  error: 0
  fail: 0
  pass: 1
  skip: 0
  warn: 0

Create another Pod which violates the rule in the sample policy. Because the rule is written with validationFailureAction: audit, resources which violate the rule are still allowed to be created. When this occurs, another entry will be created in the PolicyReport which denotes this condition as a FAIL. By contrast, with validationFailureAction: enforce, an attempt to create an offending resource would be immediately blocked and therefore would not generate another entry in the report.

apiVersion: v1
kind: Pod
metadata:
  name: secret-pod
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    env:
    - name: SECRET_STUFF
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: mysecretname
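
Assuming this manifest is saved as secret-pod.yaml (again, an illustrative filename), create the Pod with:

kubectl apply -f secret-pod.yaml   # illustrative filename

The referenced Secret mysecret does not need to exist for the Pod to be admitted; the rule evaluates only the Pod spec, though the container will not start until the Secret is present.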

Since the above Pod spec was allowed and it violated the rule, there should now be a failure entry in the PolicyReport in the default Namespace.

$ kubectl get polr polr-ns-default -o yaml

<snipped>
results:
- message: validation rule 'secrets-not-from-env-vars' passed.
  policy: secrets-not-from-env-vars
  resources:
  - apiVersion: v1
    kind: Pod
    name: busybox
    namespace: default
    uid: 7b71dc2a-e945-4100-b392-7c137b6f17d5
  rule: secrets-not-from-env-vars
  scored: true
  status: pass
- message: 'validation error: Secrets must be mounted as volumes, not as environment
    variables. Rule secrets-not-from-env-vars failed at path /spec/containers/0/env/0/valueFrom/secretKeyRef/'
  policy: secrets-not-from-env-vars
  resources:
  - apiVersion: v1
    kind: Pod
    name: secret-pod
    namespace: default
    uid: 6b6262a1-318b-45f8-979b-3e35201b6d64
  rule: secrets-not-from-env-vars
  scored: true
  status: fail
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0

Lastly, delete the Pod called secret-pod and once again check the PolicyReport object.

$ kubectl delete po secret-pod
pod "secret-pod" deleted

$ kubectl get polr polr-ns-default -o wide
NAME              KIND   NAME   PASS   FAIL   WARN   ERROR   SKIP   AGE
polr-ns-default                 1      0      0      0       0      28h

Notice that the previously failed entry was removed from the PolicyReport when the violating Pod was deleted.
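
As a quick check, a jsonpath query against the summary fields shown earlier confirms the failure count has returned to zero:

kubectl get polr polr-ns-default -o jsonpath='{.summary.fail}'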

Example: Trigger a ClusterPolicyReport

A ClusterPolicyReport is the same concept as a PolicyReport except that it contains resources which are cluster-scoped rather than Namespaced.

As an example, create the following sample ClusterPolicy containing a single rule which validates that all new Namespaces contain a label named thisshouldntexist with some value. Notice that validationFailureAction is set to audit and background to true in this ClusterPolicy.

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-ns-labels
spec:
  validationFailureAction: audit
  background: true
  rules:
  - name: check-for-labels-on-namespace
    match:
      resources:
        kinds:
        - Namespace
    validate:
      message: "The label `thisshouldntexist` is required."
      pattern:
        metadata:
          labels:
            thisshouldntexist: "?*"
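
Assuming the manifest is saved as require-ns-labels.yaml (an illustrative filename), apply it with:

kubectl apply -f require-ns-labels.yaml   # illustrative filename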

After creating this sample ClusterPolicy, check for the existence of a ClusterPolicyReport object (cpolr is the shortname for clusterpolicyreports).

$ kubectl get cpolr
NAME                  PASS   FAIL   WARN   ERROR   SKIP   AGE
clusterpolicyreport   0      3      0      0       0      27h

Notice that a default ClusterPolicyReport named clusterpolicyreport exists with three failures.

The ClusterPolicyReport, when inspected, has the same structure as the PolicyReport object and contains entries in the results and summary objects with the outcomes of a policy audit.

results:
- message: 'validation error: The label `thisshouldntexist` is required. Rule check-for-labels-on-namespace
    failed at path /metadata/labels/thisshouldntexist/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: argocd
    uid: 0b139fa6-ea7f-43ab-9619-03ab430811ec
  rule: check-for-labels-on-namespace
  scored: true
  status: fail
- message: 'validation error: The label `thisshouldntexist` is required. Rule check-for-labels-on-namespace
    failed at path /metadata/labels/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: tkg-system-public
    uid: 431c10e5-3926-47d8-963f-c76a65f9b84d
  rule: check-for-labels-on-namespace
  scored: true
  status: fail
- message: 'validation error: The label `thisshouldntexist` is required. Rule check-for-labels-on-namespace
    failed at path /metadata/labels/'
  policy: require-ns-labels
  resources:
  - apiVersion: v1
    kind: Namespace
    name: default
    uid: d5f89d01-2d44-4957-bc86-a9aa757bd311
  rule: check-for-labels-on-namespace
  scored: true
  status: fail
summary:
  error: 0
  fail: 3
  pass: 0
  skip: 0
  warn: 0
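
To extract just the names of the failing Namespaces, a sketch using jq (assuming it is installed):

kubectl get cpolr clusterpolicyreport -o json | jq -r '.results[]? | select(.status == "fail") | .resources[].name'   # requires jq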
