Why Most Kubernetes Security Reviews Never Become a Standard

Security reviews produce findings. Standards produce policy. Most teams never make the jump — not because they lack intention, but because nobody has mapped the path from one to the other.

Most teams do not fail at Kubernetes security because they lack tools. They fail because they cannot turn one risky finding into a decision, a rule, an enforcement path, and a rollout step. This is where security standardization gets real.

A security review is a snapshot. It tells you where you are right now, relative to a set of checks, on the day the review was done. It is useful. It is often necessary. But it is not a standard — and treating it as one is the most common reason the same issues appear in successive reviews, year after year.

This is not a criticism of reviews. It is a structural observation. A review is designed to find problems. A standard is designed to prevent them from recurring. These are different jobs. Conflating them produces organizations that are perpetually remediating and never standardizing.


What happens after a review ends

Walk through the typical post-review lifecycle and the problem becomes visible immediately.

The review concludes. A report is delivered — findings categorized by severity, technical detail, affected components. Leadership acknowledges the report. The security team or the platform team creates tickets for the high-severity items. Work begins.

Six weeks later, half the high-severity findings have been addressed. The rest are in a backlog with estimates attached. The medium-severity findings are in a separate backlog, lower priority. The low-severity findings have effectively been accepted as-is.

Three months later, the addressed items are closed. New deployments have gone out. Some of the new deployments have introduced the same categories of misconfiguration that the review found — not because anyone was careless, but because the rules that the review implied were never written down anywhere that new work had to conform to.

Six months later, a new review is done. It finds many of the same categories of issues in different places.

This is not dysfunction. It is the predictable behavior of a system where findings drive remediation but never drive standardization. The review did its job. The organization just did not do the job that comes after the review.


The four gaps that prevent reviews from becoming standards

Understanding why this happens requires naming the specific work that does not get done. There are four gaps, and each one is distinct.

Gap one: the decision gap.

A review finding says “this container is running as root and should not be.” It does not say “all containers in all application namespaces must run as non-root, with the following exceptions, enforced at the admission controller level, effective from this date.” That second sentence is a policy decision. It requires someone to make a call — not just to fix a finding.
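
To make the difference concrete, here is roughly what that second sentence looks like once codified. This is a minimal sketch written as a Kyverno ClusterPolicy, assuming Kyverno is the admission controller in use; the policy name and the excluded namespaces are illustrative assumptions, not prescriptive values.

```yaml
# Sketch of the codified decision, assuming Kyverno as the admission
# controller. The name and the exception list are illustrative.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce   # reject non-compliant pods at admission
  background: true                   # also report violations in existing resources
  rules:
    - name: run-as-non-root
      match:
        any:
          - resources:
              kinds: [Pod]
      exclude:
        any:
          - resources:
              # "with the following exceptions": system namespaces, for example
              namespaces: [kube-system]
      validate:
        message: "Containers must run as non-root. Set runAsNonRoot: true."
        anyPattern:
          # compliant if set once at the pod level...
          - spec:
              securityContext:
                runAsNonRoot: true
          # ...or set on every container individually
          - spec:
              containers:
                - securityContext:
                    runAsNonRoot: true
```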

The decision gap is the distance between knowing something is wrong and deciding what the rule should be going forward. Most post-review processes are built around fixing the former, with no one accountable for the latter.

Gap two: the ownership gap.

A finding can be assigned to a team and resolved. A policy needs an owner who maintains it over time — who updates it when Kubernetes changes, who reviews exceptions, who handles pushback from developers who find the control too restrictive.

Most organizations have no one in that role. The platform team might own the enforcement mechanism, but that is not the same as owning the policy. Security might own the audit, but that is not the same as owning the rollout plan. The ownership gap means that even well-intentioned policy decisions decay over time because no one is maintaining them.

Gap three: the enforcement gap.

A review finding can be fixed without any enforcement mechanism. You set runAsNonRoot: true on the affected workloads, the finding is resolved, and the review is closed. Nothing prevents the next deployment from missing that field.
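
For reference, the fix itself is a small change on each affected workload. A hypothetical pod spec, with the UID and image as placeholder values:

```yaml
# The per-workload fix described above. It resolves this finding only;
# nothing here constrains the next deployment.
apiVersion: v1
kind: Pod
metadata:
  name: example-app                # hypothetical workload name
spec:
  securityContext:
    runAsNonRoot: true   # kubelet refuses to start a container whose user is UID 0
    runAsUser: 10001     # an arbitrary non-root UID, chosen for illustration
  containers:
    - name: app
      image: registry.example.com/app:1.4.2   # placeholder image
```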

A standard requires enforcement — not just documentation. In Kubernetes, that means admission control. It means that workloads which do not meet the standard are rejected at deploy time, or at minimum surfaced visibly with a warning that must be acknowledged. Without enforcement, the standard is a recommendation, and recommendations erode.

Gap four: the context gap.

A review report documents what is wrong. It rarely documents why the control matters, what the realistic threat scenario is, what the organizational tradeoffs were in deciding to enforce it, or what a legitimate exception looks like versus a bad habit.

That context is what makes a standard adoptable. Without it, developers who encounter an enforcement failure do not understand why the rule exists. They find the fastest path to compliance — which is often not the path that addresses the underlying risk. Or they escalate for an exception, and the exception is granted because no one can articulate clearly why it should not be.


Why the gap is harder to close than it looks

Each of the four gaps requires different work to close, and that work is harder than it looks from the outside.

The decision gap requires making calls that have organizational consequences. Deciding that all production containers must run as non-root means that some existing workloads will need to change. Some of those workloads belong to teams that will push back. The person making the policy decision needs enough authority to hold the decision, and enough context to tell legitimate pushback from simple resistance to change.

The ownership gap requires building a role that does not exist in most organizations. “Security standard owner” is not a title. The work falls somewhere between security engineering, platform engineering, and compliance — and in the absence of a clear owner, it falls to no one.

The enforcement gap requires investment in admission control infrastructure that teams often deprioritize. Gatekeeper, Kyverno, Pod Security Admission — each of these requires setup, maintenance, exception handling, and integration with the deployment pipeline. The work is not enormous, but it is real work that competes with other priorities.
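
Of the three, Pod Security Admission is the lowest-setup option: it has shipped as part of Kubernetes itself since v1.25 and is configured with nothing more than namespace labels. A sketch, with the namespace name and pinned version as assumptions:

```yaml
# Pod Security Admission configured per namespace. Built into Kubernetes
# v1.25+; no extra controller to install. Namespace name is hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.29   # pin to avoid surprise tightening on upgrades
```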

The context gap requires documentation effort that most teams skip. Writing down why a control exists, what the threat scenario is, what a legitimate exception looks like — this is unglamorous work. It does not close findings. It does not improve a metric. It is the kind of work that pays off eighteen months later when a new engineer can understand the system without asking someone who already knows.


What the transition actually requires

The organizations that successfully move from review to standard do a small number of things consistently.

They treat the review output as raw material, not final output. The findings from the review become the starting point for a policy design conversation — not a remediation checklist. Someone is responsible for looking at the findings and asking: which of these should be codified as permanent policy? At what enforcement level? With what exceptions?

They assign a named owner to each policy area. Not a team — a named person who is accountable for the policy being maintained, enforced, and explained. This person reviews exception requests. This person updates the policy when Kubernetes releases a change that affects the control.

They build enforcement before they declare compliance. They do not close a finding when the fix lands; they close it when the enforcement mechanism exists as well. A container that was running as root and is now running as non-root is fixed. That fix is not a standard until an admission policy prevents the next deployment from running as root.

They write the context down. Every policy decision has a written rationale that answers four questions: what is the control, why does it exist, what is the realistic consequence of not having it, and what does a legitimate exception look like? This documentation lives somewhere that new team members can find it without asking.
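
One lightweight way to capture this is a short rationale record per policy, answering the four questions in a fixed shape. A hypothetical example for the non-root policy; the field names and wording are illustrative assumptions, not a prescribed format:

```yaml
# A hypothetical rationale record. Field names and wording are
# illustrative, not a prescribed format.
policy: require-run-as-non-root
control: >-
  All containers in application namespaces must set runAsNonRoot: true,
  enforced at admission.
why: >-
  Root inside a container widens the blast radius of any compromise: the
  attacker starts with full privileges in the container and a shorter
  path to the node.
consequence_without_it: >-
  A single vulnerable workload running as root turns a routine
  application bug into a potential cluster foothold.
legitimate_exception: >-
  A third-party image that cannot be rebuilt, isolated in a dedicated
  namespace with network policies restricting its reach.
```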


Why the baseline review comes first

One reason the review-to-standard transition fails is that organizations try to go from zero to full standard in one step. That step is too large: the baseline review and the policy design conversation end up happening simultaneously, so the policy decisions rest on no agreed picture of the current state.

The more reliable path separates these into distinct phases. First, establish the baseline — a clear picture of the current state, prioritized by risk, with findings mapped to the controls that a standard would need to address. Second, design the policy set — starting from the highest-priority gaps, decide which controls should be codified and at what enforcement level. Third, build the rollout plan — a staged approach that moves from warn to audit to enforce, with developer communication at each stage.
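
The three rollout stages map directly onto admission modes. With Pod Security Admission, for example, each stage is a namespace label, so the progression is a label change rather than a migration; the sketch below assumes PSA, but Kyverno and Gatekeeper offer equivalent audit and enforce toggles.

```yaml
# Staged rollout expressed as Pod Security Admission modes. Each stage
# adds a label; the namespace name is hypothetical.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    # Stage 1: developers see a warning at deploy time; nothing is blocked.
    pod-security.kubernetes.io/warn: restricted
    # Stage 2: violations are also recorded in the API server audit log.
    pod-security.kubernetes.io/audit: restricted
    # Stage 3: non-compliant pods are rejected at admission.
    pod-security.kubernetes.io/enforce: restricted
```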

The output is a standard that teams understand, can comply with, and know how to maintain — rather than a set of fixes that address today’s findings and leave tomorrow’s open.


A practical test

If you want to know whether your organization has a standard or just a review history, there is one question that cuts through the noise quickly:

If a developer deploys a new service tomorrow, how does it come into compliance with your security standard?

If the answer is “it gets reviewed in the next audit cycle” — you have a review process, not a standard.

If the answer is “the admission controller rejects it if it does not meet the policy, and the developer sees a clear error that tells them what to fix” — you have a standard.

Everything in between is progress. But knowing where you are is the starting point for knowing what work is left.


The Kubernetes Security Baseline Review is built specifically for this transition — it gives you a prioritized picture of your current posture and maps the gaps that a real standard would need to address, before you commit to enforcement or new tooling.

Not sure where your team actually stands? Start with a Baseline Review.

The Baseline Review gives you a clear picture of your current posture, what matters most, and which product should come next. Delivered async in 5–7 business days.