Secrets Management in Kubernetes: Foundations
A practical overview of the most common secrets management patterns used in Kubernetes, from native Secrets to Sealed Secrets, the External Secrets Operator, the Secrets Store CSI driver, and agent injection.
When you're building applications on Kubernetes, one of the first things you need to understand is how to handle secrets—API keys, database credentials, tokens, and all the sensitive information your applications need to connect to the systems and services they depend on. Kubernetes gives you some resources out of the box, like Secrets and ConfigMaps, but they only cover the basics. They don’t give you a complete secrets management strategy, especially as your infrastructure becomes more complex.
In this video, we’re going to walk through the main patterns teams use to manage secrets in Kubernetes. We’ll start with built-in options like Secrets and ConfigMaps. Then we’ll look at how teams extend those with tools like Sealed Secrets and the External Secrets Operator, often within GitOps workflows. Finally, we’ll explain how modern platforms like Infisical integrate these patterns to provide centralized management, automated rotation, short-lived credentials like dynamic secrets, and stronger security controls at scale. The goal isn’t to push a one-size-fits-all solution. It’s to give you a clear mental model of the main patterns teams use, the trade-offs each one introduces, and how to identify the right approach for your specific use case.
Before we look at any tools or extensions, it’s important to start with what Kubernetes gives you out of the box. At the most basic level, Kubernetes has two resources for configuration: Secrets and ConfigMaps. ConfigMaps are intended for non-sensitive configuration like feature flags or application settings, while Secrets are meant for sensitive data such as API keys, passwords, and tokens. From Kubernetes’ perspective, though, they work almost exactly the same.
In practice, teams use ConfigMaps for values that change often or are safe to expose, and Secrets for values that require stricter access controls and auditing. Both are key-value objects stored inside the cluster and can be injected into running containers either as environment variables or as files mounted into the filesystem. One important thing to understand early is that Kubernetes Secrets are not encrypted by default. They’re base64-encoded, which is just encoding, not encryption. Anyone with access can easily decode those values. Under the hood, both Secrets and ConfigMaps are stored in etcd, Kubernetes’ backing datastore, and their security relies on RBAC, encryption at rest, and access controls around who can read or modify them.
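To make that concrete, here is a minimal sketch of a native Secret and a pod consuming it. The names (db-credentials, DB_PASSWORD, demo-app) and the image are hypothetical placeholders for illustration:

```yaml
# A plain Kubernetes Secret. The value is only base64-encoded, not encrypted.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials            # hypothetical name
  namespace: default
type: Opaque
data:
  DB_PASSWORD: c3VwZXItc2VjcmV0   # base64 of "super-secret"
---
# Consuming the Secret in a pod, either as an environment variable
# or as a file mounted into the container's filesystem.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx                # placeholder image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
      volumeMounts:
        - name: creds
          mountPath: /etc/creds
          readOnly: true
  volumes:
    - name: creds
      secret:
        secretName: db-credentials
```

Anyone with read access to the Secret can recover the value, for example with `kubectl get secret db-credentials -o jsonpath='{.data.DB_PASSWORD}' | base64 -d`.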
This model is simple, and it works, which is why it’s where everyone starts. But the limitations show up quickly. Secrets are scoped to individual namespaces, there’s no centralized source of truth or global audit trail, and rotation means manually updating values and redeploying workloads whenever something changes. That baseline is what we build from before moving on to more advanced patterns.
Once teams start using Kubernetes Secrets, they quickly run into another problem: how do you store your Kubernetes manifests in Git without exposing your secrets? GitOps workflows are considered best practice. Your infrastructure should be versioned, reviewable, and deployed from source control. But if you commit a Secret manifest to Git, you’re committing the actual secret value, just base64-encoded. Anyone with access to the repository can decode it. So teams face a dilemma: either keep secrets out of Git entirely, which breaks GitOps, or accept that secrets are exposed in version control.
This is where Sealed Secrets come in. The idea is simple. You encrypt the secret before it ever reaches source control using a public key. You commit the encrypted version to Git, and a Sealed Secrets controller running in your cluster decrypts it at runtime using the private key. This solves the Git problem. Your manifest can live in source control safely, and only the cluster can decrypt it. But it’s important to understand that Sealed Secrets don’t fundamentally change how Kubernetes handles secrets. They just add an encryption layer on top. Once decrypted, you still end up with a regular Kubernetes Secret with all the same limitations we talked about earlier. Rotation is still manual, secrets still live inside the cluster, and there’s still no centralized source of truth.
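As a rough sketch of that workflow, assuming the Sealed Secrets controller is installed in the cluster and reusing the hypothetical db-credentials Secret from earlier: running something like `kubeseal --format yaml < db-credentials.yaml > sealed-db-credentials.yaml` encrypts each value with the controller's public key and produces a SealedSecret manifest that is safe to commit to Git:

```yaml
# A SealedSecret produced by kubeseal. Only the controller's private key,
# held inside the cluster, can decrypt the values.
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: default
spec:
  encryptedData:
    DB_PASSWORD: AgBy3i4OJSWK...   # truncated ciphertext, shown for illustration
  template:
    type: Opaque
    metadata:
      name: db-credentials
      namespace: default
```

When this manifest is applied, the controller decrypts it and creates an ordinary db-credentials Secret in the namespace, which applications consume exactly as before.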
As environments grow, teams often realize they need something more than just encrypting secrets in Git. They need a centralized control plane—one place to manage, rotate, and audit secrets across all environments. That’s when they start looking at external secret stores like AWS Secrets Manager, Azure Key Vault, HashiCorp Vault, or Infisical. These systems are designed specifically for storing, rotating, and auditing secrets centrally.
To bring those secrets into Kubernetes, teams typically use an operator, most commonly the External Secrets Operator, or ESO. At a high level, ESO acts as a bridge between Kubernetes and the external secrets manager. Instead of defining secret values directly in Kubernetes manifests, you define references to where the secret lives and how to access it. ESO runs as a controller in the cluster and pulls those secrets in, creating native Kubernetes Secret objects for applications to consume. The mental model here is that the external system is the source of truth and Kubernetes is just the consumer. Applications still consume secrets the same way they always have, but ownership of the secret lifecycle lives outside the cluster. This pattern provides centralized management, better rotation capabilities, and stronger auditing, and it’s widely used in production Kubernetes environments.
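The manifests below are a sketch of that pattern using ESO's SecretStore and ExternalSecret resources against AWS Secrets Manager. The store name, secret path, and service account are hypothetical, and the provider block would look different for Vault, Azure Key Vault, Infisical, or other backends:

```yaml
# A SecretStore tells ESO how to reach the external secrets manager.
apiVersion: external-secrets.io/v1beta1   # API version may differ by ESO release
kind: SecretStore
metadata:
  name: aws-secrets-manager
  namespace: default
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1
      auth:
        jwt:
          serviceAccountRef:
            name: eso-sa                  # hypothetical service account trusted by the store (e.g. via IRSA)
---
# An ExternalSecret references where the secret lives; ESO creates and keeps
# a native Kubernetes Secret ("db-credentials") in sync with it.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
  namespace: default
spec:
  refreshInterval: 1h                     # how often ESO re-reads the external store
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: db-credentials                  # the Kubernetes Secret ESO will create
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: prod/db                      # hypothetical path in the external store
        property: password
```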
Some platforms build on this pattern by providing their own Kubernetes operators with additional features. For example, Infisical offers an operator that supports real-time secret updates, dynamic secrets with leases, and automatic workload reloads when secrets change. The core idea remains the same: secrets are centrally managed and synchronized into Kubernetes.
Up until now, all of these patterns have something in common: Kubernetes ultimately stores the secret in some form, whether it’s native Secrets, decrypted Sealed Secrets, or Secrets synced in from an external system. There’s another pattern that changes that entirely. What if Kubernetes never stored the secrets at all?
That’s where the Secrets Store CSI driver comes in. Instead of syncing secrets into Kubernetes Secret objects, the CSI driver mounts secrets directly into pods as files at runtime. No Kubernetes Secrets are created, and nothing is stored in etcd. Secrets only exist inside the running pod. From the application’s perspective, it simply reads files from disk. Architecturally, this shifts Kubernetes from being a storage layer for secrets to being a delivery mechanism. This can provide a stronger security boundary, but it comes with trade-offs. Secrets are delivered as files rather than environment variables, and rotation behavior depends on the CSI driver’s configuration.
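Here is a hedged sketch of the CSI pattern using the Vault provider as an example. The address, role, and secret paths are hypothetical, and other providers (AWS, Azure, GCP, and so on) use the same SecretProviderClass shape with different parameters:

```yaml
# A SecretProviderClass describes which secrets to fetch and from where.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
  namespace: default
spec:
  provider: vault
  parameters:
    vaultAddress: "https://vault.example.internal:8200"   # hypothetical address
    roleName: "demo-app"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/prod/db"
        secretKey: "password"
---
# The pod mounts the secrets as files via the CSI driver; no Kubernetes Secret
# object is created and nothing lands in etcd.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
    - name: app
      image: nginx                    # placeholder image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets     # the app reads /mnt/secrets/db-password
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```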
A related approach is the agent injector pattern. Instead of Kubernetes mounting secrets directly, a mutating admission webhook injects an agent into the pod. That agent authenticates to the external secrets platform, fetches secrets, and writes them into a shared volume that the application reads from. The end result is similar—secrets delivered as files—but the responsibility shifts. With CSI, Kubernetes infrastructure handles delivery. With agent injection, a process inside the pod manages secrets. This gives you more runtime flexibility but adds operational complexity.
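As an illustration of what this looks like in practice, here is a hedged sketch based on the Vault Agent Injector, which drives injection through pod annotations. The role, secret path, and application names are hypothetical, and injectors for other platforms follow the same general mechanism with their own annotations:

```yaml
# A Deployment whose pod template is annotated for agent injection. The mutating
# webhook adds an agent sidecar that authenticates to the secrets platform,
# fetches the secret, and writes it to a shared volume the app can read.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "demo-app"   # role bound to the pod's service account
        vault.hashicorp.com/agent-inject-secret-db-creds: "secret/data/prod/db"
    spec:
      serviceAccountName: demo-app
      containers:
        - name: app
          image: nginx               # placeholder image
          # The application reads the rendered file at /vault/secrets/db-creds.
```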
Across all of these patterns, there’s a clear evolution. Teams typically start with native Kubernetes Secrets. As they adopt GitOps, they add Sealed Secrets to make storing manifests in Git safer. As their systems grow, they move to external secret stores and operators to centralize management and improve rotation. Eventually, many converge on a centralized secrets platform to handle access control, auditing, and dynamic credentials at scale.
There isn’t a one-size-fits-all solution. Smaller clusters or simpler environments may be perfectly fine with native Secrets or Sealed Secrets. But for most teams running Kubernetes in production at scale, an operator-based model backed by a centralized secrets platform offers the best balance between security, operational simplicity, and scalability. The most important thing is understanding the trade-offs at each stage so you can choose the model that fits where your team is today.
