Secrets Management in Kubernetes: Foundations

A practical overview of the most common secrets management patterns used in Kubernetes, from native Secrets to Sealed Secrets, the External Secrets Operator, the Secrets Store CSI Driver, and agent injection.

Transcript
When you're building applications on Kubernetes, one of the first things you need to understand is how to handle secrets: API keys, database credentials, tokens, and all the other sensitive information your applications need to connect to the systems and services they depend on.
Kubernetes gives you some resources out of the box, like Secrets and ConfigMaps, but they only cover the basics. They don't give you a complete secrets management strategy, especially as your infrastructure becomes more complex.
In this video, we're going to walk through the main patterns teams use to manage secrets in Kubernetes. We'll start with built-in options like Secrets and ConfigMaps. Then we'll look at how teams extend those with options like Sealed Secrets, External Secrets Operator, and GitOps workflows. Finally, we'll explain how modern platforms like Infisical integrate these patterns to provide centralized management, automated rotation, short-lived credentials like dynamic secrets, and stronger security controls at scale.
The goal is not to push a one-size-fits-all solution. It's to give you a clear mental model of the main patterns teams use. We'll talk about the trade-offs each one introduces and how to identify the best fit for your specific use case.
Now, before we look at any tools or extensions, it's important to start with what Kubernetes gives you out of the box. At the most basic level, Kubernetes has two resources for configuration: Secrets and ConfigMaps. ConfigMaps are intended for non-sensitive configuration like feature flags or application settings, while Secrets are meant for sensitive data such as API keys, passwords, and tokens.
From Kubernetes's perspective, though, they work almost exactly the same. In practice, teams use ConfigMaps for values that change often or are safe to expose, and then Secrets for values that require stricter access controls and auditing.
Both are key-value objects stored inside the cluster and can be injected into a running container either as environment variables or as files mounted into the file system.
One thing that's important to understand early is that Kubernetes Secrets are not encrypted by default. They're base64-encoded, which is just encoding, not encryption. I can base64-encode my secret value, take that string, and easily decode it in the terminal as well.
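The encode/decode round trip is easy to verify in any shell. The value `supersecret` below is just a placeholder, not a value from the demo:

```shell
# Base64 is reversible encoding, not encryption: anyone who can read
# the Secret manifest can recover the plaintext.
printf '%s' 'supersecret' | base64
# c3VwZXJzZWNyZXQ=

printf '%s' 'c3VwZXJzZWNyZXQ=' | base64 --decode
# supersecret
```

No key is involved at any point, which is exactly why base64 provides no confidentiality.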
Under the hood, both Secrets and ConfigMaps are stored in etcd, Kubernetes's backing data store. Their security relies on things like RBAC, encryption at rest, and access controls around who can read or modify them.
This model is simple and it works, which is why it's kind of where everyone starts. But the limitations show up pretty quickly. Secrets are scoped to individual namespaces, with no built-in way to share them across namespaces. There's no central source of truth or audit trail for who's accessed what, and rotation means manually updating and redeploying everything any time something changes.
That baseline is what we're going to look at first before moving on to more advanced patterns.
Right now, I've got a simple web app running inside Kubernetes. The app is expecting an environment variable called MY_SECRET, but Kubernetes isn't providing one yet, so the page just shows "not found."
Now, I'm going to create a native Kubernetes Secret. At the most basic level, this is just a key-value object stored in the cluster. To create a basic Kubernetes native Secret from the command line, I can run this.
We can see our app secret was created.
Now, before we apply and actually see the app consuming that secret, it's important to note: we just created that secret using the imperative method, via the CLI. There's also a declarative method, which is defining the Secret in a secret.yaml file with the kind Secret. The declarative method of creating Secrets is the most common method you're going to see in production.
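As a sketch, the imperative form is something like `kubectl create secret generic app-secret --from-literal=MY_SECRET='hello from Kubernetes'`, and a declarative secret.yaml equivalent would look roughly like this (the names `app-secret` and `MY_SECRET` match what this demo references later; the exact value is an assumption):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:                # stringData accepts plaintext; Kubernetes stores it base64-encoded under data
  MY_SECRET: "hello from Kubernetes"
```

Applied with `kubectl apply -f secret.yaml`. Note that `stringData` is only a write-time convenience; the stored object still holds the value base64-encoded.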
But note, nothing about the app changes yet. Kubernetes doesn't automatically give Secrets to workloads. To make that app-secret available, I update the deployment here. I tell Kubernetes to inject that secret into the container as an environment variable. Then I can run kubectl apply -f deployment.yaml.
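The relevant part of that deployment looks roughly like this (the container and image names are assumptions based on the demo):

```yaml
# deployment.yaml (excerpt): inject the Secret's value as an environment variable
spec:
  containers:
    - name: secrets-demo
      image: my-demo-image           # placeholder image name
      env:
        - name: MY_SECRET            # env var the app reads
          valueFrom:
            secretKeyRef:
              name: app-secret       # the Secret created above
              key: MY_SECRET         # the key inside that Secret
```

The pod only picks up the value when its container starts, which is why the rollout restart below is needed after changes.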
I can restart my secrets-demo, and I can check the status. Successfully rolled out.
If we refresh, now the app can read that secret at runtime.
This is the most basic form of secrets management in Kubernetes: a Secret stored in the cluster and passed into a container when it starts.
Okay, so now we've created a configmap.yaml. This is just a Kubernetes object for non-sensitive configuration data. We'll go ahead and apply this to the cluster so that Kubernetes knows about it.
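A minimal sketch of that configmap.yaml, assuming it reuses the same key name as the demo's Secret:

```yaml
# configmap.yaml: plain key-value configuration, stored unencoded and readable by anyone with access
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  MY_SECRET: "hello from config map"
```

Unlike a Secret, the values here are stored as plain strings, which is fine because ConfigMaps are only meant for non-sensitive data.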
Note in our deployment, we're still pointing to our secretKeyRef, that custom secret that we had created. We're going to change this to configMapKeyRef. Note our name for the ConfigMap was app-config.
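The env section of the deployment changes roughly like this (key names are assumptions based on the demo):

```yaml
# deployment.yaml (env excerpt): swap secretKeyRef for configMapKeyRef
env:
  - name: MY_SECRET
    valueFrom:
      configMapKeyRef:       # was secretKeyRef
        name: app-config     # the ConfigMap created above
        key: MY_SECRET
```

Everything else in the deployment stays the same, which underlines the point that the two resources are consumed identically.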
Once again, we'll apply it. We'll restart. We'll check our status: successfully rolled out. We'll refresh.
Now we can see the value changed from "hello from Kubernetes" to our ConfigMap value, "hello from config map."
ConfigMaps and Secrets are injected into containers the exact same way. The difference is intent. ConfigMaps are meant for non-sensitive configuration, while Secrets are meant for more sensitive values. That said, native Kubernetes Secrets are not magically secure on their own. They rely on things like RBAC and encryption at rest to actually be safe. That's why many teams move past native Secrets as their setup grows.
Secrets and ConfigMaps can also be mounted as files, but the core idea is the same. Kubernetes injects secrets into containers at runtime.
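A sketch of the file-based approach (the volume name and mount path are arbitrary choices, not from the demo):

```yaml
# Pod spec excerpt: each key of the Secret becomes a file under /etc/secrets
spec:
  containers:
    - name: secrets-demo
      volumeMounts:
        - name: secret-volume
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: secret-volume
      secret:
        secretName: app-secret   # e.g. /etc/secrets/MY_SECRET contains the value
```

One practical difference: mounted Secret files are updated by the kubelet when the Secret changes, whereas environment variables are fixed for the life of the container.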
Once teams start using Kubernetes Secrets, they quickly run into a new problem: how do you store your Kubernetes manifests in Git without exposing your secrets?
GitOps workflows are a best practice. Your infrastructure should be versioned, reviewable, and deployed from source control. But if you commit a Kubernetes Secret manifest to Git, you're just committing the actual secret value, just base64-encoded. As we saw earlier, anyone with access to the repo can decode it.
So teams face a dilemma. Either you keep your secrets out of Git entirely, which breaks GitOps, or you accept that secrets are exposed in version control.
This is where Sealed Secrets come in. Sealed Secrets are usually the first thing teams reach for to solve this Git problem. The idea is simple. You encrypt the secret before it ever reaches source control using a public key. You commit that encrypted version to Git, and then the Sealed Secrets controller running in your cluster decrypts it with its private key and creates the corresponding Kubernetes Secret.
This solves the Git problem. Your manifest can live in source control safely, and only the cluster can decrypt it. But it's important to understand: Sealed Secrets don't fundamentally change how Kubernetes handles secrets. They just add an encryption layer on top. Once decrypted, you still have a regular Kubernetes Secret with all the same limitations we talked about earlier.
Okay, so to use Sealed Secrets, the first thing we'll do is download the Sealed Secrets controller. Guys, I know we've covered a lot of resources in this video. If you want to check some of these things out yourself, I've left all of the documentation in the description.
We also need to install something called kubeseal, the tool that encrypts secrets before they get to Git.
Once that's good to go, we will create a secret.yaml file. Now, this file is just a standard Kubernetes Secret. There's nothing special about it, and this is exactly the kind of file that you don't want to be committing to Git.
Again, for Sealed Secrets, we'll use something called kubeseal. When I run kubeseal, it encrypts the secret using the cluster's public key and outputs a sealed version that's safe to commit. Only this Kubernetes cluster can decrypt it.
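The sealing step is typically something like `kubeseal --format yaml < secret.yaml > sealed-secret.yaml`, which produces a manifest along these lines (the ciphertext here is an illustrative placeholder, not real output):

```yaml
# sealed-secret.yaml: safe to commit — only the controller's private key can decrypt it
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: app-secret
  namespace: default      # sealing is scoped to name + namespace by default
spec:
  encryptedData:
    MY_SECRET: AgBy8h...  # asymmetric ciphertext (truncated placeholder)
```

Because the ciphertext is bound to the secret's name and namespace by default, a sealed secret copied to a different namespace won't decrypt, which is part of its security model.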
Now that the sealed secret exists, I can go ahead and delete this old secret file. That file was just the temporary input. Keeping it around would defeat the whole point.
I'm also deleting the original secret here so we're not accidentally using an old value. The Sealed Secrets controller will recreate it for us.
Okay, and we'll update deployment.yaml to reference our new sealed secret. valueFrom will be secretKeyRef again. Name is app-secret.
We'll reapply deployment.yaml. We'll apply our sealed-secret.yaml. Then we'll roll out, restart, and check the status. We'll refresh.
We can see that our value changed. From the application's point of view, nothing changed. It's still consuming a normal Kubernetes Secret. The only difference is how that secret moved through Git.
Sealed Secrets improve Git hygiene, but they're still Kubernetes native. Rotation is still manual, secrets still end up in the cluster, and there's no external source of truth.
Now, this is an evolution, not a full secrets management solution. Sealed Secrets solve an important problem. They let teams store Kubernetes manifests in Git without committing raw secrets. But as environments grow, teams start to hit new limitations. Secrets are still static and scoped to individual clusters and namespaces. There's no way to see or manage them from one central place. Rotation means re-encrypting, recommitting, and redeploying. Secrets ultimately still live inside Kubernetes.
At that point, many teams realize what they need is one centralized control plane, one place to manage, rotate, and audit secrets across all of their environments. So they start looking at external secret stores, systems like AWS Secrets Manager, Azure Key Vault, or Infisical.
These are built specifically for storing, rotating, and auditing secrets centrally. To fetch those secrets back into Kubernetes, teams typically use an operator, most commonly the External Secrets Operator, also known as ESO.
At a high level, ESO acts as a bridge between Kubernetes and this external secrets manager. Instead of defining secret values directly in Kubernetes manifests, you define references: where the secret lives and how to access it. ESO acts as a controller in the cluster that pulls those secrets into Kubernetes as native Kubernetes Secrets.
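A sketch of that reference-based model (the store name, backend key path, and refresh interval are assumptions, and the exact `apiVersion` depends on your ESO release):

```yaml
# ExternalSecret: a reference, not a value — ESO fetches the value from the backend
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secret
spec:
  refreshInterval: 1h              # re-sync from the external store hourly
  secretStoreRef:
    name: my-secret-store          # a SecretStore/ClusterSecretStore configured separately
    kind: SecretStore
  target:
    name: app-secret               # the native Kubernetes Secret ESO creates and keeps in sync
  data:
    - secretKey: MY_SECRET         # key in the resulting Kubernetes Secret
      remoteRef:
        key: prod/app/MY_SECRET    # path of the secret in the external backend
```

Notice that no secret value appears anywhere in this manifest, so it's safe to commit to Git as-is.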
This is a pull-based model driven by Kubernetes. The mental model to keep in mind is this: the external system is the source of truth. Kubernetes is just the consumer. Applications still consume secrets the way they always have, but ownership of the secret lives outside the cluster.
This pattern has significant benefits. You get a central source of truth, a much better rotation story, and a pattern that's widely used in production Kubernetes environments. ESO is a solid generic solution that works with many different secret backends.
But some providers also offer their own native Kubernetes operator with additional features built in. For example, Infisical provides its own Kubernetes operator. This builds on the same pattern but extends it, supporting things like real-time secret updates, dynamic secrets with leases, and native support for automatically reloading or redeploying workloads when secrets change without additional tooling.
We'll save a hands-on deep dive into ESO for another video. For now, the takeaway is this: operators like the External Secrets Operator represent the modern mainstream pattern for secrets in Kubernetes. Centralized, external, and synced in.
Now, up until now, we've been looking at patterns where Kubernetes ends up holding the secret in some form, whether that's native Secrets, Sealed Secrets, or secrets synced in from an external system.
There's an entirely different pattern that teams might choose. What if Kubernetes never stored secrets at all?
That's where the Secret Store CSI Driver comes in. Instead of syncing secrets into Kubernetes Secret objects, CSI allows secrets to be directly mounted into pods as files at runtime. There are no Kubernetes Secrets created and nothing stored in etcd. Secrets only exist inside the running pod.
The mental model here is simple. Secrets are fetched when the pod starts and mounted like a volume. From the application's point of view, it's just reading files from disk. But Kubernetes never owns the secret lifecycle.
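A sketch using the generic Secrets Store CSI Driver (the provider name and its parameters vary by backend; treat them as placeholders):

```yaml
# SecretProviderClass: tells the CSI driver which provider to use and what to fetch
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: app-secrets
spec:
  provider: my-provider        # placeholder: the provider plugin for your secrets backend
  parameters: {}               # provider-specific: which secrets to pull, auth details
---
# Pod excerpt: secrets mounted as files via the CSI driver — no Kubernetes Secret object is created
apiVersion: v1
kind: Pod
metadata:
  name: secrets-demo
spec:
  containers:
    - name: app
      image: my-demo-image     # placeholder image name
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: app-secrets
```

The application just reads files under /mnt/secrets; if the pod is deleted, the secrets go with it, since nothing was ever written to etcd.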
Infisical supports this pattern through its CSI Provider, which integrates with the Secret Store CSI Driver. Secrets are pulled directly from Infisical, authenticated using Kubernetes identities, and then mounted directly into the container file system without ever creating Kubernetes Secret objects.
This gives teams a much stronger security boundary compared to sync-based models. Of course, there are trade-offs. Secrets are file-based rather than environment variables. Secrets need to be explicitly declared, and rotation behavior depends on CSI Driver configuration. But architecturally, this is a big shift. Kubernetes is no longer storing secrets at all. It's just providing access to them at runtime.
In the CSI pattern, Kubernetes mounts secrets into a pod as files using the CSI Driver. Depending on your secrets management platform, there may be another option teams consider: the agent injector pattern.
This builds on that same idea, secrets as files, but it changes where the logic lives. For instance, if you're using Infisical for secrets management, the Infisical Kubernetes Agent Injector works differently than CSI. Kubernetes doesn't mount secrets directly. Instead, a mutating admission webhook injects an Infisical agent into the pod. That agent authenticates to Infisical and writes secrets into a shared volume, which the application reads from, just like with CSI.
Here's a mental model. With CSI, Kubernetes infrastructure mounts secrets into the pod. With agent injection, a process inside the pod manages secrets. The delivery mechanism is the same, files on disk, but responsibility shifts from Kubernetes infrastructure to the pod itself.
This gives you more control at runtime. The trade-off is added complexity. So if CSI is about infrastructure-level secret delivery, agent injection is about application-level secret management. Same output, but different ownership.
Kubernetes secrets tend to evolve the same way for most teams. You start with Kubernetes native Secrets, then you move to Sealed Secrets to make Git safer. As systems grow, teams adopt external secret stores and operators. Eventually, many land on a centralized platform like Infisical to handle rotation, access control, and auditing at scale.
There isn't a one-size-fits-all solution, but for most teams running Kubernetes in production, an operator-based model with a centralized secrets platform tends to be the best balance between security, operational simplicity, and scalability. Other patterns still make sense in specific cases, especially for smaller clusters or workflows. But as environments grow, teams generally converge on this approach.
The important thing is understanding the trade-offs at each stage so you can choose the right model for where your team is at today.
Guys, thank you so much for watching. As mentioned, I'll drop all of the documentation below in the description. This has been a high-level overview of secrets management with Kubernetes. For more content like this, like and subscribe. Appreciate you guys, and we'll see you in the next one.
Starting with Infisical is simple, fast, and free.