
If you're a software developer in 2026, you're almost certainly using an AI coding agent like Cursor, Claude Code, or Codex to write and ship code faster. While these tools have made engineering teams way more productive, they've also introduced a problem that most developers haven't thought about: your agent is reading your .env file, including your plaintext secrets, and sending that context to an external server as part of every request.
This post covers the shortcomings of .env files and what a better setup might look like in 2026. Hopefully, this makes for a useful read and is helpful for others who may find themselves rethinking secrets management as AI tooling becomes a bigger part of how they write code.
Where We Started
For the better part of fifteen years, the .env file was the go-to approach for managing environment variables in local development. Generate your API keys and database credentials, drop them into a plaintext file, add it to .gitignore, and move on. The simplicity is what drove adoption. There was no infrastructure to set up, no services to run, and virtually no learning curve.
But simplicity came with tradeoffs. At the end of the day, your credentials still sat in an unencrypted plaintext file, and the only thing standing between them and an accidental leak was .gitignore. With AI agents now becoming a core part of how teams write code, .gitignore is no longer enough.
The Agent Problem
AI coding agents read files across your working directory. They're designed to traverse your codebase and gather real context so they don't hallucinate, but when they come across a .env file sitting in your project, they read it like any other file.
The problem is that agents don't respect .gitignore rules. Some tools offer their own ways to exclude files from agent context (Cursor, for example, uses .cursorignore), but these are inconsistent across agents, opt-in rather than on by default, and don't address the underlying problem. The agent reads your .env, includes its contents in its context, and sends it to an external server as part of its inference request.
Consider what this looks like in practice: You open your project in an agent-enabled editor and ask the agent to help build a feature. As part of that, the agent traverses your project and accidentally pulls in your .env file, thinking it needs access to the sensitive data inside to build that feature. Before you know it, that context, including your plaintext secrets, is now part of a request sent to an inference server you don't control.
This is the unintended consequence of .env files now that AI agents are part of the development workflow. You asked a reasonable question, the agent did what it was designed to do, but your secrets were along for the ride.
The Fix: Runtime Secret Injection
Given how long .env files have been around, most developers never think to question the practice. But there is a different method worth considering: instead of loading secrets from a plaintext file, you can fetch them from a secret store like Infisical and inject them directly into your local development process at runtime.
So how does this work?
If you're open to storing your secrets externally (which is central to secrets management), you can build your own CLI tool, or use an existing one, to fetch and inject them at runtime. With this approach, your application reads them the same way it always has — via process.env, os.environ, or your runtime's equivalent — but no secrets sit in plaintext anymore. Instead, they live in memory: fetched on demand, scoped to a single process, and gone when that process exits.
This is the key point: the agent can read every file in your project, but it doesn't have direct access to the environment variables of a running process. By moving secrets out of files and into the runtime environment, you remove the agent's access path entirely.
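To illustrate the distinction, here's a minimal Python sketch of runtime injection: the secret exists only as an in-memory value merged into a child process's environment, never as a file an agent could pull into its context. The API_KEY value is a made-up placeholder.

```python
import os
import subprocess
import sys

# Spawn `command` as a child process with `secrets` merged into its
# environment. No .env file is ever written to disk.
def run_with_secrets(command, secrets):
    child_env = {**os.environ, **secrets}
    return subprocess.run(command, env=child_env, capture_output=True, text=True)

result = run_with_secrets(
    # A stand-in for your app's start command: a child that reads the variable.
    [sys.executable, "-c", "import os; print(os.environ['API_KEY'])"],
    {"API_KEY": "demo-value"},  # placeholder secret, injected at runtime
)
```

The child process sees API_KEY like any other environment variable, but there's no file in the workspace for an agent to traverse.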
What This Looks Like in Practice
You can build the infrastructure for runtime injection yourself, including a secrets store, if you have the engineering capacity. You'd need encrypted storage for secrets at rest, an authentication layer, authorization controls, audit logging, and failure handling for when the backend is unreachable. The pattern is well-understood, but it's real work to build and maintain.
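If you did want to roll your own, the core of such a tool is small. Here's a hypothetical Python skeleton: fetch_secrets is a stub standing in for an authenticated call to your secret store's API, and inject_and_run spawns the target command with the fetched secrets in its environment. The function names, stubbed values, and fail-closed behavior are all illustrative assumptions, not a real implementation.

```python
import os
import subprocess
import sys

def fetch_secrets(project, environment):
    # Stub: a real version would authenticate to your secret store,
    # fetch over TLS, and log the access for auditing.
    if environment not in ("dev", "staging", "prod"):
        raise ValueError(f"unknown environment: {environment}")
    return {"DATABASE_URL": f"postgres://{environment}.example.internal/app"}

def inject_and_run(command, project, environment):
    try:
        secrets = fetch_secrets(project, environment)
    except Exception as exc:
        # Fail closed: if the backend is unreachable, don't start the
        # app in a half-configured state.
        sys.exit(f"secret fetch failed: {exc}")
    env = {**os.environ, **secrets}
    return subprocess.run(command, env=env).returncode
```

Even this toy version hints at the real work involved: authentication, failure handling, and auditing all live in the parts that are stubbed out here.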
If you'd rather not, Infisical implements this pattern as a platform you can self-host or use as a managed service. At the CLI level, prefixing your application start command with infisical run -- <start command> authenticates, fetches your secrets, and spawns your command as a child process with those secrets injected as environment variables. Here's what that looks like across different runtimes:
infisical run --env=dev --path=/apps/frontend -- npm run dev
infisical run --env=prod --path=/apps/backend -- flask run
infisical run --env=dev --path=/apps/ -- ./mvnw spring-boot:run --quiet
You'll notice this isn't Node.js specific. The CLI injects environment variables at the OS level, so Python, Go, Rust, Java, and any other runtime that reads environment variables works without modification.
By the way, the migration from .env to runtime injection is a one-line change to your dev workflow.
Delete the .env File
The .env file served its purpose well for over a decade. It was simple, widely understood, and good enough when the only risk was an accidental git commit. But development workflows have changed, and so has the threat model. Your editor now reads every file in your project, builds context from it, and sends that context over a network. If your secrets are sitting in a plaintext file in your workspace, they're part of that context.
Runtime injection is how secrets should have worked from the start. Whether you build the infrastructure yourself or use something like Infisical, the principle is the same: secrets belong in memory, scoped to a process, for the duration of a run. Not in a plaintext file sitting in your agent's context window.
Delete the .env file. You won't miss it.

Jake Hulberg
Developer Advocate, Infisical
