👋 Hey there, I’m Dheeraj Choudhary, an AI/ML educator, cloud enthusiast, and content creator on a mission to simplify tech for the world.
After years of building on YouTube and LinkedIn, I’ve finally launched TechInsight Neuron: a no-fluff, insight-packed newsletter where I break down the latest in AI, Machine Learning, DevOps, and Cloud.
What to expect: actionable tutorials, tool breakdowns, industry trends, and career insights, all crafted for engineers, builders, and the curious.
If you're someone who learns by doing and wants to stay ahead in the tech game, you're in the right place.
Introduction
Every application needs configuration. Database connection strings, API keys, service URLs, feature flags, and credentials all have to get into the container somehow. How you handle that in Docker determines whether your application is secure, portable, and easy to operate, or whether you're one accidental Git commit away from a credential leak.
Docker gives you several mechanisms for passing configuration to containers, from simple environment variables to encrypted secrets mounted as files. Choosing between them is not just a matter of convenience. It's a security decision with real consequences. Environment variables are visible to anyone who can run docker inspect. Secrets baked into a Dockerfile become part of every image layer, readable long after you think they're gone.
This guide covers every mechanism for passing configuration into Docker containers, explains exactly why environment variables are the wrong tool for sensitive data, and walks through the right approaches for development, single-host production, and cloud-native production environments.
Why Configuration Management Matters
Configuration that lives inside your application code is a problem the moment you want to run that same code in two different places. The database URL for your laptop is different from staging, which is different from production. If it's hardcoded, you need different builds for each environment. That breaks reproducibility, the core promise of containers.
The standard solution is to externalize configuration: the application code reads its settings from the environment at startup rather than having them embedded at build time. This is the twelve-factor app principle, and Docker is built around it. The same image runs in development, staging, and production. Only the configuration passed in at runtime differs.
The practical failure mode is treating all configuration the same. Non-sensitive configuration like NODE_ENV=production, PORT=3000, or LOG_LEVEL=info can reasonably live in environment variables. Sensitive configuration like database passwords, API keys, TLS certificates, and OAuth secrets cannot, for reasons that are specific and concrete.

Environment Variables: The Three Ways to Set Them
1. Inline with -e on the command line
docker run -e NODE_ENV=production -e PORT=3000 my-app
# Multiple variables
docker run \
-e DATABASE_URL=postgres://db:5432/myapp \
-e API_KEY=abc123 \
-e NODE_ENV=production \
my-app
Variables passed with -e are available immediately inside the container as standard environment variables. This works fine for non-sensitive config in quick testing. For anything you'd run regularly, the command becomes unwieldy and puts config values in shell history.
2. From a file with --env-file
docker run --env-file .env my-app
The .env file format is simple: one KEY=VALUE per line; lines starting with # are comments:
NODE_ENV=production
PORT=3000
DATABASE_URL=postgres://db:5432/myapp
# This is a comment
LOG_LEVEL=info
Docker reads the file and sets each variable in the container. The values never appear in the shell or in ps output, which is marginally better than inline -e for sensitive values, but still fully visible via docker inspect.
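To make the format concrete, here is a minimal sketch of that parsing logic in Python. This is our own illustrative helper (parse_env_file is a hypothetical name), not Docker's actual implementation, which handles some extra edge cases:

```python
def parse_env_file(text):
    """Parse env-file text: one KEY=VALUE per line; blank lines and
    lines starting with '#' are skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value
    return env

sample = """NODE_ENV=production
PORT=3000
# This is a comment
LOG_LEVEL=info
"""
print(parse_env_file(sample))
# → {'NODE_ENV': 'production', 'PORT': '3000', 'LOG_LEVEL': 'info'}
```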
3. Via ENV in the Dockerfile
ENV NODE_ENV=production
ENV PORT=3000ENV instructions set default values baked into the image itself. Containers started from the image get these values unless overridden at runtime with -e. This is appropriate for non-sensitive defaults that should always be set, like NODE_ENV or PORT. Never use ENV for secrets. The value becomes part of every image layer and is visible in docker history and docker inspect.
Reading Environment Variables in Application Code
Inside the container, environment variables are just standard OS environment variables. Every language reads them the same way:
// Node.js
const dbUrl = process.env.DATABASE_URL;
const port = process.env.PORT || 3000;
# Python
import os
db_url = os.environ.get('DATABASE_URL')
port = int(os.environ.get('PORT', 3000))
// Go
import "os"
dbURL := os.Getenv("DATABASE_URL")
# Shell
echo $DATABASE_URL
Always provide sensible defaults for non-critical config where the application can function without an explicit value. For critical config like database URLs, fail loudly at startup if the variable is missing rather than continuing with a broken default. A startup crash with a clear error message is far easier to debug than mysterious runtime failures.
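A sketch of that fail-fast pattern in Python (the require_env helper name is ours, not a standard library function):

```python
import os

def require_env(name):
    """Fail loudly at startup if a critical variable is missing,
    instead of limping along with a broken default."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Critical config: crash at startup with a clear message if absent
# DATABASE_URL = require_env("DATABASE_URL")

# Non-critical config: a sensible default is fine
PORT = int(os.environ.get("PORT", "3000"))
```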
The .env File: Keeping Secrets Off the Command Line
The .env file is a plain text file in your project root that holds key-value pairs. Docker Compose loads it automatically. You can also reference it explicitly with docker run --env-file .env.
# .env
NODE_ENV=production
PORT=3000
DB_PASSWORD=my_secure_password
API_KEY=real_api_key_here
REDIS_PASSWORD=redis_secret
Two rules apply without exception:
1. Add .env to .gitignore. Always. No exceptions. One accidental commit of a .env file with real credentials is a security incident, and credentials committed to Git never fully go away even after deletion, since they live in the commit history.
2. Commit a .env.example file instead. This gives teammates a template showing what variables are needed without exposing the actual values:
# .env.example - copy to .env and fill in real values
NODE_ENV=
PORT=3000
DB_PASSWORD=
API_KEY=
REDIS_PASSWORD=
The .env.example lives in version control. The .env does not.
Environment Variables in Docker Compose
Compose provides two ways to set environment variables for a service.
Inline in the compose file
services:
api:
image: my-app
environment:
NODE_ENV: production
PORT: 3000
LOG_LEVEL: info
Interpolated from the host shell or .env file
services:
api:
image: my-app
environment:
API_KEY: ${API_KEY}
DB_PASSWORD: ${DB_PASSWORD:-default_value}
Compose automatically loads the .env file from the same directory as the compose file and makes its values available for ${VAR} interpolation throughout the compose file. This means the sensitive values stay in .env (not in version control) while the compose file (which is in version control) only contains the variable names and structure.
You can also use env_file to load variables directly into a service container:
services:
api:
image: my-app
env_file:
- .env # loaded first
- .env.local # overrides .env values
The difference between .env (for compose file interpolation) and env_file (for container environment): .env populates ${VAR} references inside the compose file itself. env_file passes the file's contents directly as environment variables into the container. Both load from the same file format; they just serve different purposes.

Why Environment Variables Are Not Secure for Secrets
Environment variables feel safe. They're not in the source code. They're not hardcoded. But they have a specific and well-documented set of exposure vectors that make them wrong for sensitive data.
Visible in docker inspect
Anyone with access to the Docker daemon can see every environment variable of every container in plain text:
docker inspect my-container
# Output includes:
# "Env": [
# "DB_PASSWORD=my_actual_password",
# "API_KEY=real_api_key_here",
# ...
# ]
This means any developer with Docker access on a shared host can read every secret from every running container.
Visible to all child processes
Environment variables are inherited by every child process the container spawns. If your application forks, shells out, or starts subprocesses, those subprocesses all inherit the environment, including secrets. If a subprocess crashes and dumps its environment to logs, secrets appear in log files.
Can appear in error messages and logs
Frameworks and runtime environments sometimes include environment variable values in error output. A misconfigured ORM printing its full database URL, an HTTP client logging the Authorization header, or a crash reporter dumping the full process environment are all common enough that treating this as a theoretical risk undersells it.
Stored in image layers if set in Dockerfile
If you use ENV in a Dockerfile to set a secret, that value is stored in the image layer and visible in docker history:
docker history my-image
# IMAGE CREATED CREATED BY
# a1b2c3... 2 hours ago ENV DB_PASSWORD=my_secret ← visible
Even if you add a later layer that unsets the variable, the value remains in the earlier layer and can be extracted by anyone who has the image.
Persisted in .env files that get committed
The most common real-world secret leak is a .env file accidentally committed to a Git repository. Once it's in Git history, removing it is difficult and incomplete: the commit still exists, anyone who cloned the repo before the deletion has a copy, and services like GitHub have already indexed it.
The ENV Instruction in Dockerfiles: A Special Risk
The Dockerfile's ENV instruction is appropriate for non-sensitive defaults. For secrets, it's one of the most dangerous patterns in Docker.
# NEVER do this
ENV DB_PASSWORD=my_secret_password
ENV API_KEY=real_key_here
Even if you delete these lines later, the value is permanently stored in the image layer created by that instruction. Every copy of that image, on every machine, in every registry, carries the secret.
The safe pattern for values needed only during the build process is to use ARG (which doesn't persist into the final image) combined with build secrets (Docker BuildKit's --secret flag):
# ARG only exists at build time, not in the final image
ARG NPM_TOKEN
RUN npm config set //registry.npmjs.org/:_authToken ${NPM_TOKEN}
RUN npm install
# After this RUN, the token is gone from the running container
# but it's still in this layer's filesystem diff
The cleaner approach with BuildKit secrets, which never appear in any layer:
# syntax=docker/dockerfile:1
RUN --mount=type=secret,id=npm_token \
NPM_TOKEN=$(cat /run/secrets/npm_token) \
npm install
docker build --secret id=npm_token,src=.npm_token .
BuildKit secrets are mounted as a tmpfs at /run/secrets/ during that specific RUN instruction and are never written to any image layer. They don't appear in docker history, docker inspect, or any image metadata.
Docker Secrets: The Right Way for Swarm
Docker Swarm has a native secrets system that is the gold standard for single-node and multi-node Docker deployments using Swarm mode. Secrets in Swarm are encrypted at rest in the Raft database, transmitted encrypted to nodes, and mounted as files inside containers at /run/secrets/<secret-name>. They are never exposed as environment variables.
# Create a secret from stdin
echo "my_secure_password" | docker secret create db_password -
# Or from a file
docker secret create db_password ./db_password.txt
# List secrets (values are never shown)
docker secret ls
# Use a secret in a Swarm service
docker service create \
--name api \
--secret db_password \
--env DB_PASSWORD_FILE=/run/secrets/db_password \
my-app:1.0
Inside the container, the application reads the secret from the file:
// Node.js reading a Docker secret
const fs = require('fs');
const dbPassword = fs.readFileSync('/run/secrets/db_password', 'utf8').trim();
Many official Docker images support the _FILE convention, where you set an environment variable pointing to a file path rather than the value itself:
# PostgreSQL: reads password from file
-e POSTGRES_PASSWORD_FILE=/run/secrets/db_password
# MySQL: same pattern
-e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
Swarm secrets are only available to services explicitly granted access. Removing a secret from a service revokes access immediately. Secrets can be rotated by creating a new secret version and updating the service to use it, all without downtime.
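Your own applications can support the same _FILE convention with a small helper. Here is a sketch in Python (the read_secret name is our own; adapt it to your codebase):

```python
import os

def read_secret(name, default=None):
    """Resolve a value following the _FILE convention: if NAME_FILE is
    set, read the secret from that file (e.g. /run/secrets/<name>);
    otherwise fall back to the plain NAME environment variable, which
    is convenient in development."""
    file_path = os.environ.get(f"{name}_FILE")
    if file_path:
        with open(file_path) as f:
            return f.read().strip()
    return os.environ.get(name, default)

# In a Swarm service with DB_PASSWORD_FILE=/run/secrets/db_password:
# db_password = read_secret("DB_PASSWORD")
```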
The limitation is that Swarm secrets require Swarm mode. You cannot use docker secret commands with standalone containers started by docker run.
Docker Compose Secrets: File-Based Secrets for Single Hosts
For single-host development and production using Docker Compose without Swarm, Compose has its own secrets mechanism that works with local files:
services:
db:
image: postgres:16-alpine
secrets:
- db_password
environment:
POSTGRES_PASSWORD_FILE: /run/secrets/db_password
api:
build: ./api
secrets:
- db_password
- api_key
# Application reads from /run/secrets/db_password
# Not from an environment variable
secrets:
db_password:
file: ./secrets/db_password.txt
api_key:
file: ./secrets/api_key.txt
The secret files are small plaintext files containing only the secret value, living outside version control:
mkdir -p secrets
echo "my_secure_password" > secrets/db_password.txt
echo "real_api_key_here" > secrets/api_key.txt
# Add to .gitignore
echo "secrets/" >> .gitignore
Compose mounts these files at /run/secrets/<secret-name> inside each container that declares them. The application reads the file rather than an environment variable. This keeps the secret value out of docker inspect output, out of the process environment, and out of logs.
Compose secrets without Swarm are not encrypted at rest the way Swarm secrets are. The files on the host are still plaintext. The improvement over environment variables is that they don't appear in docker inspect, aren't inherited by child processes through the environment, and aren't visible in process listings. For production with serious security requirements, use a dedicated secrets manager.
Production Secret Management: Cloud and Third-Party Options
For production workloads, dedicated secret management systems provide encryption at rest, access auditing, secret rotation, and fine-grained access control that file-based approaches can't match.
AWS Secrets Manager
Integrates with IAM roles, so ECS tasks or EC2 instances running your containers can retrieve secrets without storing credentials anywhere. The application fetches the secret at startup using the AWS SDK:
import boto3
import json
client = boto3.client('secretsmanager', region_name='us-east-1')
secret = client.get_secret_value(SecretId='myapp/production/db')
credentials = json.loads(secret['SecretString'])
db_password = credentials['password']
AWS Secrets Manager supports automatic secret rotation for RDS databases, where it rotates the password and updates both the secret store and the database simultaneously.
HashiCorp Vault
The most flexible option, usable in any environment (cloud, on-premises, hybrid). Vault provides dynamic secrets (credentials generated on demand and expired automatically), encryption as a service, and detailed audit logs of every secret access. A common Docker integration pattern uses the Vault Agent as a sidecar container that authenticates to Vault, retrieves secrets, writes them to a shared tmpfs volume, and keeps them updated as they rotate:
services:
vault-agent:
image: hashicorp/vault:latest
volumes:
- ./vault-agent-config:/vault/agent
- secrets-volume:/run/secrets # shared tmpfs
environment:
VAULT_ADDR: https://vault.example.com
api:
image: my-app:1.0
volumes:
- secrets-volume:/run/secrets # reads secrets written by vault-agent
depends_on:
- vault-agent
volumes:
secrets-volume:
driver: local
driver_opts:
type: tmpfs # in-memory, never written to disk
device: tmpfs
Azure Key Vault and Google Secret Manager
Both follow a similar pattern to AWS Secrets Manager: the container authenticates using a managed identity or service account and retrieves secrets at startup. Azure Key Vault is the natural choice for Azure-hosted workloads. Google Secret Manager integrates with GKE and Cloud Run workloads via Workload Identity.
Mozilla SOPS
For teams that want secrets in version control but encrypted, SOPS (Secrets OPerationS) encrypts secret files using AWS KMS, GCP KMS, Azure Key Vault, or age/PGP keys. The encrypted file is safe to commit. At deploy time, the CI/CD pipeline or the application decrypts it using the appropriate key. This works well for GitOps workflows where everything lives in the repository.
Practical Strategies by Environment
Development
# .env file (in .gitignore, never committed)
DB_PASSWORD=dev_password_123
API_KEY=dev_key_for_testing
Use a .env file with development-specific (ideally fake or low-privilege) credentials. Use real credentials only if the development environment actually needs to connect to real services. Mock credentials and local services (a local PostgreSQL container, a local Redis container) are preferable for most development work.
Staging / CI
Use the secret management system of your CI platform: GitHub Actions secrets, GitLab CI variables, CircleCI environment variables. These inject secrets as environment variables into the CI environment. For staging deployments, use the same cloud secret manager as production but with staging-specific secret values.
Production (single host, Docker Compose)
Use Docker Compose secrets with file-based secrets stored outside version control, with file permissions restricted to the Docker daemon. For higher security requirements, integrate with a cloud secret manager or HashiCorp Vault.
Production (Docker Swarm)
Use native Docker Secrets. Encrypted at rest, transmitted encrypted, mounted as files, access-controlled per service.
Production (Kubernetes)
Use Kubernetes Secrets combined with an external secrets operator (External Secrets Operator) that syncs from AWS Secrets Manager, Vault, or another dedicated system. Never store real production secrets in Kubernetes Secret YAML files committed to a repository.

A Secure Complete Example
Here's a three-service stack using the layered approach: non-sensitive config in environment variables, sensitive config via Compose secrets.
Directory structure:
myapp/
├── compose.yaml
├── .env ← gitignored, non-sensitive local overrides
├── .env.example ← committed, template for teammates
├── .gitignore
└── secrets/
├── db_password.txt ← gitignored
└── api_key.txt ← gitignored
.gitignore:
.env
secrets/
.env.example (committed to Git):
NODE_ENV=development
PORT=3000
LOG_LEVEL=info
.env (local only, not committed):
NODE_ENV=production
PORT=3000
LOG_LEVEL=warn
compose.yaml:
services:
db:
image: postgres:16-alpine
restart: unless-stopped
secrets:
- db_password
environment:
POSTGRES_USER: appuser
POSTGRES_DB: myapp
POSTGRES_PASSWORD_FILE: /run/secrets/db_password
volumes:
- postgres-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "appuser"]
interval: 10s
timeout: 5s
retries: 5
api:
build: ./api
restart: unless-stopped
secrets:
- db_password
- api_key
env_file:
- .env
environment:
# Non-sensitive: fine as environment variables
DB_HOST: db
DB_USER: appuser
DB_NAME: myapp
# Sensitive: application reads from /run/secrets/
DB_PASSWORD_FILE: /run/secrets/db_password
API_KEY_FILE: /run/secrets/api_key
ports:
- "3000:3000"
depends_on:
db:
condition: service_healthy
volumes:
postgres-data:
secrets:
db_password:
file: ./secrets/db_password.txt
api_key:
file: ./secrets/api_key.txt
The compose file itself is safe to commit. It contains no secret values. The secrets/ directory and .env file with any real values stay out of version control. The .env.example in version control shows teammates what variables are needed without exposing any real values.
Secret Hygiene: What to Do When a Secret Leaks
When a secret leaks into Git, into logs, or anywhere it shouldn't be, the response is always the same regardless of the mechanism:
1. Rotate immediately. The compromised secret is now untrusted and must be replaced. Generate a new credential, update all services using it, verify they work with the new credential.
2. Revoke the old secret. Don't just stop using it. Revoke or delete it so it cannot be used by anyone who obtained it.
3. Audit access logs. Most secret managers and cloud platforms log every use of a secret. Check whether the compromised secret was actually used by anyone other than your application, and when.
4. Remove from Git history properly. git rm only removes a file from the current commit. The secret remains in every previous commit. Proper removal requires tools like git filter-repo or BFG Repo Cleaner to rewrite history. After rewriting, notify all collaborators to re-clone, since their local copies still have the old history.
5. Rotate adjacent secrets. If one secret was exposed, assume others in the same file or system may be compromised too. Rotate conservatively.
6. Review how it happened and fix the process. A leaked secret is a process failure, not just a technical one. Whether it was a missing .gitignore entry, a log line that printed the environment, or a misconfigured CI job, fix the root cause.
Secret scanning tools like GitGuardian can automatically scan your repositories for accidentally committed credentials and alert you before they reach the main branch.
Key Takeaways
- Externalize all configuration from your application code. The same image should run in development, staging, and production, with only runtime configuration differing.
- Environment variables have three injection methods: -e on the command line, --env-file pointing to a file, and ENV in the Dockerfile. Use the first two for runtime config. Use the Dockerfile ENV only for non-sensitive defaults.
- Never put sensitive values in environment variables. They are visible in docker inspect output, inherited by all child processes, can appear in logs and crash reports, and are stored permanently in image layers if set via ENV in the Dockerfile.
- The .env file must always be in .gitignore. Always commit a .env.example with placeholder values as a template for teammates.
- Docker Swarm secrets are the most secure built-in option: encrypted at rest, transmitted encrypted to nodes, mounted as files at /run/secrets/<name>, never exposed as environment variables, access-controlled per service.
- Docker Compose secrets (without Swarm) mount local files at /run/secrets/<name> inside containers. They don't appear in docker inspect or the process environment, but the source files on the host are plaintext. Better than environment variables for secrets, not as strong as a dedicated secret manager.
- For production workloads, use a dedicated secrets manager: AWS Secrets Manager, Azure Key Vault, Google Secret Manager, or HashiCorp Vault. These provide encryption at rest, audit logging, access control, and secret rotation.
- BuildKit's --secret flag lets you pass secrets to RUN instructions during docker build without the secret appearing in any image layer. Use this for build-time secrets like private package registry tokens.
- Applications should read secrets from files (/run/secrets/<name>) rather than environment variables. Many official images support the _FILE env var convention (e.g., POSTGRES_PASSWORD_FILE) for exactly this purpose.
- When a secret leaks: rotate immediately, revoke the old credential, audit access logs, remove from Git history with git filter-repo or BFG, rotate adjacent secrets, and fix the process that allowed the leak.
Conclusion
Configuration management in Docker comes down to one core distinction: non-sensitive configuration and sensitive configuration require different mechanisms, and treating them the same creates unnecessary security risk.
Environment variables work fine for NODE_ENV, PORT, LOG_LEVEL, and service hostnames. They're the wrong tool for passwords, API keys, certificates, and tokens. The right tools for those are Docker Secrets for Swarm deployments, Compose file-based secrets for single-host setups, and dedicated secret managers like Vault or cloud-native equivalents for anything that needs encryption at rest, rotation, and audit logging.
The patterns in this guide, particularly the .env/.env.example split, Compose secrets with the _FILE convention, and the per-environment strategies, give you a clear framework that scales from a local laptop to production without changing the application code.
🔗Let’s Stay Connected
📱 Join Our WhatsApp Community
Get early access to AI/ML, Cloud & DevOps resources, behind-the-scenes updates, and connect with like-minded learners.
➡️ Join the WhatsApp Group
✅ Follow Me for Daily Tech Insights
➡️ LinkedIn
➡️ YouTube
➡️ X (Twitter)
➡️ Website

