HashiCorp Vault Hardening Guide
Secrets management security including auth methods, policies, and audit logging
Overview
HashiCorp Vault is the industry-standard secrets management solution, used across enterprises for database credentials, API keys, PKI certificates, and dynamic secrets. Its central role makes it a high-value target: the Codecov breach (2021) exposed HashiCorp’s GPG signing key through a supply-chain attack, forcing rotation of all signing keys and re-validation of all published software releases. CI/CD integrations with CircleCI, GitLab, and Jenkins add numerous OAuth and token-based access points that each need hardening.
Intended Audience
- Security engineers managing secrets infrastructure
- DevOps engineers configuring Vault integrations
- GRC professionals assessing secrets management compliance
- Platform teams implementing zero-trust architectures
How to Use This Guide
- L1 (Baseline): Essential controls for all organizations
- L2 (Hardened): Enhanced controls for security-sensitive environments
- L3 (Maximum Security): Strictest controls for regulated industries
Scope
This guide covers Vault-specific security configurations including authentication methods, secrets engine hardening, audit logging, and CI/CD integration security.
Table of Contents
- Authentication & Access Controls
- Secrets Engine Security
- Network & API Security
- Audit Logging
- CI/CD Integration Security
- Operational Security
- Compliance Quick Reference
1. Authentication & Access Controls
1.1 Implement Least-Privilege Auth Methods
Profile Level: L1 (Baseline) CIS Controls: 6.3, 6.8 NIST 800-53: AC-6, IA-2
Description
Configure Vault authentication methods appropriate to each use case. Avoid using root tokens for regular operations; implement workload identity where possible.
Rationale
Why This Matters:
- Root tokens provide unlimited access
- Long-lived tokens create persistent risk
- Workload identity eliminates stored secrets
Attack Prevented: Token theft, credential stuffing, privilege escalation
Real-World Incidents:
- Codecov Breach (2021): Compromised CI environment extracted secrets, including HashiCorp’s GPG signing key
Prerequisites
- Vault cluster deployed and initialized
- Authentication backends configured
- Policy structure designed
- Identity provider integration (for OIDC)
ClickOps Implementation
Step 1: Disable Root Token After Initial Setup
- Revoke the root token after initial configuration
- Create an admin-emergency policy for break-glass scenarios
- Generate emergency tokens with short TTLs and use limits
Step 2: Configure OIDC for User Authentication
- Enable the OIDC auth method
- Configure OIDC with your identity provider (Okta, Azure AD, etc.)
- Create role mappings with bound audiences and redirect URIs
Step 3: Configure AppRole for Applications
- Enable the AppRole auth method
- Create roles with limited TTLs and SecretID constraints
- Bind roles to specific CIDRs (L2)
Validation & Testing
- Attempt to use root token - should be revoked
- Login via OIDC - should succeed with appropriate policies
- AppRole authentication - verify CIDR binding works
- Check token TTLs are enforced
Expected result: Each auth method provides minimal required access
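The validation steps above can be scripted. A minimal sketch, assuming a token lookup response shaped like Vault's /v1/auth/token/lookup-self API; the policy and TTL thresholds are illustrative, not prescriptive:

```python
# Minimal sketch: evaluate a token lookup-self response against the
# validation expectations above. Payload shape mirrors Vault's
# /v1/auth/token/lookup-self API; the TTL ceiling is illustrative.

def check_token(lookup: dict, max_ttl_seconds: int = 14400) -> list:
    """Return a list of findings for a token lookup-self response."""
    findings = []
    data = lookup.get("data", {})
    if "root" in data.get("policies", []):
        findings.append("CRITICAL: token carries the root policy")
    ttl = data.get("ttl", 0)
    # A ttl of 0 means the token never expires
    if ttl == 0 or ttl > max_ttl_seconds:
        findings.append(f"token TTL {ttl}s exceeds the {max_ttl_seconds}s ceiling")
    return findings

# Example: a short-lived OIDC token passes, a root token does not
ok = {"data": {"policies": ["default"], "ttl": 3600}}
bad = {"data": {"policies": ["root"], "ttl": 0}}
print(check_token(ok))   # []
print(check_token(bad))
```

Feed it the JSON from `vault token lookup -format=json` (or the lookup-self API) as part of a scheduled validation job.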
Monitoring & Maintenance
Maintenance schedule:
- Weekly: Review failed authentication attempts
- Monthly: Audit auth method configurations
- Quarterly: Rotate AppRole SecretIDs
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.1 | Logical access controls |
| NIST 800-53 | IA-2, IA-5 | Authentication and token management |
| ISO 27001 | A.9.2.1 | User registration and de-registration |
Code Pack: Terraform
# --- OIDC Auth Method (preferred for human users) ---
resource "vault_jwt_auth_backend" "oidc" {
description = "OIDC authentication for human users"
path = "oidc"
type = "oidc"
oidc_discovery_url = var.oidc_discovery_url
oidc_client_id = var.oidc_client_id
oidc_client_secret = var.oidc_client_secret
default_role = "default"
tune {
default_lease_ttl = "1h"
max_lease_ttl = "4h"
token_type = "default-service"
}
}
resource "vault_jwt_auth_backend_role" "oidc_default" {
backend = vault_jwt_auth_backend.oidc.path
role_name = "default"
role_type = "oidc"
token_policies = ["default"]
user_claim = "email"
groups_claim = "groups"
allowed_redirect_uris = var.oidc_redirect_uris
token_ttl = 3600
token_max_ttl = 14400
}
# --- AppRole Auth Method (for CI/CD and automation) ---
resource "vault_auth_backend" "approle" {
type = "approle"
path = "approle"
description = "AppRole authentication for CI/CD pipelines and automation"
tune {
default_lease_ttl = "30m"
max_lease_ttl = "1h"
}
}
resource "vault_approle_auth_backend_role" "ci_cd" {
backend = vault_auth_backend.approle.path
role_name = "ci-cd"
token_policies = ["ci-cd-read"]
token_ttl = 1800
token_max_ttl = 3600
secret_id_num_uses = 1
secret_id_ttl = 600
token_num_uses = 10
}
# --- Kubernetes Auth Method (for workloads running in K8s) ---
resource "vault_auth_backend" "kubernetes" {
type = "kubernetes"
path = "kubernetes"
description = "Kubernetes authentication for in-cluster workloads"
tune {
default_lease_ttl = "15m"
max_lease_ttl = "1h"
}
}
resource "vault_kubernetes_auth_backend_config" "k8s" {
backend = vault_auth_backend.kubernetes.path
kubernetes_host = var.kubernetes_host
kubernetes_ca_cert = var.kubernetes_ca_cert
issuer = var.kubernetes_issuer
}
Code Pack: API Script
# Check for root token usage (critical finding)
info "1.1 Checking for root token usage..."
TOKEN_INFO=$(vault_get "/auth/token/lookup-self" 2>/dev/null || echo '{}')
TOKEN_POLICIES=$(echo "${TOKEN_INFO}" | jq -r '.data.policies // [] | join(",")' 2>/dev/null || echo "")
if echo "${TOKEN_POLICIES}" | grep -q "root"; then
fail "1.1 CRITICAL: Current token has root policy -- rotate to a scoped admin token immediately"
warn "1.1 Root tokens should only be used for break-glass emergency procedures"
increment_failed
else
pass "1.1 Current token does not use root policy"
fi
# List all enabled auth methods
info "1.1 Listing enabled auth methods..."
AUTH_METHODS=$(vault_get "/sys/auth" 2>/dev/null || echo '{}')
AUTH_PATHS=$(echo "${AUTH_METHODS}" | jq -r '.data // . | to_entries[] | select(.value.type?) | "\(.key) (\(.value.type))"' 2>/dev/null || true)
if [ -n "${AUTH_PATHS}" ]; then
pass "1.1 Enabled auth methods:"
echo "${AUTH_PATHS}" | while read -r line; do
echo " - ${line}"
done
else
fail "1.1 Could not retrieve auth methods -- check token permissions"
increment_failed
fi
# Verify OIDC is configured (preferred for human users)
info "1.1 Checking for OIDC auth method..."
OIDC_ENABLED=$(echo "${AUTH_METHODS}" | jq -r '.data // . | to_entries[] | select(.value.type == "oidc") | .key' 2>/dev/null || true)
if [ -n "${OIDC_ENABLED}" ]; then
pass "1.1 OIDC auth method enabled at path: ${OIDC_ENABLED}"
# Verify OIDC configuration is complete
# Query the config at the detected mount path (keys from sys/auth carry a trailing slash)
OIDC_CONFIG=$(vault_get "/auth/${OIDC_ENABLED%/}/config" 2>/dev/null || echo '{}')
OIDC_DISCOVERY=$(echo "${OIDC_CONFIG}" | jq -r '.data.oidc_discovery_url // empty' 2>/dev/null || true)
if [ -n "${OIDC_DISCOVERY}" ]; then
pass "1.1 OIDC discovery URL configured: ${OIDC_DISCOVERY}"
else
warn "1.1 OIDC auth method enabled but discovery URL not configured"
fi
else
warn "1.1 OIDC auth method not enabled -- recommended for human user authentication"
warn "1.1 Enable with: vault auth enable oidc"
fi
# Check for deprecated or insecure auth methods
info "1.1 Checking for deprecated auth methods..."
USERPASS_ENABLED=$(echo "${AUTH_METHODS}" | jq -r '.data // . | to_entries[] | select(.value.type == "userpass") | .key' 2>/dev/null || true)
if [ -n "${USERPASS_ENABLED}" ]; then
warn "1.1 Userpass auth method detected at: ${USERPASS_ENABLED}"
warn "1.1 Consider migrating to OIDC for stronger authentication guarantees"
fi
Code Pack: CLI Script
# After initial configuration, revoke root token
vault token revoke <root-token>
# Create admin policy for emergency use
vault policy write admin-emergency - <<EOF
path "*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
EOF
# Create emergency token with TTL
vault token create -policy=admin-emergency -ttl=1h -use-limit=5
# Enable OIDC auth method
vault auth enable oidc
# Configure OIDC with your IdP
vault write auth/oidc/config \
oidc_discovery_url="https://your-idp.okta.com" \
oidc_client_id="$CLIENT_ID" \
oidc_client_secret="$CLIENT_SECRET" \
default_role="default"
# Create role mapping
vault write auth/oidc/role/default \
bound_audiences="$CLIENT_ID" \
allowed_redirect_uris="https://vault.company.com/ui/vault/auth/oidc/oidc/callback" \
allowed_redirect_uris="http://localhost:8250/oidc/callback" \
user_claim="email" \
groups_claim="groups" \
policies="default"
# Enable AppRole
vault auth enable approle
# Create role with limited TTL
vault write auth/approle/role/jenkins \
token_policies="jenkins-secrets" \
token_ttl=1h \
token_max_ttl=4h \
secret_id_ttl=24h \
secret_id_num_uses=10
# Bind to specific CIDR (L2)
vault write auth/approle/role/jenkins \
token_bound_cidrs="10.0.0.0/8" \
secret_id_bound_cidrs="10.0.0.0/8"
# Monitor auth method usage
vault read sys/auth
# Check token counts by auth method
vault read sys/internal/counters/tokens
Code Pack: Sigma Detection Rule
detection:
selection_root_token:
auth.policies|contains: 'root'
selection_auth_changes:
request.path|startswith: 'sys/auth'
request.operation:
- 'update'
- 'delete'
condition: selection_root_token or selection_auth_changes
fields:
- auth.display_name
- auth.policies
- request.path
- request.operation
- request.remote_address
- time
1.2 Implement Granular Policies
Profile Level: L1 (Baseline) NIST 800-53: AC-3, AC-6
Description
Create fine-grained policies limiting access to specific paths. Avoid wildcard policies that grant excessive access.
ClickOps Implementation
Step 1: Create Hierarchical Policy Structure
- Create a base read-only policy for all authenticated users
- Create team-specific policies scoped to team secret paths
- Create application policies with the most restrictive access
Code Pack: Terraform
# --- Base read-only policy for all authenticated users ---
resource "vault_policy" "base_read" {
name = "base-read"
policy = <<-EOT
# Allow lookup of own token capabilities
path "sys/capabilities-self" {
capabilities = ["update"]
}
# Allow reading own identity
path "auth/token/lookup-self" {
capabilities = ["read"]
}
# Allow renewing own token
path "auth/token/renew-self" {
capabilities = ["update"]
}
# Deny access to root-level system paths
path "sys/raw/*" {
capabilities = ["deny"]
}
path "sys/seal" {
capabilities = ["deny"]
}
EOT
}
# --- Team-scoped policy (per-team secret paths) ---
resource "vault_policy" "team_secrets" {
for_each = var.team_names
name = "team-${each.value}-secrets"
policy = <<-EOT
# Read and write secrets under team namespace
path "secret/data/teams/${each.value}/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "secret/metadata/teams/${each.value}/*" {
capabilities = ["list", "read", "delete"]
}
# Deny cross-team access
path "secret/data/teams/*" {
capabilities = ["deny"]
}
EOT
}
# --- CI/CD read-only policy (AppRole consumers) ---
resource "vault_policy" "ci_cd_read" {
name = "ci-cd-read"
policy = <<-EOT
# Read-only access to application secrets
path "secret/data/apps/+/config" {
capabilities = ["read"]
}
# Read dynamic database credentials
path "database/creds/*" {
capabilities = ["read"]
}
# Read PKI certificates
path "pki/issue/*" {
capabilities = ["read", "update"]
}
# Deny all write operations on secret engines
path "secret/data/*" {
capabilities = ["deny"]
}
EOT
}
# --- Admin policy (Vault operators) ---
resource "vault_policy" "admin" {
name = "vault-admin"
policy = <<-EOT
# Manage auth methods
path "sys/auth/*" {
capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
# Manage policies
path "sys/policies/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Manage secret engines
path "sys/mounts/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Read audit configuration
path "sys/audit*" {
capabilities = ["read", "list", "sudo"]
}
# Manage identity entities and groups
path "identity/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
# Deny seal/unseal (requires root or auto-unseal)
path "sys/seal" {
capabilities = ["deny"]
}
path "sys/raw/*" {
capabilities = ["deny"]
}
EOT
}
Code Pack: CLI Script
# Base policy - all authenticated users
vault policy write base - <<EOF
path "secret/data/shared/*" {
capabilities = ["read", "list"]
}
path "auth/token/lookup-self" {
capabilities = ["read"]
}
path "auth/token/renew-self" {
capabilities = ["update"]
}
EOF
# Team-specific policy
vault policy write team-platform - <<EOF
path "secret/data/platform/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "aws/creds/platform-deploy" {
capabilities = ["read"]
}
EOF
# Application policy (most restrictive)
vault policy write app-frontend - <<EOF
path "secret/data/frontend/config" {
capabilities = ["read"]
}
path "database/creds/frontend-readonly" {
capabilities = ["read"]
}
EOF
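Wildcard grants like the break-glass policy in 1.1 should never appear in day-to-day policies. A minimal lint sketch that flags overly broad path stanzas in policy HCL; the set of "too broad" paths is an assumption to tune to your mount layout:

```python
import re

# Minimal sketch: lint Vault policy HCL for overly broad path grants.
# BROAD_PATHS is illustrative -- adjust it to your own mount layout.

BROAD_PATHS = {"*", "secret/*", "secret/data/*", "sys/*"}

def lint_policy(policy_hcl: str) -> list:
    """Return paths that grant non-deny capabilities too broadly."""
    findings = []
    # Capture each: path "<path>" { ...capabilities... }
    for m in re.finditer(r'path\s+"([^"]+)"\s*\{([^}]*)\}', policy_hcl):
        path, body = m.group(1), m.group(2)
        if path in BROAD_PATHS and '"deny"' not in body:
            findings.append(path)
    return findings

wildcard_policy = '''
path "*" {
  capabilities = ["create", "read", "update", "delete", "list", "sudo"]
}
'''
print(lint_policy(wildcard_policy))  # ['*']
```

Run it against `vault policy read <name>` output in CI to catch wildcard policies before they reach production.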
1.3 Enable Entity and Group Management
Profile Level: L2 (Hardened) NIST 800-53: AC-2
Description
Use Vault’s identity system to manage users and groups across auth methods, enabling consistent policy application.
Code Pack: CLI Script
# Create identity group
vault write identity/group \
name="platform-team" \
policies="team-platform" \
member_entity_ids=""
# Create entity for user
vault write identity/entity \
name="john.doe@company.com" \
policies="base"
# Link OIDC alias to entity
vault write identity/entity-alias \
name="john.doe@company.com" \
canonical_id="<entity-id>" \
mount_accessor="<oidc-accessor>"
2. Secrets Engine Security
2.1 Use Dynamic Secrets Where Possible
Profile Level: L1 (Baseline) NIST 800-53: IA-5(7)
Description
Configure dynamic secrets engines that generate credentials on-demand with automatic expiration, eliminating static credential risk.
Rationale
Why This Matters:
- Static credentials never expire without rotation
- Dynamic credentials auto-revoke after TTL
- Limits blast radius of credential theft
Code Pack: Terraform
# --- Database secrets engine mount ---
resource "vault_mount" "database" {
path = "database"
type = "database"
description = "Dynamic database credential generation"
default_lease_ttl_seconds = 1800
max_lease_ttl_seconds = 3600
}
# --- PostgreSQL connection configuration ---
resource "vault_database_secret_backend_connection" "postgres" {
backend = vault_mount.database.path
name = "postgres-app"
allowed_roles = ["app-readonly", "app-readwrite"]
postgresql {
connection_url = var.postgres_connection_url
username = var.postgres_admin_username
password = var.postgres_admin_password
}
verify_connection = true
}
# --- Read-only dynamic role ---
resource "vault_database_secret_backend_role" "app_readonly" {
backend = vault_mount.database.path
name = "app-readonly"
db_name = vault_database_secret_backend_connection.postgres.name
default_ttl = 1800
max_ttl = 3600
creation_statements = [
"CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
"GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
]
revocation_statements = [
"REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";",
"DROP ROLE IF EXISTS \"{{name}}\";",
]
}
# --- Read-write dynamic role ---
resource "vault_database_secret_backend_role" "app_readwrite" {
backend = vault_mount.database.path
name = "app-readwrite"
db_name = vault_database_secret_backend_connection.postgres.name
default_ttl = 900
max_ttl = 1800
creation_statements = [
"CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}';",
"GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO \"{{name}}\";",
]
revocation_statements = [
"REVOKE ALL PRIVILEGES ON ALL TABLES IN SCHEMA public FROM \"{{name}}\";",
"DROP ROLE IF EXISTS \"{{name}}\";",
]
}
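Applications consuming these dynamic credentials must renew (or re-request) before the lease expires. A minimal sketch of the renewal decision, assuming the lease fields Vault returns (lease_id, lease_duration, renewable); the 2/3 renewal ratio is a common convention, not a Vault requirement:

```python
# Minimal sketch: decide when a consumer of dynamic credentials should
# renew its lease. Renewing at ~2/3 of the lease duration leaves
# headroom for retries; field names mirror Vault's lease response.

def should_renew(lease: dict, elapsed_seconds: float, ratio: float = 2 / 3) -> bool:
    """True when a renewable lease has passed the renewal threshold."""
    if not lease.get("renewable", False):
        return False  # non-renewable leases must be re-requested instead
    return elapsed_seconds >= lease.get("lease_duration", 0) * ratio

lease = {"lease_id": "database/creds/app-readonly/abc123",
         "lease_duration": 1800, "renewable": True}
print(should_renew(lease, 600))   # False -- only 1/3 of the lease elapsed
print(should_renew(lease, 1300))  # True -- past the 2/3 mark (1200s)
```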
2.2 Implement Secrets Versioning and Rotation
Profile Level: L1 (Baseline) NIST 800-53: IA-5(1)
Description
Enable KV v2 secrets engine with versioning for audit trail and rollback capability.
Code Pack: CLI Script
# Enable KV v2
vault secrets enable -version=2 -path=secret kv
# Configure version retention
vault write secret/config \
max_versions=10 \
cas_required=true
# Write secret with CAS (check-and-set) for conflict prevention
vault kv put -cas=0 secret/myapp/config \
api_key="secret123" \
db_password="dbpass456"
# Read specific version
vault kv get -version=2 secret/myapp/config
# Delete version (soft delete)
vault kv delete -versions=1 secret/myapp/config
# Destroy version permanently (L3 only)
vault kv destroy -versions=1 secret/myapp/config
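With cas_required=true, every write must pass the secret's current version as -cas (0 means "create only if absent"). A minimal sketch deriving that value from metadata shaped like `vault kv metadata get -format=json` output:

```python
# Minimal sketch: derive the -cas value for the next kv put from a
# secret's metadata. Shape mirrors `vault kv metadata get -format=json`.

def cas_for_next_write(metadata: dict) -> int:
    """Return the -cas value required for the next kv put."""
    return metadata.get("data", {}).get("current_version", 0)

existing = {"data": {"current_version": 3, "max_versions": 10}}
new_path = {}
print(cas_for_next_write(existing))  # 3 -- pass -cas=3 to update
print(cas_for_next_write(new_path))  # 0 -- pass -cas=0 to create
```

If the write fails with a CAS mismatch, another writer updated the secret first; re-read the metadata and retry rather than forcing the write.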
2.3 Enable Transit Engine for Encryption-as-a-Service
Profile Level: L2 (Hardened) NIST 800-53: SC-28
Description
Use Transit secrets engine for application-level encryption without exposing encryption keys.
Code Pack: CLI Script
# Enable transit
vault secrets enable transit
# Create encryption key
vault write -f transit/keys/payment-data \
type=aes256-gcm96 \
exportable=false \
allow_plaintext_backup=false
# Encrypt data (transit expects base64; printf avoids the trailing newline echo adds)
vault write transit/encrypt/payment-data \
plaintext=$(printf '%s' "4111111111111111" | base64)
# Decrypt data
vault write transit/decrypt/payment-data \
ciphertext="vault:v1:..."
# Enable key rotation
vault write -f transit/keys/payment-data/rotate
# Configure minimum decryption version (after key rotation)
vault write transit/keys/payment-data/config \
min_decryption_version=2
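Transit accepts and returns base64-encoded payloads, so applications must encode before encrypt and decode after decrypt. A minimal sketch of that convention; forgetting the encoding (or including a stray newline) is a common integration bug:

```python
import base64

# Minimal sketch: transit's encrypt/decrypt endpoints exchange base64
# payloads. Encode raw bytes on the way in, decode on the way out.

def to_transit_plaintext(data: bytes) -> str:
    """Encode raw bytes for the transit/encrypt 'plaintext' field."""
    return base64.b64encode(data).decode("ascii")

def from_transit_plaintext(encoded: str) -> bytes:
    """Decode the 'plaintext' field returned by transit/decrypt."""
    return base64.b64decode(encoded)

card = b"4111111111111111"
encoded = to_transit_plaintext(card)
assert from_transit_plaintext(encoded) == card
print(encoded)  # NDExMTExMTExMTExMTExMQ==
```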
3. Network & API Security
3.1 Configure TLS and API Security
Profile Level: L1 (Baseline) NIST 800-53: SC-8
Description
Secure Vault API with TLS, client certificates, and rate limiting.
Code Pack: Server Configuration (HCL)
# Listener configuration
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/vault/certs/vault.crt"
tls_key_file = "/vault/certs/vault.key"
tls_min_version = "tls12"
tls_cipher_suites = "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
# Client certificate verification (L2)
tls_require_and_verify_client_cert = true
tls_client_ca_file = "/vault/certs/client-ca.crt"
}
# API address
api_addr = "https://vault.company.com:8200"
cluster_addr = "https://vault-node:8201"
# TLS must stay enabled on the listener -- never set tls_disable = true
3.2 Implement Request Rate Limiting
Profile Level: L2 (Hardened) NIST 800-53: SC-5
Description
Configure rate limiting to prevent abuse and detect anomalous access patterns.
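Vault exposes rate-limit quotas under sys/quotas/rate-limit/<name>; a quota without a path applies Vault-wide, while one scoped to a login path slows brute-force attempts. A minimal sketch building the request bodies; the limits shown are illustrative, not recommendations:

```python
import json

# Minimal sketch: build request bodies for Vault's rate-limit quotas
# (POST /v1/sys/quotas/rate-limit/<name>). The 500 and 10 req/s limits
# here are illustrative values, not tuning guidance.

def rate_limit_quota(rate: float, path: str = "") -> str:
    """JSON body for a rate-limit quota; empty path means Vault-wide."""
    body = {"rate": rate}  # requests per second
    if path:
        body["path"] = path  # scope the quota to one mount or endpoint
    return json.dumps(body)

print(rate_limit_quota(500))                       # global ceiling
print(rate_limit_quota(10, "auth/approle/login"))  # slow login brute force
```

The equivalent CLI form is: vault write sys/quotas/rate-limit/global rate=500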
4. Audit Logging
4.1 Enable Comprehensive Audit Logging
Profile Level: L1 (Baseline) NIST 800-53: AU-2, AU-3
Description
Enable audit logging to file and SIEM for all Vault operations.
ClickOps Implementation
- Enable file audit device for local persistent logging
- Enable syslog audit device for centralized log forwarding
- Enable socket audit device for real-time SIEM streaming
- Verify all audit devices are active
Code Pack: Terraform
# --- File audit device (local persistent log) ---
resource "vault_audit" "file" {
type = "file"
path = "file"
description = "File-based audit log for local retention"
options = {
file_path = var.audit_file_path
log_raw = "false"
hmac_accessor = "true"
mode = "0600"
}
}
# --- Syslog audit device (centralized log forwarding) ---
resource "vault_audit" "syslog" {
type = "syslog"
path = "syslog"
description = "Syslog audit device for SIEM integration"
options = {
tag = "vault-audit"
facility = "AUTH"
log_raw = "false"
}
}
# --- Socket audit device (real-time log streaming, L2+) ---
resource "vault_audit" "socket" {
count = var.profile_level >= 2 ? 1 : 0
type = "socket"
path = "socket"
description = "Socket audit device for real-time log streaming"
options = {
address = var.audit_socket_address
socket_type = "tcp"
log_raw = "false"
}
}
Code Pack: API Script
# Enable file audit device
if [ -n "${FILE_ENABLED}" ]; then
pass "4.1 File audit device already enabled"
else
info "4.1 Enabling file audit device..."
vault_put "/sys/audit/file" '{
"type": "file",
"description": "File-based audit log for local retention",
"options": {
"file_path": "/var/log/vault/audit.log",
"log_raw": false,
"hmac_accessor": true,
"mode": "0600"
}
}' > /dev/null 2>&1 && pass "4.1 File audit device enabled" \
|| { fail "4.1 Failed to enable file audit device"; increment_failed; }
fi
# Enable syslog audit device
if [ -n "${SYSLOG_ENABLED}" ]; then
pass "4.1 Syslog audit device already enabled"
else
info "4.1 Enabling syslog audit device..."
vault_put "/sys/audit/syslog" '{
"type": "syslog",
"description": "Syslog audit device for SIEM integration",
"options": {
"tag": "vault-audit",
"facility": "AUTH",
"log_raw": false
}
}' > /dev/null 2>&1 && pass "4.1 Syslog audit device enabled" \
|| { fail "4.1 Failed to enable syslog audit device"; increment_failed; }
fi
Code Pack: CLI Script
# Enable file audit device
vault audit enable file file_path=/vault/audit/vault-audit.log
# Enable syslog audit device
vault audit enable syslog tag="vault" facility="AUTH"
# Enable socket audit device (for SIEM)
vault audit enable socket \
address="siem.company.com:514" \
socket_type="tcp"
# Verify audit devices
vault audit list -detailed
Code Pack: Sigma Detection Rule
detection:
selection:
request.path|startswith: 'sys/audit'
request.operation:
- 'delete'
condition: selection
fields:
- auth.display_name
- auth.policies
- request.path
- request.operation
- request.remote_address
- time
4.2 Configure Audit Log Alerting
Profile Level: L1 (Baseline)
Detection Use Cases
Code Pack: SDK Script
import json
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def _in_window(log, window_start):
    """True when the entry is inside the window; entries without a parseable time pass through."""
    ts = log.get('time', '')
    if not ts:
        return True
    try:
        # Vault emits RFC 3339; drop sub-second digits and the trailing Z
        stamp = datetime.fromisoformat(ts.split('.')[0].rstrip('Z') + '+00:00')
    except ValueError:
        return True
    return stamp >= window_start

def detect_mass_secret_access(logs, threshold=100, window_minutes=5):
    """Detect unusual volume of secret reads within the time window"""
    access_counts = defaultdict(int)
    window_start = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    for log in logs:
        if not _in_window(log, window_start):
            continue
        if log.get('request', {}).get('path', '').startswith('secret/'):
            if log['request'].get('operation') == 'read':
                accessor = log.get('auth', {}).get('accessor', 'unknown')
                access_counts[accessor] += 1
    return [f"High secret access: {accessor} read {count} secrets"
            for accessor, count in access_counts.items() if count > threshold]

def detect_auth_failures(logs, threshold=10, window_minutes=5):
    """Detect brute force attempts within the time window"""
    failures = defaultdict(int)
    window_start = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    for log in logs:
        if not _in_window(log, window_start):
            continue
        if log.get('type') == 'response':
            if not log.get('response', {}).get('succeeded', True):
                remote_addr = log.get('request', {}).get('remote_address', 'unknown')
                failures[remote_addr] += 1
    return [f"Auth failures from {ip}: {count}"
            for ip, count in failures.items() if count > threshold]
5. CI/CD Integration Security
5.1 Secure Jenkins Integration
Profile Level: L1 (Baseline)
Description
Configure secure Vault integration for Jenkins with minimal privileges and short-lived tokens.
Rationale
Why This Matters:
- CI/CD systems are prime targets for supply chain attacks
- CircleCI breach (2023) exposed customer secrets
- Jenkins compromise = access to all pipelines’ secrets
ClickOps Implementation
Jenkins Configuration (Jenkinsfile):
Configure a Jenkinsfile that uses the withVault step to securely retrieve secrets during pipeline execution. The Vault URL and AppRole credential ID are injected via environment variables, and secrets are mapped to environment variables within the build step scope only.
Code Pack: CLI Script
# Step 1: Create Jenkins-Specific Policy
vault policy write jenkins-secrets - <<EOF
# Read secrets for Jenkins builds
path "secret/data/jenkins/*" {
capabilities = ["read"]
}
# Generate AWS credentials for deployments
path "aws/creds/jenkins-deploy" {
capabilities = ["read"]
}
# No access to production secrets
path "secret/data/production/*" {
capabilities = ["deny"]
}
EOF
# Step 2: Configure AppRole with Restrictions
vault write auth/approle/role/jenkins \
token_policies="jenkins-secrets" \
token_ttl=15m \
token_max_ttl=30m \
secret_id_ttl=1h \
secret_id_num_uses=1 \
token_bound_cidrs="10.0.0.0/8"
Code Pack: SDK Script
// Jenkinsfile
pipeline {
agent any
environment {
VAULT_ADDR = 'https://vault.company.com'
}
stages {
stage('Get Secrets') {
steps {
withVault(configuration: [
vaultUrl: "${VAULT_ADDR}",
vaultCredentialId: 'vault-approle'
], vaultSecrets: [
[path: 'secret/data/jenkins/api-keys',
secretValues: [[envVar: 'API_KEY', vaultKey: 'data.api_key']]]
]) {
sh 'echo "Using secret safely"'
}
}
}
}
}
5.2 Implement OIDC for GitHub Actions
Profile Level: L2 (Hardened)
Description
Use GitHub Actions OIDC to authenticate to Vault without storing long-lived tokens.
Configure JWT authentication for GitHub Actions using OIDC federation. This eliminates long-lived tokens by using GitHub’s OIDC provider to authenticate directly to Vault with short-lived JWTs bound to specific repositories and branches.
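The Vault-side role must pin the trust to a specific repository via bound_claims and match the audience the workflow sends. A minimal sketch building that role body (for POST /v1/auth/jwt/role/<name>); the org, repo, and policy names are placeholders:

```python
import json

# Minimal sketch: build the Vault JWT role that a GitHub Actions OIDC
# login targets. bound_claims pins the role to one repository (and
# optionally one ref); org/repo/policy names here are placeholders.

def github_jwt_role(org: str, repo: str, policies: list, ref: str = "") -> str:
    """JSON body for POST /v1/auth/jwt/role/<name>."""
    bound_claims = {"repository": f"{org}/{repo}"}
    if ref:
        bound_claims["ref"] = ref  # e.g. refs/heads/main
    body = {
        "role_type": "jwt",
        "user_claim": "actor",  # GitHub username becomes the entity alias
        "bound_audiences": [f"https://github.com/{org}"],
        "bound_claims": bound_claims,
        "token_policies": policies,
        "token_ttl": "15m",
        "token_max_ttl": "30m",
    }
    return json.dumps(body, indent=2)

print(github_jwt_role("your-org", "your-repo", ["deploy"], ref="refs/heads/main"))
```

Before creating the role, enable the auth method and point it at GitHub's issuer: vault auth enable jwt, then vault write auth/jwt/config oidc_discovery_url="https://token.actions.githubusercontent.com" bound_issuer="https://token.actions.githubusercontent.com". The bound_audiences value must match the jwtGithubAudience set in the workflow.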
Code Pack: GitHub Actions Workflow (YAML)
jobs:
deploy:
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
- uses: hashicorp/vault-action@v2
with:
url: https://vault.company.com
method: jwt
role: deploy
jwtGithubAudience: https://github.com/your-org
secrets: |
secret/data/deploy/credentials api_key | API_KEY ;
6. Operational Security
6.1 Configure Auto-Unseal
Profile Level: L2 (Hardened) NIST 800-53: SC-12
Description
Configure auto-unseal using cloud KMS to eliminate manual unseal key management.
Code Pack: Server Configuration (HCL)
# AWS KMS auto-unseal
seal "awskms" {
region = "us-east-1"
kms_key_id = "alias/vault-unseal-key"
}
# Azure Key Vault auto-unseal
seal "azurekeyvault" {
tenant_id = "your-tenant-id"
client_id = "your-client-id"
client_secret = "your-client-secret"
vault_name = "vault-unseal"
key_name = "vault-key"
}
# GCP Cloud KMS auto-unseal
seal "gcpckms" {
project = "your-project"
region = "us-east1"
key_ring = "vault"
crypto_key = "unseal"
}
6.2 Implement Disaster Recovery
Profile Level: L2 (Hardened) NIST 800-53: CP-9, CP-10
Description
Configure Vault disaster recovery and backup procedures.
Use Raft snapshots for backup and restore operations. Create snapshots regularly, verify their integrity, and test restoration procedures. For Enterprise deployments, configure DR replication for automated failover.
Code Pack: CLI Script
# Create Raft snapshot
vault operator raft snapshot save backup.snap
# Verify snapshot
vault operator raft snapshot inspect backup.snap
# Restore from snapshot (DR scenario)
vault operator raft snapshot restore backup.snap
# For Enterprise: Configure DR replication
vault write -f sys/replication/dr/primary/enable
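A backup job that silently stops running is the classic DR failure mode, so pair the snapshot commands above with a freshness check. A minimal sketch; the 24-hour threshold is illustrative:

```python
import time

# Minimal sketch: flag a Raft snapshot that is older than the allowed
# age, given the file's modification time. The 24h ceiling is an
# illustrative policy choice.

def snapshot_is_stale(mtime_epoch: float, max_age_hours: float = 24,
                      now: float = None) -> bool:
    """True when the newest snapshot is older than the allowed age."""
    now = time.time() if now is None else now
    return (now - mtime_epoch) > max_age_hours * 3600

# Example with a fixed clock: a 30h-old snapshot fails a 24h requirement
now = 1_700_000_000
print(snapshot_is_stale(now - 30 * 3600, 24, now=now))  # True
print(snapshot_is_stale(now - 2 * 3600, 24, now=now))   # False
```

In practice, feed it os.path.getmtime() of the newest backup.snap and alert when it returns True.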
7. Compliance Quick Reference
SOC 2 Mapping
| Control ID | Vault Control | Guide Section |
|---|---|---|
| CC6.1 | Auth methods and policies | 1.1 |
| CC6.2 | Granular policies | 1.2 |
| CC7.2 | Audit logging | 4.1 |
NIST 800-53 Mapping
| Control | Vault Control | Guide Section |
|---|---|---|
| AC-6 | Least privilege policies | 1.2 |
| IA-5 | Token and auth management | 1.1 |
| AU-2 | Audit logging | 4.1 |
| SC-28 | Transit encryption | 2.3 |
Appendix A: Edition Compatibility
| Control | Community | Enterprise | HCP Vault |
|---|---|---|---|
| Auth Methods | ✅ | ✅ | ✅ |
| Audit Logging | ✅ | ✅ | ✅ |
| Dynamic Secrets | ✅ | ✅ | ✅ |
| Namespaces | ❌ | ✅ | ✅ |
| Sentinel Policies | ❌ | ✅ | ✅ |
| DR Replication | ❌ | ✅ | ✅ |
| Performance Replication | ❌ | ✅ | ✅ |
Appendix B: References
Official HashiCorp Documentation:
- HashiCorp Security
- Compliance Overview
- Vault Documentation
- Production Hardening
- Security Model
- Auth Methods
- Audit Devices
- Kubernetes Security Considerations
Compliance Frameworks:
- SOC 2 Type II, ISO 27001, ISO 27017, ISO 27018 (for HCP Vault) – reports available under NDA via Compliance Overview
Security Incidents:
- Codecov Supply Chain Attack (Apr 2021): Compromised CI environment at Codecov was used to exfiltrate environment variables from CI builds. HashiCorp’s GPG signing key was exposed, forcing rotation of all signing keys and validation of all published software releases.
Changelog
| Date | Version | Maturity | Changes | Author |
|---|---|---|---|---|
| 2025-12-14 | 0.1.0 | draft | Initial HashiCorp Vault hardening guide | Claude Code (Opus 4.5) |