v0.1.0-draft AI Drafted

MongoDB Atlas Hardening Guide

Last updated: 2025-02-05

Database-as-a-Service security hardening for MongoDB Atlas network access, authentication, and encryption

Overview

MongoDB Atlas is MongoDB's managed database-as-a-service, providing fully managed deployments across AWS, Azure, and Google Cloud. Because Atlas typically serves as a critical data store, its security configuration directly affects data protection. By default, all network access is blocked and must be explicitly enabled, but misconfigurations (such as allowlisting 0.0.0.0/0) can expose databases to unauthorized access.

Intended Audience

  • Security engineers managing database infrastructure
  • Database administrators configuring Atlas clusters
  • GRC professionals assessing data security
  • DevOps engineers implementing secure deployments

How to Use This Guide

  • L1 (Baseline): Essential controls for all organizations
  • L2 (Hardened): Enhanced controls for security-sensitive environments
  • L3 (Maximum Security): Strictest controls for regulated industries

Scope

This guide covers MongoDB Atlas security configurations including network access, authentication, encryption, and monitoring. Self-managed MongoDB deployments are covered in a separate guide.


Table of Contents

  1. Network Security
  2. Authentication & Access
  3. Encryption
  4. Monitoring & Auditing
  5. Compliance Quick Reference

1. Network Security

1.1 Configure IP Access List

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 13.5
NIST 800-53 SC-7

Description

Configure IP access lists to restrict which IP addresses can connect to your Atlas clusters. By default, all access is blocked.

Rationale

Why This Matters:

  • Default-deny ensures no unauthorized network access
  • IP allowlisting limits exposure to known addresses
  • Prevents database exposure to the internet

ClickOps Implementation

Step 1: Access Network Configuration

  1. Navigate to: MongoDB Atlas → Project → Network Access
  2. Review current IP access list

Step 2: Configure IP Access

  1. Click Add IP Address
  2. Configure allowed IPs:
    • Development: Individual developer IPs (temporary)
    • Production: Application server IPs/CIDR ranges only
    • NEVER: 0.0.0.0/0 (allows any IP)
  3. Add description for each entry
  4. Set expiration for temporary access

Best Practices:

Environment Recommended Configuration
Development Individual IPs with expiration
Staging Application server IPs only
Production Smallest CIDR possible, VPC peering preferred

Time to Complete: ~15 minutes


Code Pack: Terraform
hth-mongodb-atlas-1.01-configure-ip-access-list.tf
# -----------------------------------------------------------------------------
# 1.1 IP Access List Entries
# Each entry restricts database connections to an approved CIDR range.
# NEVER include 0.0.0.0/0 -- this opens databases to the entire internet.
# -----------------------------------------------------------------------------

resource "mongodbatlas_project_ip_access_list" "allowed" {
  for_each = { for idx, entry in var.allowed_cidr_blocks : idx => entry }

  project_id = var.atlas_project_id
  cidr_block = each.value.cidr_block
  comment    = each.value.comment
}
Code Pack: API Script
hth-mongodb-atlas-1.01-configure-ip-access-list.sh
# Retrieve all IP access list entries for the project
ACCESS_LIST=$(atlas_get "/groups/${ATLAS_PROJECT_ID}/accessList") || {
  fail "1.1 Failed to retrieve IP access list"
  increment_failed
  summary
  exit 1
}

TOTAL_ENTRIES=$(echo "${ACCESS_LIST}" | jq '.totalCount // 0')
info "1.1 Found ${TOTAL_ENTRIES} IP access list entries"

# Check for open access (0.0.0.0/0) -- critical finding
OPEN_ENTRIES=$(echo "${ACCESS_LIST}" | jq -r '
  .results[]
  | select(.cidrBlock == "0.0.0.0/0" or .cidrBlock == "::/0")
  | .cidrBlock
' 2>/dev/null || true)

if [ -n "${OPEN_ENTRIES}" ]; then
  fail "1.1 CRITICAL: Open access entry detected -- databases exposed to entire internet"
  echo "${OPEN_ENTRIES}" | while read -r cidr; do
    fail "  - ${cidr}"
  done
  increment_failed
else
  pass "1.1 No open access entries (0.0.0.0/0) found"
  increment_applied
fi

# Report overly broad CIDR blocks (wider than /16, i.e. prefix length < 16)
BROAD_ENTRIES=$(echo "${ACCESS_LIST}" | jq -r '
  .results[]
  | select(.cidrBlock != null)
  | select(
      (.cidrBlock | split("/") | .[1] | tonumber) < 16
    )
  | "\(.cidrBlock) (\(.comment // "no comment"))"
' 2>/dev/null || true)

if [ -n "${BROAD_ENTRIES}" ]; then
  warn "1.1 Overly broad CIDR blocks detected (wider than /16):"
  echo "${BROAD_ENTRIES}" | while read -r entry; do
    warn "  - ${entry}"
  done
fi

# Check for entries with no comment (poor documentation)
UNCOMMENTED=$(echo "${ACCESS_LIST}" | jq '[.results[] | select(.comment == null or .comment == "")] | length' 2>/dev/null || echo "0")
if [ "${UNCOMMENTED}" -gt 0 ]; then
  warn "1.1 ${UNCOMMENTED} access list entries have no comment (add descriptions for audit trail)"
fi

# Report temporary entries that may have expired or are about to
TEMP_ENTRIES=$(echo "${ACCESS_LIST}" | jq -r '
  .results[]
  | select(.deleteAfterDate != null)
  | "\(.cidrBlock) expires \(.deleteAfterDate)"
' 2>/dev/null || true)

if [ -n "${TEMP_ENTRIES}" ]; then
  info "1.1 Temporary access list entries:"
  echo "${TEMP_ENTRIES}" | while read -r entry; do
    info "  - ${entry}"
  done
fi
Code Pack: Sigma Detection Rule
hth-mongodb-atlas-1.01-configure-ip-access-list.yml
detection:
    selection:
        eventTypeName:
            - 'ACCESS_LIST_ENTRY_ADDED'
            - 'PROJECT_IP_ACCESS_LIST_ENTRY_ADDED'
    filter_open_access:
        cidrBlock|contains:
            - '0.0.0.0/0'
            - '::/0'
    condition: selection and filter_open_access
fields:
    - username
    - remoteAddress
    - cidrBlock
    - groupId
    - created

1.2 Configure VPC Peering or Private Endpoints

Profile Level: L2 (Hardened)

Framework Control
CIS Controls 12.1
NIST 800-53 SC-7

Description

Configure private connectivity via VPC peering or private endpoints to eliminate public internet exposure.

Rationale

Why This Matters:

  • Private endpoints eliminate public internet exposure
  • Traffic stays within cloud provider network
  • More secure than IP allowlisting alone

Prerequisites

  • MongoDB Atlas M10 tier or higher
  • AWS VPC, Azure VNet, or GCP VPC configured

ClickOps Implementation

Step 1: Configure VPC Peering

  1. Navigate to: Network Access → Peering
  2. Click Add Peering Connection
  3. Select cloud provider and region
  4. Enter VPC/VNet details:
    • VPC ID
    • CIDR block
    • Account/Project ID
  5. Accept peering from your cloud provider console

Step 2: Configure Private Endpoints (Recommended)

  1. Navigate to: Network Access → Private Endpoint
  2. Click Add Private Endpoint
  3. Select cloud provider and region
  4. Follow provider-specific instructions:
    • AWS: Create VPC endpoint
    • Azure: Create private endpoint
    • GCP: Create private service connect

Step 3: Update IP Access List

  1. Private endpoint connections are trusted automatically (no IP access list entry needed)
  2. Remove public IP entries if no longer needed
  3. Verify connectivity through private endpoint

Time to Complete: ~1 hour
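The verification style used elsewhere in this guide extends to private endpoints. A minimal sketch, using inline sample JSON in place of a live call to the Atlas Admin API endpoint-service route (the path in the comment and the status/id/regionName field names follow the documented response shape, but verify them against your API version):

```shell
# Sketch: flag private endpoint services that are not yet usable.
# The JSON below stands in for the response from
# GET /groups/{PROJECT_ID}/privateEndpoint/AWS/endpointService
ENDPOINTS='[
  {"id": "6655aa", "status": "AVAILABLE",  "regionName": "us-east-1"},
  {"id": "6655ab", "status": "INITIATING", "regionName": "eu-west-1"}
]'

# List any endpoint service whose status is not AVAILABLE
NOT_READY=$(echo "${ENDPOINTS}" | jq -r '
  .[] | select(.status != "AVAILABLE") | "\(.id) (\(.regionName)): \(.status)"')

if [ -n "${NOT_READY}" ]; then
  echo "Private endpoint services not ready:"
  echo "${NOT_READY}"
else
  echo "All private endpoint services are AVAILABLE"
fi
```

Endpoints stuck in a non-AVAILABLE state usually mean the peering/endpoint acceptance step in the cloud provider console was skipped.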


2. Authentication & Access

2.1 Configure Database Users with Least Privilege

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 5.4
NIST 800-53 AC-6

Description

Create database users with role-based access control (RBAC) following the principle of least privilege.

Rationale

Why This Matters:

  • Limits blast radius of compromised credentials
  • Supports compliance requirements
  • Enables audit of access patterns

ClickOps Implementation

Step 1: Access Database Users

  1. Navigate to: Database Access → Database Users
  2. Review existing users

Step 2: Create Least Privilege Users

  1. Click Add New Database User
  2. Configure authentication:
    • SCRAM: Password-based (most common)
    • X.509 Certificate: Certificate-based (recommended)
    • AWS IAM: For AWS workloads
    • LDAP: Deprecated in Atlas 8.0+
  3. Configure privileges:
    • Built-in Role: Select appropriate role
    • Custom Role: Create granular permissions
  4. Restrict to specific database if possible

Recommended Roles:

Use Case Recommended Role
Application read readAnyDatabase or read on specific DB
Application write readWriteAnyDatabase or readWrite on specific DB
Admin operations dbAdmin on specific DB
Full admin atlasAdmin (limit to 1-2 users)

Step 3: Create Separate Service Accounts

  1. Create dedicated users for each application
  2. Avoid shared credentials
  3. Document user purpose

Time to Complete: ~30 minutes


Code Pack: Terraform
hth-mongodb-atlas-2.01-configure-database-users.tf
# -----------------------------------------------------------------------------
# 2.1 Database Users with Scoped Roles
# Each user receives only the minimum permissions needed. Roles are scoped
# to specific databases rather than granted project-wide. Avoid atlasAdmin
# and readWriteAnyDatabase unless absolutely required and documented.
# -----------------------------------------------------------------------------

resource "mongodbatlas_database_user" "users" {
  for_each = { for idx, user in var.database_users : user.username => user }

  project_id         = var.atlas_project_id
  username           = each.value.username
  password           = each.value.password
  auth_database_name = each.value.auth_database

  dynamic "roles" {
    for_each = each.value.roles
    content {
      role_name       = roles.value.role_name
      database_name   = roles.value.database_name
      collection_name = roles.value.collection_name
    }
  }

  dynamic "scopes" {
    for_each = each.value.scopes
    content {
      name = scopes.value.name
      type = scopes.value.type
    }
  }
}
Code Pack: API Script
hth-mongodb-atlas-2.01-configure-database-users.sh
# Retrieve all database users for the project
DB_USERS=$(atlas_get "/groups/${ATLAS_PROJECT_ID}/databaseUsers") || {
  fail "2.1 Failed to retrieve database users"
  increment_failed
  summary
  exit 1
}

TOTAL_USERS=$(echo "${DB_USERS}" | jq '.totalCount // 0')
info "2.1 Found ${TOTAL_USERS} database users"

# Check for users with atlasAdmin role on all databases (overly permissive)
ADMIN_USERS=$(echo "${DB_USERS}" | jq -r '
  .results[]
  | select(
      .roles[]
      | select(.roleName == "atlasAdmin" and (.databaseName == "admin" or .databaseName == ""))
    )
  | .username
' 2>/dev/null | sort -u || true)

if [ -n "${ADMIN_USERS}" ]; then
  fail "2.1 Users with atlasAdmin role detected (overly permissive):"
  echo "${ADMIN_USERS}" | while read -r user; do
    fail "  - ${user}"
  done
  increment_failed
else
  pass "2.1 No users with unrestricted atlasAdmin role"
  increment_applied
fi

# Check for users with readWriteAnyDatabase (broad access)
BROAD_USERS=$(echo "${DB_USERS}" | jq -r '
  .results[]
  | select(
      .roles[]
      | select(.roleName == "readWriteAnyDatabase")
    )
  | .username
' 2>/dev/null | sort -u || true)

if [ -n "${BROAD_USERS}" ]; then
  warn "2.1 Users with readWriteAnyDatabase role (consider scoping to specific databases):"
  echo "${BROAD_USERS}" | while read -r user; do
    warn "  - ${user}"
  done
fi

# Check for users authenticating with SCRAM (password) vs X.509 or LDAP
SCRAM_USERS=$(echo "${DB_USERS}" | jq '[.results[] | select(.databaseName == "admin")] | length' 2>/dev/null || echo "0")
X509_USERS=$(echo "${DB_USERS}" | jq '[.results[] | select(.databaseName == "$external" and .x509Type != "NONE")] | length' 2>/dev/null || echo "0")
LDAP_USERS=$(echo "${DB_USERS}" | jq '[.results[] | select(.ldapAuthType != null and .ldapAuthType != "NONE")] | length' 2>/dev/null || echo "0")

info "2.1 Authentication breakdown:"
info "  - SCRAM (password): ${SCRAM_USERS}"
info "  - X.509 certificate: ${X509_USERS}"
info "  - LDAP: ${LDAP_USERS}"

if [ "${SCRAM_USERS}" -gt 0 ] && should_apply 2 2>/dev/null; then
  warn "2.1 L2 recommends migrating SCRAM users to X.509 or LDAP authentication"
fi

# Check for users with no role scoping (roles on all clusters)
UNSCOPED_USERS=$(echo "${DB_USERS}" | jq -r '
  .results[]
  | select(.scopes == null or (.scopes | length) == 0)
  | .username
' 2>/dev/null || true)

if [ -n "${UNSCOPED_USERS}" ]; then
  warn "2.1 Users with no cluster scope (access to all clusters in project):"
  echo "${UNSCOPED_USERS}" | while read -r user; do
    warn "  - ${user}"
  done
fi

2.2 Enable Multi-Factor Authentication for Atlas Console

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 6.5
NIST 800-53 IA-2(1)

Description

Require MFA for all users accessing the MongoDB Atlas console.

ClickOps Implementation

Step 1: Configure Organization MFA

  1. Navigate to: Organization → Settings → Require Multi-Factor Authentication
  2. Enable MFA requirement for all organization members

Step 2: Configure Personal MFA

  1. Each user: Account → Security → Multi-Factor Authentication
  2. Configure MFA method:
    • Authenticator app (recommended)
    • SMS (not recommended)

2.3 Configure X.509 Certificate Authentication

Profile Level: L2 (Hardened)

Framework Control
CIS Controls 6.5
NIST 800-53 IA-2

Description

Configure X.509 certificate authentication for stronger machine-to-machine authentication.

ClickOps Implementation

Step 1: Enable X.509 Authentication

  1. Navigate to: Database Access → Database Users
  2. Click Add New Database User
  3. Select Certificate authentication
  4. Choose:
    • Atlas-managed: Atlas manages certificates
    • Self-managed: You provide CA and certificates

Step 2: Configure Atlas-Managed X.509

  1. Download client certificate for your application
  2. Configure application connection string with certificate
  3. Rotate certificates before expiration
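Once a certificate user exists, clients authenticate with the MONGODB-X509 mechanism against the $external database; no password is sent, since identity comes from the certificate subject. A sketch of assembling the connection command (CLUSTER_HOST and CERT_PATH are placeholders for your own values):

```shell
# Sketch: build a MONGODB-X509 connection string for mongosh or a driver.
CLUSTER_HOST="cluster0.example.mongodb.net"   # placeholder hostname
CERT_PATH="/etc/ssl/atlas-client.pem"         # placeholder client cert path

# X.509 users live in the $external auth database (URL-encoded as %24external)
URI="mongodb+srv://${CLUSTER_HOST}/?authSource=%24external&authMechanism=MONGODB-X509"

echo "mongosh \"${URI}\" --tls --tlsCertificateKeyFile ${CERT_PATH}"
```

The same URI options apply to driver connection strings; only the certificate option name varies by driver.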

2.4 Configure Organization and Project Roles

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 5.4
NIST 800-53 AC-6(1)

Description

Configure granular roles for Atlas console access at organization and project levels.

ClickOps Implementation

Step 1: Review Organization Roles

  1. Navigate to: Organization → Access Manager
  2. Review user assignments
  3. Available roles:
    • Organization Owner: Full access (limit to 2-3)
    • Organization Member: Basic access
    • Organization Read Only: View only
    • Billing Admin: Billing only

Step 2: Review Project Roles

  1. Navigate to: Project → Access Manager
  2. Assign project-specific roles:
    • Project Owner: Full project access
    • Project Data Access Admin: Database user management
    • Project Cluster Manager: Cluster management
    • Project Read Only: View only
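The Organization Owner cap recommended above can be checked mechanically. A sketch against inline sample JSON standing in for the organization users API response (role names such as ORG_OWNER follow the Atlas API convention; verify against your API version):

```shell
# Sketch: count Organization Owner assignments in an Access Manager export.
# Sample JSON stands in for GET /orgs/{ORG_ID}/users
ORG_USERS='{"results": [
  {"username": "alice@example.com", "roles": [{"roleName": "ORG_OWNER"}]},
  {"username": "bob@example.com",   "roles": [{"roleName": "ORG_MEMBER"}]},
  {"username": "carol@example.com", "roles": [{"roleName": "ORG_OWNER"}]}
]}'

OWNER_COUNT=$(echo "${ORG_USERS}" | jq '
  [.results[] | select(.roles[].roleName == "ORG_OWNER")] | length')
echo "Organization Owners: ${OWNER_COUNT}"

# The guide recommends limiting Organization Owner to 2-3 accounts
if [ "${OWNER_COUNT}" -gt 3 ]; then
  echo "WARN: more than 3 Organization Owners -- review assignments"
fi
```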

3. Encryption

3.1 Verify Default Encryption

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 3.11
NIST 800-53 SC-8, SC-28

Description

Verify that default encryption at rest and in transit is enabled (cannot be disabled in Atlas).

Atlas Default Security

Feature Default Setting Can Disable?
Encryption at Rest (AES-256) ✅ Enabled ❌ No
Encryption in Transit (TLS 1.2+) ✅ Enabled ❌ No
TLS 1.3 Support ✅ Available N/A

Validation

  1. Navigate to: Clusters → Select cluster → Security
  2. Verify encryption indicators show enabled
  3. Test that connections require TLS
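The TLS floor can also be checked through the cluster's advanced configuration (processArgs). A sketch using inline sample JSON in place of a live API call (minimumEnabledTlsProtocol is the documented field name, but verify against your API version):

```shell
# Sketch: confirm the cluster enforces TLS 1.2 or newer.
# Sample stands in for GET /groups/{PROJECT_ID}/clusters/{CLUSTER_NAME}/processArgs
PROCESS_ARGS='{"minimumEnabledTlsProtocol": "TLS1_2", "javascriptEnabled": true}'

MIN_TLS=$(echo "${PROCESS_ARGS}" | jq -r '.minimumEnabledTlsProtocol // "unknown"')
case "${MIN_TLS}" in
  TLS1_2|TLS1_3) echo "PASS: minimum TLS protocol is ${MIN_TLS}" ;;
  *)             echo "FAIL: minimum TLS protocol is ${MIN_TLS} (require TLS1_2+)" ;;
esac
```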

3.2 Configure Customer-Managed Keys (CMK)

Profile Level: L2 (Hardened)

Framework Control
CIS Controls 3.11
NIST 800-53 SC-12

Description

Configure customer-managed encryption keys for additional control over data encryption.

Rationale

Why This Matters:

  • Provides customer control over encryption keys
  • Supports compliance requirements (PCI, HIPAA)
  • Enables key rotation policies

Prerequisites

  • MongoDB Atlas M10 tier or higher
  • Cloud provider KMS (AWS KMS, Azure Key Vault, GCP Cloud KMS)

ClickOps Implementation

Step 1: Configure Cloud Provider KMS

  1. Create KMS key in your cloud provider
  2. Configure key policy for Atlas access
  3. Note key ARN/ID

Step 2: Enable CMK in Atlas

  1. Navigate to: Project → Security → Encryption at Rest
  2. Click Configure Encryption at Rest
  3. Select cloud provider
  4. Enter KMS key details
  5. Configure role/credentials for Atlas access
  6. Enable encryption

Step 3: Verify CMK Configuration

  1. Check cluster shows CMK-encrypted
  2. Test key rotation capability

Time to Complete: ~1 hour


Code Pack: Terraform
hth-mongodb-atlas-3.02-configure-customer-key-management.tf
# -----------------------------------------------------------------------------
# 3.2 Encryption at Rest with Customer-Managed Keys (AWS KMS)
# Enables customer-controlled encryption keys so the organization retains
# full control over the key lifecycle (rotation, revocation, audit).
# Atlas uses envelope encryption: the CMK wraps the data encryption key.
# -----------------------------------------------------------------------------

resource "mongodbatlas_encryption_at_rest" "cmk" {
  project_id = var.atlas_project_id

  aws_kms_config {
    enabled                = true
    customer_master_key_id = var.aws_kms_key_id
    region                 = var.aws_kms_region
    # Prefer role-based access over static credentials. Note: role_id expects
    # the Atlas cloud provider access role ID (created via
    # mongodbatlas_cloud_provider_access_authorization), not a raw AWS IAM ARN.
    access_key_id          = var.aws_access_key_id
    secret_access_key      = var.aws_secret_access_key
    role_id                = var.aws_role_arn != "" ? var.aws_role_arn : null
  }
}
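In the style of this guide's API scripts, CMK status can be verified from the encryption-at-rest endpoint. A sketch against inline sample JSON standing in for GET /groups/{PROJECT_ID}/encryptionAtRest (the awsKms / azureKeyVault / googleCloudKms field names follow the documented response shape; verify against your API version):

```shell
# Sketch: verify that at least one customer-managed key provider is enabled.
EAR='{"awsKms": {"enabled": true, "region": "US_EAST_1"},
      "azureKeyVault": {"enabled": false},
      "googleCloudKms": {"enabled": false}}'

CMK_ENABLED=$(echo "${EAR}" | jq '
  [.awsKms.enabled, .azureKeyVault.enabled, .googleCloudKms.enabled] | any')

if [ "${CMK_ENABLED}" = "true" ]; then
  echo "PASS: customer-managed key encryption is enabled"
else
  echo "FAIL: no customer-managed key provider is enabled"
fi
```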

3.3 Configure Client-Side Field Level Encryption

Profile Level: L3 (Maximum Security)

Framework Control
CIS Controls 3.11
NIST 800-53 SC-28

Description

Configure Client-Side Field Level Encryption (CSFLE) to encrypt sensitive fields before they leave the application.

Rationale

Why This Matters:

  • Encrypts PII and sensitive data at field level
  • Data remains encrypted even in database
  • Only authorized clients can decrypt

Implementation

  1. Configure encryption schema defining fields to encrypt
  2. Generate data encryption keys
  3. Configure application driver with encryption settings
  4. Test encryption/decryption of sensitive fields
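For development and testing of the steps above, a local master key can stand in for a cloud KMS before wiring up AWS KMS, Azure Key Vault, or GCP Cloud KMS. A sketch (local keys are for non-production use only; the 96-byte length is what drivers require for the local KMS provider):

```shell
# Sketch: generate a 96-byte local master key for CSFLE development/testing.
# NEVER use a local key in production -- use a cloud KMS provider instead.
LOCAL_MASTER_KEY=$(openssl rand -base64 96 | tr -d '\n')

# Drivers expect exactly 96 raw bytes for a local KMS master key
DECODED_LEN=$(echo "${LOCAL_MASTER_KEY}" | base64 -d | wc -c | tr -d ' ')
echo "Generated local master key (${DECODED_LEN} bytes decoded)"
```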

4. Monitoring & Auditing

4.1 Enable Database Auditing

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 8.2
NIST 800-53 AU-2

Description

Enable database auditing to log authentication attempts and data access.

ClickOps Implementation

Step 1: Enable Auditing

  1. Navigate to: Project → Database Deployments
  2. Select cluster → Auditing
  3. Enable auditing
  4. Configure audit filter for events of interest

Step 2: Configure Audit Log Export

  1. Navigate to: Project → Integrations
  2. Configure log export to:
    • Atlas Data Federation
    • AWS S3
    • Azure Blob Storage
    • Third-party SIEM

Code Pack: Terraform
hth-mongodb-atlas-4.01-enable-database-auditing.tf
# -----------------------------------------------------------------------------
# 4.1 Database Auditing
# Captures authentication, authorization, and DDL events. The audit filter
# controls which operations are logged. Enable audit_authorization_success
# at L2+ for full authorization visibility (higher log volume).
# -----------------------------------------------------------------------------

resource "mongodbatlas_auditing" "config" {
  project_id                  = var.atlas_project_id
  audit_filter                = var.audit_filter
  audit_authorization_success = var.audit_authorization_success
  enabled                     = true
}
Code Pack: API Script
hth-mongodb-atlas-4.01-enable-database-auditing.sh
# Retrieve current auditing configuration
AUDIT_CONFIG=$(atlas_get "/groups/${ATLAS_PROJECT_ID}/auditing") || {
  fail "4.1 Failed to retrieve auditing configuration (requires M10+ cluster)"
  increment_failed
  summary
  exit 1
}

AUDIT_ENABLED=$(echo "${AUDIT_CONFIG}" | jq -r '.enabled // false')
AUDIT_FILTER=$(echo "${AUDIT_CONFIG}" | jq -r '.auditFilter // "none"')
AUDIT_AUTH_SUCCESS=$(echo "${AUDIT_CONFIG}" | jq -r '.auditAuthorizationSuccess // false')

if [ "${AUDIT_ENABLED}" = "true" ]; then
  pass "4.1 Database auditing is enabled"
  info "4.1 Current audit filter: ${AUDIT_FILTER}"
  info "4.1 Audit authorization success: ${AUDIT_AUTH_SUCCESS}"
  increment_applied
else
  warn "4.1 Database auditing is DISABLED -- enabling now..."

  # Enable auditing with a comprehensive filter
  AUDIT_PAYLOAD='{
    "enabled": true,
    "auditFilter": "{\"$or\":[{\"users\":[]},{\"atype\":{\"$in\":[\"authCheck\",\"authenticate\",\"createCollection\",\"createDatabase\",\"createIndex\",\"dropCollection\",\"dropDatabase\",\"dropIndex\",\"createUser\",\"dropUser\",\"updateUser\",\"grantRolesToUser\",\"revokeRolesFromUser\",\"createRole\",\"dropRole\",\"updateRole\",\"shutdown\"]}}]}",
    "auditAuthorizationSuccess": false
  }'

  RESULT=$(atlas_patch "/groups/${ATLAS_PROJECT_ID}/auditing" "${AUDIT_PAYLOAD}") || {
    fail "4.1 Failed to enable auditing"
    increment_failed
    summary
    exit 1
  }

  NEW_STATUS=$(echo "${RESULT}" | jq -r '.enabled // false')
  if [ "${NEW_STATUS}" = "true" ]; then
    pass "4.1 Database auditing enabled successfully"
    increment_applied
  else
    fail "4.1 Auditing enable request returned but status is still disabled"
    increment_failed
  fi
fi

# L2 check: audit authorization success should be enabled
if should_apply 2 2>/dev/null; then
  if [ "${AUDIT_AUTH_SUCCESS}" != "true" ]; then
    warn "4.1 L2 recommends enabling auditAuthorizationSuccess for full visibility"
  else
    pass "4.1 L2: auditAuthorizationSuccess is enabled"
  fi
fi
Code Pack: Sigma Detection Rule
hth-mongodb-atlas-4.01-enable-database-auditing.yml
detection:
    selection:
        eventTypeName:
            - 'AUDIT_CONFIGURATION_UPDATED'
            - 'PROJECT_AUDIT_CONFIGURATION_UPDATED'
    filter_disabled:
        enabled: false
    condition: selection and filter_disabled
fields:
    - username
    - remoteAddress
    - groupId
    - enabled
    - auditFilter
    - created

4.2 Monitor Atlas Activity Feed

Profile Level: L1 (Baseline)

Framework Control
CIS Controls 8.11
NIST 800-53 AU-6

Description

Monitor Atlas Activity Feed for administrative and security events.

ClickOps Implementation

Step 1: Access Activity Feed

  1. Navigate to: Project → Activity Feed
  2. Review recent events:
    • User authentication
    • Configuration changes
    • Alerts

Step 2: Configure Alerts

  1. Navigate to: Project → Alerts
  2. Create alerts for:
    • Failed authentication attempts
    • Configuration changes
    • Resource threshold violations
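The failed-authentication alert above can be prototyped against an Activity Feed export before configuring it in the console. A sketch using inline sample JSON standing in for GET /groups/{PROJECT_ID}/events; the eventTypeName values shown are illustrative, so check your project's feed for the exact names emitted:

```shell
# Sketch: scan an Activity Feed export for authentication-related events.
EVENTS='{"results": [
  {"eventTypeName": "USER_LOGIN_FAILED", "username": "alice@example.com", "created": "2025-02-01T10:00:00Z"},
  {"eventTypeName": "CLUSTER_CREATED",   "username": "bob@example.com",   "created": "2025-02-01T11:00:00Z"}
]}'

AUTH_FAILURES=$(echo "${EVENTS}" | jq -r '
  .results[]
  | select(.eventTypeName | test("LOGIN_FAILED|AUTH"))
  | "\(.created) \(.username) \(.eventTypeName)"')

if [ -n "${AUTH_FAILURES}" ]; then
  echo "Authentication failures found:"
  echo "${AUTH_FAILURES}"
fi
```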

5. Compliance Quick Reference

SOC 2 Trust Services Criteria Mapping

Control ID Atlas Control Guide Section
CC6.1 MFA for console 2.2
CC6.1 Database users 2.1
CC6.6 Network access 1.1
CC6.7 Encryption 3.1
CC7.2 Auditing 4.1

NIST 800-53 Rev 5 Mapping

Control Atlas Control Guide Section
SC-7 Network security 1.1, 1.2
AC-6 Least privilege 2.1
IA-2(1) MFA 2.2
SC-28 Encryption at rest 3.1
AU-2 Auditing 4.1

Appendix A: Tier Compatibility

Feature M0 (Free) M2/M5 M10+ Dedicated
IP Access List ✅ ✅ ✅
VPC Peering ❌ ❌ ✅
Private Endpoints ❌ ❌ ✅
CMK Encryption ❌ ❌ ✅
Database Auditing ❌ ❌ ✅
LDAP/X.509 ❌ ❌ ✅

Appendix B: References

Security Incidents:

  • Corporate Systems Breach (December 2023): MongoDB detected unauthorized access to corporate systems on December 13, 2023 via a phishing attack. Customer names, phone numbers, email addresses, and account metadata were exposed. One customer’s system logs were accessed. MongoDB Atlas cluster data was NOT affected — the attackers never accessed Atlas clusters or the Atlas authentication system. — MongoDB Security Incident Update

Changelog

Date Version Maturity Changes Author
2025-02-05 0.1.0 draft Initial guide with network, authentication, and encryption controls Claude Code (Opus 4.5)

Contributing

Found an issue or want to improve this guide?