v0.3.0-draft AI Drafted

Snowflake Hardening Guide

Last updated: 2026-02-19

Data warehouse security including network policies, MFA enforcement, and access controls

Overview

Snowflake is a cloud data platform. The 2024 breach campaign against its customers affected 165+ organizations (AT&T, Ticketmaster, Santander) and demonstrated catastrophic supply-chain risk: more than 500 million individuals had data exposed via credential-stuffing attacks on accounts without MFA. OAuth integrations with Tableau, Looker, and Power BI create broad access chains to sensitive data. AT&T reportedly paid a $370,000 ransom, and 32 consolidated lawsuits are pending.

Intended Audience

  • Security engineers managing Snowflake security
  • Data engineers configuring access controls
  • GRC professionals assessing data platform compliance
  • Third-party risk managers evaluating BI tool integrations

How to Use This Guide

  • L1 (Baseline): Essential controls for all organizations
  • L2 (Hardened): Enhanced controls for security-sensitive environments
  • L3 (Maximum Security): Strictest controls for regulated industries (Virtual Private Snowflake deployments)

Scope

This guide covers Snowflake-specific security configurations including authentication, network policies, data sharing governance, and BI tool integration security.


Table of Contents

  1. Authentication & Access Controls
  2. Network Access Controls
  3. OAuth & Integration Security
  4. Data Security
  5. Monitoring & Detection
  6. Third-Party Integration Security
  7. Compliance Quick Reference

1. Authentication & Access Controls

1.1 Enforce MFA for All Users

Profile Level: L1 (Baseline) - CRITICAL | CIS Controls: 6.3, 6.5 | NIST 800-53: IA-2(1), IA-2(2)

Description

Require multi-factor authentication for ALL Snowflake users. The 2024 breach was enabled by credential stuffing against accounts without MFA.

Rationale

Why This Matters:

  • 165+ organizations breached via simple credential stuffing
  • No MFA = trivial account takeover
  • Every account compromised in the 2024 breach lacked MFA

Attack Prevented: Credential stuffing, password spray, account takeover

Real-World Incidents:

  • 2024 Snowflake Breach: The UNC5537 threat actor used stolen credentials to access 165+ customer accounts, affecting AT&T, Ticketmaster, Santander, LendingTree, and others. AT&T reportedly paid a $370,000 ransom; 500+ million individuals had data exposed.

Prerequisites

  • ACCOUNTADMIN role access
  • User inventory for enrollment tracking
  • Communication plan for MFA rollout

ClickOps Implementation

Step 1: Enable MFA at Account Level

  1. Navigate to: Admin → Security (Snowsight)
  2. Under Authentication, enable:
    • Multi-Factor Authentication: Required
    • MFA Policy: Enforce for all users

Step 2: Create MFA Network Policy (Enforce MFA Before Password)

  1. Navigate to: Admin → Security → Network Policies
  2. Create policy requiring MFA regardless of network

Step 3: Verify MFA Enrollment

Run the MFA enrollment verification query from the DB Query Code Pack below to check all active users.

Time to Complete: ~15 minutes (policy) + user enrollment time

Validation & Testing

  1. Attempt login without MFA - should be blocked
  2. Complete login with MFA - should succeed
  3. Run enrollment query - all active users should show MFA enabled
  4. Verify service accounts use key-pair authentication

Expected result: No user can authenticate with password-only

Monitoring & Maintenance

Ongoing monitoring: Use the MFA bypass alert and weekly compliance check queries from the DB Query Code Pack below.

Maintenance schedule:

  • Weekly: Review MFA enrollment compliance
  • Monthly: Audit MFA bypass exceptions
  • Quarterly: Review authentication policies

Operational Impact

Aspect              | Impact Level | Details
User Experience     | Low          | Users enroll once, authenticate via app
System Performance  | None         | No performance impact
Maintenance Burden  | Low          | Self-service enrollment
Rollback Difficulty | Easy         | Can disable policy (not recommended)

Rollback Procedure: Emergency MFA disable is available via the DB Query Code Pack below (not recommended).

Compliance Mappings

Framework   | Control ID       | Control Description
SOC 2       | CC6.1            | Logical access controls
NIST 800-53 | IA-2(1), IA-2(2) | MFA for network/local access
PCI DSS     | 8.3.1            | MFA for all access
HIPAA       | 164.312(d)       | Person or entity authentication

Code Pack: API Script
hth-snowflake-1.01-enforce-mfa.sh View source on GitHub ↗
# Identify users without MFA enrolled
MFA_AUDIT=$(snow_query "
SELECT
    name,
    login_name,
    ext_authn_duo,
    has_rsa_public_key,
    disabled,
    last_success_login
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
  AND disabled = 'false'
ORDER BY name;
") || {
  fail "1.1 Failed to query user MFA status"
  increment_failed
  summary
  exit 0
}

# Count users without MFA
NO_MFA_COUNT=$(echo "${MFA_AUDIT}" | jq '[.[] | select(.EXT_AUTHN_DUO == "false" and .HAS_RSA_PUBLIC_KEY == "false")] | length' 2>/dev/null || echo "unknown")
TOTAL_COUNT=$(echo "${MFA_AUDIT}" | jq 'length' 2>/dev/null || echo "unknown")
info "1.1 Users without MFA or key-pair auth: ${NO_MFA_COUNT} of ${TOTAL_COUNT} active users"
# Create authentication policy requiring MFA for all users
info "1.1 Creating authentication policy to enforce MFA..."
snow_exec "
CREATE AUTHENTICATION POLICY IF NOT EXISTS hth_require_mfa
  MFA_AUTHENTICATION_METHODS = ('TOTP')
  CLIENT_TYPES = ('SNOWFLAKE_UI', 'SNOWSQL', 'DRIVERS')
  COMMENT = 'HTH: Enforce MFA for all human users (Control 1.1)';
" > /dev/null 2>&1 || {
  fail "1.1 Failed to create authentication policy"
  increment_failed
  summary
  exit 0
}

# Attach policy at account level
info "1.1 Attaching authentication policy to account..."
snow_exec "
ALTER ACCOUNT SET AUTHENTICATION POLICY hth_require_mfa;
" > /dev/null 2>&1 || {
  warn "1.1 Could not attach policy at account level (may require ACCOUNTADMIN)"
  increment_failed
  summary
  exit 0
}
Code Pack: DB Query
hth-snowflake-1.01-enforce-mfa.sql View source on GitHub ↗
-- Check MFA enrollment status for all active users
SELECT
    name,
    login_name,
    ext_authn_duo,
    ext_authn_uid,
    disabled,
    last_success_login
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
ORDER BY ext_authn_duo DESC;
-- Alert on MFA bypass attempts (last 24 hours)
SELECT *
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE IS_SUCCESS = 'NO'
  AND ERROR_MESSAGE LIKE '%MFA%'
  AND EVENT_TIMESTAMP > DATEADD(hour, -24, CURRENT_TIMESTAMP());

-- Weekly MFA compliance check
SELECT
    COUNT(CASE WHEN ext_authn_duo = 'true' THEN 1 END) as mfa_enabled,
    COUNT(CASE WHEN ext_authn_duo = 'false' OR ext_authn_duo IS NULL THEN 1 END) as mfa_disabled,
    COUNT(*) as total_users
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
  AND disabled = 'false';
-- Emergency MFA disable (NOT RECOMMENDED)
ALTER ACCOUNT UNSET AUTHENTICATION POLICY;
Code Pack: Sigma Detection Rule
hth-snowflake-1.01-mfa-bypass-attempt.yml View source on GitHub ↗
detection:
    selection:
        event_type: 'LOGIN'
        is_success: 'YES'
    filter_mfa:
        second_authentication_factor|contains:
            - 'MFA_TOKEN'
            - 'DUO_PASSCODE'
    filter_keypair:
        first_authentication_factor: 'RSA_KEYPAIR'
    condition: selection and not filter_mfa and not filter_keypair
fields:
    - user_name
    - client_ip
    - reported_client_type
    - first_authentication_factor
    - second_authentication_factor
    - event_timestamp

1.2 Implement Service Account Key-Pair Authentication

Profile Level: L1 (Baseline) | NIST 800-53: IA-5

Description

Replace password authentication for service accounts with RSA key-pair authentication. Eliminates credential stuffing risk for automated processes.

Rationale

Why This Matters:

  • Service accounts can’t use interactive MFA
  • Password-based service accounts were compromised in 2024 breach
  • Key-pair authentication is immune to credential stuffing

ClickOps Implementation

Step 1: Generate RSA Key Pair using OpenSSL to create a 2048-bit private key and extract the public key in Snowflake format.

Step 2: Configure User with Key-Pair by assigning the public key to the service account and removing its password.

Step 3: Update Application Connection to use the private key file instead of a password.

Code Pack: CLI Script
hth-snowflake-1.02-generate-key-pair.sh View source on GitHub ↗
# Generate private key (keep secure!)
openssl genrsa -out rsa_key.pem 2048

# Generate public key
openssl rsa -in rsa_key.pem -pubout -out rsa_key.pub

# Extract public key in Snowflake format
grep -v "PUBLIC KEY" rsa_key.pub | tr -d '\n'
Code Pack: SDK Script
hth-snowflake-1.02-key-pair-connection.py View source on GitHub ↗
import snowflake.connector

conn = snowflake.connector.connect(
    account='your_account',
    user='svc_etl_pipeline',
    private_key_file='/path/to/rsa_key.pem',
    warehouse='ETL_WH',
    database='PRODUCTION'
)
Code Pack: DB Query
hth-snowflake-1.02-configure-key-pair-auth.sql View source on GitHub ↗
-- Assign the public key, then remove the password (separate ALTER statements)
ALTER USER svc_etl_pipeline SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqhki...';
ALTER USER svc_etl_pipeline UNSET PASSWORD;

-- Verify
DESC USER svc_etl_pipeline;

1.3 Implement RBAC with Custom Roles

Profile Level: L1 (Baseline) | NIST 800-53: AC-3, AC-6

Description

Create granular role hierarchy instead of granting broad SYSADMIN or ACCOUNTADMIN access. Implement least privilege for data access.

ClickOps Implementation

Step 1: Design Role Hierarchy

  1. Navigate to: Admin → Account → Roles
  2. Create functional roles (data_analyst, data_engineer, security_admin)
  3. Create object access roles (sales_data_reader, sales_data_writer, pii_data_reader)
  4. Grant object access roles to functional roles
  5. Grant functional roles to users

Step 2: Restrict ACCOUNTADMIN

  1. Navigate to: Admin → Account → Roles → ACCOUNTADMIN
  2. Review all members with ACCOUNTADMIN access
  3. Remove unnecessary ACCOUNTADMIN grants
  4. Document break-glass procedure for emergency admin access
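The Step 2 audit and cleanup can also be done directly in SQL; the user and role names below are placeholders:

```sql
-- List everyone who can use ACCOUNTADMIN
SHOW GRANTS OF ROLE ACCOUNTADMIN;

-- Remove an unnecessary grant (placeholder user name)
REVOKE ROLE ACCOUNTADMIN FROM USER former_admin;

-- Grant a scoped functional role instead
GRANT ROLE data_engineer TO USER former_admin;
```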

Code Pack: API Script
hth-snowflake-1.03-implement-rbac.sh View source on GitHub ↗
# Audit ACCOUNTADMIN grants -- this role should have minimal direct assignments
info "1.3 Auditing ACCOUNTADMIN role grants..."
ADMIN_GRANTS=$(snow_query "SHOW GRANTS OF ROLE ACCOUNTADMIN;") || {
  fail "1.3 Failed to query ACCOUNTADMIN grants"
  increment_failed
  summary
  exit 0
}

ADMIN_COUNT=$(echo "${ADMIN_GRANTS}" | jq '[.[] | select(.granted_to == "USER")] | length' 2>/dev/null || echo "unknown")

if [ "${ADMIN_COUNT}" = "unknown" ]; then
  warn "1.3 Could not parse ACCOUNTADMIN grant count"
elif [ "${ADMIN_COUNT}" -gt 2 ]; then
  warn "1.3 ACCOUNTADMIN granted to ${ADMIN_COUNT} users (recommend <= 2)"
  echo "${ADMIN_GRANTS}" | jq -r '.[] | select(.granted_to == "USER") | "  - \(.grantee_name)"' 2>/dev/null || true
else
  pass "1.3 ACCOUNTADMIN grant count is acceptable (${ADMIN_COUNT} users)"
fi
# Create custom role hierarchy following least-privilege principle
info "1.3 Creating custom role hierarchy..."
snow_exec "
-- Functional roles
CREATE ROLE IF NOT EXISTS HTH_DATA_READER
  COMMENT = 'HTH: Read-only access to production data (Control 1.3)';
CREATE ROLE IF NOT EXISTS HTH_DATA_WRITER
  COMMENT = 'HTH: Read-write access to production data (Control 1.3)';
CREATE ROLE IF NOT EXISTS HTH_DATA_ANALYST
  COMMENT = 'HTH: Analyst role with warehouse usage and read access (Control 1.3)';
CREATE ROLE IF NOT EXISTS HTH_SECURITY_ADMIN
  COMMENT = 'HTH: Security administration without full ACCOUNTADMIN (Control 1.3)';

-- Hierarchy: DATA_READER rolls up to DATA_WRITER and DATA_ANALYST;
-- DATA_WRITER rolls up to SYSADMIN; SECURITY_ADMIN rolls up to ACCOUNTADMIN
GRANT ROLE HTH_DATA_READER TO ROLE HTH_DATA_WRITER;
GRANT ROLE HTH_DATA_READER TO ROLE HTH_DATA_ANALYST;
GRANT ROLE HTH_DATA_WRITER TO ROLE SYSADMIN;
GRANT ROLE HTH_SECURITY_ADMIN TO ROLE ACCOUNTADMIN;
" > /dev/null 2>&1 || {
  fail "1.3 Failed to create custom role hierarchy"
  increment_failed
  summary
  exit 0
}

2. Network Access Controls

2.1 Implement Network Policies

Profile Level: L1 (Baseline) | CIS Controls: 13.3 | NIST 800-53: AC-3, SC-7

Description

Restrict Snowflake access to known IP ranges (corporate network, VPN, approved BI tool IPs). Block access from unauthorized networks.

Rationale

Why This Matters:

  • 2024 attackers accessed accounts from attacker-controlled infrastructure
  • IP restrictions would have blocked compromised credential usage
  • Network policies are defense-in-depth for credential theft

Attack Prevented: Credential stuffing from botnets, unauthorized access from foreign locations

ClickOps Implementation

Step 1: Create Network Policy

  1. Navigate to: Admin → Security → Network Policies
  2. Click Add Policy
  3. Configure:
    • Name: corporate_access
    • Allowed IPs: Corporate ranges, VPN egress
    • Blocked IPs: Known bad ranges (optional)

Step 2: Apply Network Policy

Apply the network policy at account level or per-user using the SQL commands in the DB Query Code Pack below.

Validation & Testing

Verify network policy assignments using the validation queries in the DB Query Code Pack below.


Code Pack: Terraform
hth-snowflake-2.01-implement-network-policies.tf View source on GitHub ↗
# Network policy restricting access to corporate IP ranges
resource "snowflake_network_policy" "corporate_access" {
  name    = "HTH_CORPORATE_ACCESS"
  comment = "HTH: Restrict account access to corporate IP ranges (Control 2.1)"

  allowed_ip_list = var.allowed_ip_list
  blocked_ip_list = var.blocked_ip_list
}

# Attach network policy at account level
resource "snowflake_network_policy_attachment" "account_level" {
  network_policy_name = snowflake_network_policy.corporate_access.name
  set_for_account     = true
}

# Optional: stricter network policy for service accounts
resource "snowflake_network_policy" "service_accounts" {
  count = length(var.service_account_allowed_ips) > 0 ? 1 : 0

  name    = "HTH_SERVICE_ACCOUNT_ACCESS"
  comment = "HTH: Restrict service account access to known IPs (Control 2.1)"

  allowed_ip_list = var.service_account_allowed_ips
  blocked_ip_list = []
}
Code Pack: API Script
hth-snowflake-2.01-implement-network-policies.sh View source on GitHub ↗
# Audit existing network policies
info "2.1 Auditing existing network policies..."
EXISTING_POLICIES=$(snow_query "SHOW NETWORK POLICIES;") || {
  warn "2.1 No network policies found or query failed"
  EXISTING_POLICIES="[]"
}

POLICY_COUNT=$(echo "${EXISTING_POLICIES}" | jq 'length' 2>/dev/null || echo "0")

if [ "${POLICY_COUNT}" -gt 0 ]; then
  info "2.1 Found ${POLICY_COUNT} existing network policies:"
  echo "${EXISTING_POLICIES}" | jq -r '.[] | "  - \(.name): allowed=\(.allowed_ip_list // "none"), blocked=\(.blocked_ip_list // "none")"' 2>/dev/null || true
else
  warn "2.1 No network policies configured -- account is accessible from any IP"
fi
# SNOWFLAKE_ALLOWED_IPS must be a quoted, comma-separated CIDR list,
# e.g. '10.0.0.0/8','203.0.113.0/24'
info "2.1 Creating network policy with allowed IPs: ${SNOWFLAKE_ALLOWED_IPS}"
snow_exec "
CREATE NETWORK POLICY IF NOT EXISTS hth_corporate_access
  ALLOWED_IP_LIST = (${SNOWFLAKE_ALLOWED_IPS})
  COMMENT = 'HTH: Restrict access to corporate IPs (Control 2.1)';
" > /dev/null 2>&1 || {
  fail "2.1 Failed to create network policy"
  increment_failed
  summary
  exit 0
}

# Activate at account level
info "2.1 Activating network policy at account level..."
snow_exec "
ALTER ACCOUNT SET NETWORK_POLICY = 'hth_corporate_access';
" > /dev/null 2>&1 || {
  fail "2.1 Failed to activate network policy (requires ACCOUNTADMIN)"
  increment_failed
  summary
  exit 0
}
Code Pack: DB Query
hth-snowflake-2.01-implement-network-policies.sql View source on GitHub ↗
-- Apply network policy to account (affects all users)
ALTER ACCOUNT SET NETWORK_POLICY = corporate_access;

-- Or apply to specific users only
ALTER USER external_partner SET NETWORK_POLICY = partner_network_policy;
-- Test from allowed IP - should succeed
SELECT CURRENT_USER();

-- View network policy assignments
SHOW PARAMETERS LIKE 'NETWORK_POLICY' IN ACCOUNT;
SHOW PARAMETERS LIKE 'NETWORK_POLICY' IN USER svc_tableau;
Code Pack: Sigma Detection Rule
hth-snowflake-2.01-network-policy-modified.yml View source on GitHub ↗
detection:
    selection_create:
        query_text|contains:
            - 'CREATE NETWORK POLICY'
            - 'CREATE OR REPLACE NETWORK POLICY'
    selection_alter:
        query_text|contains:
            - 'ALTER NETWORK POLICY'
            - 'ALTER ACCOUNT SET NETWORK_POLICY'
            - 'ALTER ACCOUNT UNSET NETWORK_POLICY'
    selection_drop:
        query_text|contains:
            - 'DROP NETWORK POLICY'
    filter_success:
        execution_status: 'SUCCESS'
    condition: (selection_create or selection_alter or selection_drop) and filter_success
fields:
    - user_name
    - role_name
    - query_text
    - start_time
    - client_ip
    - execution_status

2.2 Enable Private Connectivity (PrivateLink/Private Service Connect)

Profile Level: L2 (Hardened) | NIST 800-53: SC-7

Description

Configure private network connectivity to Snowflake, eliminating exposure to public internet.

ClickOps Implementation

AWS PrivateLink:

  1. Navigate to: Admin → Security → Private Connectivity
  2. Enable PrivateLink
  3. Configure VPC endpoint in AWS
  4. Update DNS for private resolution

Azure Private Link:

  1. Similar process for Azure environments
  2. Configure Private Endpoint in Azure
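As a sketch, the AWS-side authorization can also be performed from SQL using the account-level system functions; the AWS account ID and federated token values below are placeholders you must supply:

```sql
-- Authorize an AWS account for PrivateLink (placeholder values)
SELECT SYSTEM$AUTHORIZE_PRIVATELINK('123456789012', '<federated_token>');

-- Confirm the resulting endpoint configuration
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();
```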
Code Pack: DB Query
hth-snowflake-2.02-verify-private-connectivity.sql View source on GitHub ↗
-- Verify private connectivity configuration
SELECT SYSTEM$GET_PRIVATELINK_CONFIG();

3. OAuth & Integration Security

3.1 Restrict OAuth Token Scope and Lifetime

Profile Level: L1 (Baseline) | NIST 800-53: IA-5(13)

Description

Configure OAuth security integrations with minimum required scopes and short token lifetimes for BI tool connections.

Rationale

Why This Matters:

  • OAuth tokens for Tableau, Looker, Power BI access data
  • Long-lived tokens create persistent risk
  • Stolen OAuth tokens enabled downstream access in 2024 breach

ClickOps Implementation

Step 1: Audit Existing Security Integrations

List and inspect all security integrations using the audit queries in the DB Query Code Pack below.

Step 2: Configure OAuth Integration

  1. Create a new OAuth security integration for your BI tool
  2. Set token refresh validity to 86400 seconds (1 day) instead of the 90-day default
  3. Add ACCOUNTADMIN, SECURITYADMIN, and SYSADMIN to the blocked roles list

Step 3: Block High-Privilege Roles from OAuth

  1. Edit the security integration to ensure ACCOUNTADMIN, SECURITYADMIN, SYSADMIN, and ORGADMIN are all in the blocked roles list
  2. Verify no admin roles can authenticate via OAuth tokens
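Steps 2 and 3 can be sketched in SQL as follows; the integration name and redirect URI are placeholders for your BI tool's values:

```sql
CREATE SECURITY INTEGRATION IF NOT EXISTS bi_tool_oauth
    TYPE = OAUTH
    ENABLED = TRUE
    OAUTH_CLIENT = CUSTOM
    OAUTH_CLIENT_TYPE = 'CONFIDENTIAL'
    OAUTH_REDIRECT_URI = 'https://your-bi-tool.example.com/oauth/callback'
    OAUTH_ISSUE_REFRESH_TOKENS = TRUE
    OAUTH_REFRESH_TOKEN_VALIDITY = 86400  -- 1 day instead of the 90-day default
    BLOCKED_ROLES_LIST = ('ACCOUNTADMIN', 'SECURITYADMIN', 'SYSADMIN', 'ORGADMIN');
```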

Code Pack: Terraform
hth-snowflake-3.01-restrict-oauth-scopes.tf View source on GitHub ↗
# Snowflake OAuth security integration with restricted scopes
resource "snowflake_security_integration" "oauth_restricted" {
  name    = "HTH_OAUTH_RESTRICTED"
  type    = "OAUTH"
  comment = "HTH: OAuth integration with restricted scopes and blocked admin roles (Control 3.1)"

  oauth_client                       = "CUSTOM"
  oauth_client_type                  = "CONFIDENTIAL"
  oauth_redirect_uri                 = var.oauth_redirect_uri
  oauth_issue_refresh_tokens         = true
  oauth_refresh_token_validity       = var.oauth_refresh_token_validity
  oauth_enforce_pkce                 = "OPTIONAL"

  # Block privileged roles from OAuth access
  blocked_roles_list = [
    "ACCOUNTADMIN",
    "SECURITYADMIN",
    "ORGADMIN",
  ]

  enabled = true
}

# External OAuth integration (L2) for IdP-issued tokens
resource "snowflake_security_integration" "external_oauth" {
  count = var.profile_level >= 2 ? 1 : 0

  name    = "HTH_EXTERNAL_OAUTH"
  type    = "EXTERNAL_OAUTH"
  comment = "HTH: External OAuth with IdP-issued tokens (Control 3.2)"

  external_oauth_type               = var.external_oauth_type
  external_oauth_issuer             = var.external_oauth_issuer
  external_oauth_token_user_mapping_claim = ["sub"]
  external_oauth_snowflake_user_mapping_attribute = "login_name"
  external_oauth_jws_keys_url       = var.external_oauth_jws_keys_url

  blocked_roles_list = [
    "ACCOUNTADMIN",
    "SECURITYADMIN",
    "ORGADMIN",
  ]

  enabled = true
}
Code Pack: API Script
hth-snowflake-3.01-restrict-oauth-scopes.sh View source on GitHub ↗
# Audit all security integrations for OAuth scope and lifetime
info "3.1 Auditing security integrations..."
INTEGRATIONS=$(snow_query "SHOW SECURITY INTEGRATIONS;") || {
  warn "3.1 No security integrations found or query failed"
  INTEGRATIONS="[]"
}

OAUTH_COUNT=$(echo "${INTEGRATIONS}" | jq '[.[] | select(.type == "OAUTH - SNOWFLAKE" or .type == "OAUTH - CUSTOM" or .type == "OAUTH - EXTERNAL")] | length' 2>/dev/null || echo "0")

if [ "${OAUTH_COUNT}" -gt 0 ]; then
  info "3.1 Found ${OAUTH_COUNT} OAuth integration(s):"
  echo "${INTEGRATIONS}" | jq -r '.[] | select(.type | test("OAUTH")) | "  - \(.name) [\(.type)] enabled=\(.enabled)"' 2>/dev/null || true
else
  info "3.1 No OAuth integrations found"
fi
# Verify all OAuth integrations block privileged roles
info "3.1 Checking OAuth integrations for blocked role lists..."
INTEGRATION_NAMES=$(echo "${INTEGRATIONS}" | jq -r '.[] | select(.type | test("OAUTH")) | .name' 2>/dev/null || true)

ISSUES_FOUND=0
while IFS= read -r int_name; do
  [ -z "${int_name}" ] && continue
  DETAIL=$(snow_query "DESCRIBE SECURITY INTEGRATION \"${int_name}\";") || {
    warn "3.1 Could not describe integration: ${int_name}"
    continue
  }

  # Check for BLOCKED_ROLES_LIST
  BLOCKED=$(echo "${DETAIL}" | jq -r '.[] | select(.property == "BLOCKED_ROLES_LIST") | .property_value' 2>/dev/null || echo "")

  if [ -z "${BLOCKED}" ] || [ "${BLOCKED}" = "ACCOUNTADMIN, SECURITYADMIN" ]; then
    # Default -- check it at minimum blocks ACCOUNTADMIN
    if echo "${BLOCKED}" | grep -q "ACCOUNTADMIN"; then
      pass "3.1 Integration ${int_name} blocks ACCOUNTADMIN"
    else
      fail "3.1 Integration ${int_name} does NOT block ACCOUNTADMIN"
      ISSUES_FOUND=$((ISSUES_FOUND + 1))
    fi
  else
    info "3.1 Integration ${int_name} blocked roles: ${BLOCKED}"
  fi

  # Check token lifetime
  TOKEN_LIFETIME=$(echo "${DETAIL}" | jq -r '.[] | select(.property == "OAUTH_ACCESS_TOKEN_VALIDITY") | .property_value' 2>/dev/null || echo "")
  REFRESH_LIFETIME=$(echo "${DETAIL}" | jq -r '.[] | select(.property == "OAUTH_REFRESH_TOKEN_VALIDITY") | .property_value' 2>/dev/null || echo "")

  if [ -n "${TOKEN_LIFETIME}" ]; then
    if [ "${TOKEN_LIFETIME}" -le 600 ]; then
      pass "3.1 Integration ${int_name} access token lifetime: ${TOKEN_LIFETIME}s (<=600s)"
    else
      warn "3.1 Integration ${int_name} access token lifetime: ${TOKEN_LIFETIME}s (recommend <=600s)"
    fi
  fi
done <<< "${INTEGRATION_NAMES}"
Code Pack: DB Query
hth-snowflake-3.01-audit-security-integrations.sql View source on GitHub ↗
-- List all security integrations
SHOW SECURITY INTEGRATIONS;

-- Describe OAuth integration details
DESC SECURITY INTEGRATION tableau_oauth;

3.2 Implement External OAuth (IdP Integration)

Profile Level: L2 (Hardened) | NIST 800-53: IA-2(1)

Description

Configure External OAuth using your identity provider (Okta, Azure AD) for centralized authentication and MFA enforcement.

Code Implementation

Code Pack: DB Query
hth-snowflake-3.02-configure-external-oauth.sql View source on GitHub ↗
-- Create External OAuth integration with Okta
CREATE OR REPLACE SECURITY INTEGRATION okta_oauth
    TYPE = EXTERNAL_OAUTH
    ENABLED = TRUE
    EXTERNAL_OAUTH_TYPE = OKTA
    EXTERNAL_OAUTH_ISSUER = 'https://your-org.okta.com/oauth2/default'
    EXTERNAL_OAUTH_JWS_KEYS_URL = 'https://your-org.okta.com/oauth2/default/v1/keys'
    EXTERNAL_OAUTH_AUDIENCE_LIST = ('your-snowflake-account')
    EXTERNAL_OAUTH_TOKEN_USER_MAPPING_CLAIM = 'sub'
    EXTERNAL_OAUTH_SNOWFLAKE_USER_MAPPING_ATTRIBUTE = 'LOGIN_NAME';

-- For Azure AD
CREATE OR REPLACE SECURITY INTEGRATION azure_ad_oauth
    TYPE = EXTERNAL_OAUTH
    ENABLED = TRUE
    EXTERNAL_OAUTH_TYPE = AZURE
    EXTERNAL_OAUTH_ISSUER = 'https://login.microsoftonline.com/{tenant-id}/v2.0'
    EXTERNAL_OAUTH_JWS_KEYS_URL = 'https://login.microsoftonline.com/{tenant-id}/discovery/v2.0/keys'
    EXTERNAL_OAUTH_AUDIENCE_LIST = ('your-snowflake-account');

4. Data Security

4.1 Implement Column-Level Security with Masking Policies

Profile Level: L2 (Hardened) | NIST 800-53: AC-3, SC-28

Description

Apply dynamic data masking to sensitive columns (PII, financial data) to restrict visibility based on user role.

ClickOps Implementation

Step 1: Create Masking Policies

  1. Navigate to: Data → Databases → [Database] → Policies
  2. Create masking policy for SSN that returns full value for PII_ADMIN role, masked value (XXX-XX-####) for all others
  3. Create masking policy for email that returns full value for PII_ADMIN and CUSTOMER_SERVICE roles, masked value for all others

Step 2: Apply Masking Policies to Columns

  1. Navigate to the target table and column
  2. Set the SSN masking policy on the ssn column
  3. Set the email masking policy on the email column
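The same policies in SQL form, as a sketch (the table, column, and role names are placeholders):

```sql
-- SSN: full value for PII_ADMIN, last four digits for everyone else
CREATE OR REPLACE MASKING POLICY mask_ssn AS (val STRING)
RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() = 'PII_ADMIN' THEN val
        ELSE CONCAT('XXX-XX-', RIGHT(val, 4))
    END;

-- Email: full value for PII_ADMIN and CUSTOMER_SERVICE, masked otherwise
CREATE OR REPLACE MASKING POLICY mask_email AS (val STRING)
RETURNS STRING ->
    CASE
        WHEN CURRENT_ROLE() IN ('PII_ADMIN', 'CUSTOMER_SERVICE') THEN val
        ELSE REGEXP_REPLACE(val, '.+@', '***@')
    END;

-- Attach the policies to the target columns
ALTER TABLE customers MODIFY COLUMN ssn SET MASKING POLICY mask_ssn;
ALTER TABLE customers MODIFY COLUMN email SET MASKING POLICY mask_email;
```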

Code Pack: Terraform
hth-snowflake-4.01-column-masking.tf View source on GitHub ↗
# Dynamic data masking policy for email addresses
resource "snowflake_masking_policy" "mask_email" {
  count = var.profile_level >= 2 ? 1 : 0

  name     = "HTH_MASK_EMAIL"
  database = var.target_database
  schema   = var.target_schema
  comment  = "HTH: Mask email addresses for non-privileged roles (Control 4.1)"

  signature {
    column {
      name = "val"
      type = "VARCHAR"
    }
  }

  masking_expression = <<-EOT
    CASE
      WHEN CURRENT_ROLE() IN ('HTH_DATA_WRITER', 'SYSADMIN', 'ACCOUNTADMIN')
        THEN val
      WHEN CURRENT_ROLE() = 'HTH_DATA_ANALYST'
        THEN REGEXP_REPLACE(val, '.+@', '***@')
      ELSE '********'
    END
  EOT

  return_data_type = "VARCHAR"
}

# Dynamic data masking policy for SSN / national ID
resource "snowflake_masking_policy" "mask_ssn" {
  count = var.profile_level >= 2 ? 1 : 0

  name     = "HTH_MASK_SSN"
  database = var.target_database
  schema   = var.target_schema
  comment  = "HTH: Mask SSN/national ID for non-privileged roles (Control 4.1)"

  signature {
    column {
      name = "val"
      type = "VARCHAR"
    }
  }

  masking_expression = <<-EOT
    CASE
      WHEN CURRENT_ROLE() IN ('SYSADMIN', 'ACCOUNTADMIN')
        THEN val
      ELSE CONCAT('***-**-', RIGHT(val, 4))
    END
  EOT

  return_data_type = "VARCHAR"
}

# Row access policy for multi-tenant data isolation
resource "snowflake_row_access_policy" "tenant_isolation" {
  count = var.profile_level >= 2 ? 1 : 0

  name     = "HTH_TENANT_ISOLATION"
  database = var.target_database
  schema   = var.target_schema
  comment  = "HTH: Row-level security for multi-tenant data isolation (Control 4.2)"

  signature {
    column {
      name = "tenant_id"
      type = "VARCHAR"
    }
  }

  row_access_expression = <<-EOT
    CASE
      WHEN CURRENT_ROLE() IN ('SYSADMIN', 'ACCOUNTADMIN')
        THEN TRUE
      -- Assumes the client sets a TENANT_ID session variable after connecting;
      -- an entitlements mapping table is the common alternative
      ELSE tenant_id = GETVARIABLE('TENANT_ID')
    END
  EOT
}

4.2 Enable Row Access Policies

Profile Level: L2 (Hardened) | NIST 800-53: AC-3

Description

Implement row-level security to restrict data visibility based on user attributes (department, region, customer assignment).

Code Pack: DB Query
hth-snowflake-4.02-row-access-policy.sql View source on GitHub ↗
-- Create row access policy backed by an entitlements mapping table
-- (assumes a security.region_entitlements table mapping user_name to region)
CREATE OR REPLACE ROW ACCESS POLICY region_access AS (region_col VARCHAR)
RETURNS BOOLEAN ->
    CURRENT_ROLE() IN ('DATA_ADMIN')
    OR EXISTS (
        SELECT 1
        FROM security.region_entitlements e
        WHERE e.user_name = CURRENT_USER()
          AND e.region = region_col
    );

-- Apply to table
ALTER TABLE sales ADD ROW ACCESS POLICY region_access ON (region);

4.3 Restrict Data Sharing

Profile Level: L1 (Baseline) | NIST 800-53: AC-21

Description

Audit and control Snowflake data sharing to external accounts. Prevent accidental data exposure via shares.

Code Pack: DB Query
hth-snowflake-4.03-restrict-data-sharing.sql View source on GitHub ↗
-- Audit existing shares
SHOW SHARES;

-- Review who has access
SHOW GRANTS ON SHARE customer_data_share;

-- Remove access
REVOKE USAGE ON DATABASE customers FROM SHARE customer_data_share;

5. Monitoring & Detection

5.1 Enable Comprehensive Audit Logging

Profile Level: L1 (Baseline) | NIST 800-53: AU-2, AU-3, AU-6

Description

Configure access to SNOWFLAKE.ACCOUNT_USAGE schema for security monitoring and anomaly detection.

Detection Use Cases

Key anomaly detection queries are provided in the code pack below. These cover:

  • Anomaly 1: Failed Login Spike – Detect credential stuffing by identifying users/IPs with 10+ failed logins per hour
  • Anomaly 2: Bulk Data Export – Flag SELECT queries returning 1M+ rows (potential exfiltration)
  • Anomaly 3: New IP Address Access – Identify successful logins from IPs not seen in the prior 7 days
  • Anomaly 4: Privilege Escalation – Monitor for GRANT or ALTER statements targeting ACCOUNTADMIN
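Anomalies 2 and 3 can be sketched directly against ACCOUNT_USAGE; the thresholds below are starting points, not tuned values:

```sql
-- Anomaly 2: SELECTs returning 1M+ rows in the last 24 hours
SELECT user_name, warehouse_name, rows_produced, start_time, query_text
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time > DATEADD(hour, -24, CURRENT_TIMESTAMP())
  AND query_type = 'SELECT'
  AND rows_produced >= 1000000
ORDER BY rows_produced DESC;

-- Anomaly 3: successful logins from IPs not seen in the prior 7 days
SELECT user_name, client_ip, MIN(event_timestamp) AS first_seen
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE is_success = 'YES'
  AND event_timestamp > DATEADD(day, -1, CURRENT_TIMESTAMP())
  AND client_ip NOT IN (
      SELECT DISTINCT client_ip
      FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
      WHERE event_timestamp BETWEEN DATEADD(day, -8, CURRENT_TIMESTAMP())
                                AND DATEADD(day, -1, CURRENT_TIMESTAMP())
  )
GROUP BY user_name, client_ip;
```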

Code Pack: API Script
hth-snowflake-5.01-enable-audit-logging.sh View source on GitHub ↗
# Query recent login history for anomalies
info "5.1 Querying login history (last 24 hours)..."
LOGIN_HISTORY=$(snow_query "
SELECT
    user_name,
    client_ip,
    reported_client_type,
    first_authentication_factor,
    second_authentication_factor,
    is_success,
    error_code,
    error_message,
    event_timestamp
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE event_timestamp > DATEADD('hour', -24, CURRENT_TIMESTAMP())
ORDER BY event_timestamp DESC
LIMIT 500;
") || {
  fail "5.1 Failed to query login history (check ACCOUNTADMIN or SNOWFLAKE database access)"
  increment_failed
  summary
  exit 0
}

TOTAL_LOGINS=$(echo "${LOGIN_HISTORY}" | jq 'length' 2>/dev/null || echo "0")
FAILED_LOGINS=$(echo "${LOGIN_HISTORY}" | jq '[.[] | select(.IS_SUCCESS == "NO")] | length' 2>/dev/null || echo "0")

info "5.1 Last 24h: ${TOTAL_LOGINS} total logins, ${FAILED_LOGINS} failed"

if [ "${FAILED_LOGINS}" -gt 0 ]; then
  warn "5.1 Failed login attempts detected:"
  echo "${LOGIN_HISTORY}" | jq -r '.[] | select(.IS_SUCCESS == "NO") | "  - \(.USER_NAME) from \(.CLIENT_IP) [\(.REPORTED_CLIENT_TYPE)] error: \(.ERROR_MESSAGE)"' 2>/dev/null | head -20 || true
fi
# Detect suspicious patterns: off-hours access, unusual clients, brute force
info "5.1 Checking for brute-force patterns..."
BRUTE_FORCE=$(snow_query "
SELECT
    user_name,
    client_ip,
    COUNT(*) AS failed_attempts
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE event_timestamp > DATEADD('hour', -1, CURRENT_TIMESTAMP())
  AND is_success = 'NO'
GROUP BY user_name, client_ip
HAVING COUNT(*) >= 5
ORDER BY failed_attempts DESC;
") || {
  warn "5.1 Could not run brute-force detection query"
  BRUTE_FORCE="[]"
}

BRUTE_COUNT=$(echo "${BRUTE_FORCE}" | jq 'length' 2>/dev/null || echo "0")
if [ "${BRUTE_COUNT}" -gt 0 ]; then
  fail "5.1 Potential brute-force detected (>=5 failures/hour):"
  echo "${BRUTE_FORCE}" | jq -r '.[] | "  - \(.USER_NAME) from \(.CLIENT_IP): \(.FAILED_ATTEMPTS) attempts"' 2>/dev/null || true
else
  pass "5.1 No brute-force patterns detected in the last hour"
fi

# Check for access without MFA
info "5.1 Checking for logins without second factor..."
NO_MFA_LOGINS=$(snow_query "
SELECT
    user_name,
    client_ip,
    first_authentication_factor,
    second_authentication_factor,
    COUNT(*) AS login_count
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE event_timestamp > DATEADD('hour', -24, CURRENT_TIMESTAMP())
  AND is_success = 'YES'
  AND (second_authentication_factor IS NULL OR second_authentication_factor = '')
GROUP BY user_name, client_ip, first_authentication_factor, second_authentication_factor
ORDER BY login_count DESC;
") || {
  warn "5.1 Could not query MFA login data"
  NO_MFA_LOGINS="[]"
}

NO_MFA_COUNT=$(echo "${NO_MFA_LOGINS}" | jq 'length' 2>/dev/null || echo "0")
if [ "${NO_MFA_COUNT}" -gt 0 ]; then
  warn "5.1 ${NO_MFA_COUNT} user(s) logged in without second factor in last 24h"
  echo "${NO_MFA_LOGINS}" | jq -r '.[] | "  - \(.USER_NAME) via \(.FIRST_AUTHENTICATION_FACTOR) (\(.LOGIN_COUNT)x)"' 2>/dev/null | head -10 || true
else
  pass "5.1 All successful logins used a second authentication factor"
fi
# Verify ACCOUNT_USAGE schema is accessible (required for audit)
info "5.1 Verifying ACCOUNT_USAGE schema access..."
snow_query "SELECT COUNT(*) AS cnt FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY WHERE start_time > DATEADD('minute', -5, CURRENT_TIMESTAMP());" > /dev/null 2>&1 && {
  pass "5.1 ACCOUNT_USAGE schema is accessible for audit logging"
} || {
  fail "5.1 Cannot access ACCOUNT_USAGE schema -- grant IMPORTED PRIVILEGES on SNOWFLAKE database"
}
Code Pack: Sigma Detection Rule
hth-snowflake-5.01-bulk-data-export.yml View source on GitHub ↗
detection:
    selection_copy:
        query_text|contains:
            - 'COPY INTO @'
            - 'COPY INTO ''s3://'
            - 'COPY INTO ''gcs://'
            - 'COPY INTO ''azure://'
    selection_large_result:
        rows_produced|gte: 1000000
    selection_large_bytes:
        bytes_scanned|gte: 1073741824
    filter_success:
        execution_status: 'SUCCESS'
    condition: (selection_copy or selection_large_result or selection_large_bytes) and filter_success
fields:
    - user_name
    - role_name
    - query_text
    - rows_produced
    - bytes_scanned
    - start_time
    - warehouse_name
    - client_ip
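Expressed outside Sigma, the rule's condition amounts to the following check; the QUERY_HISTORY row below is a hypothetical example (a COPY INTO an internal stage that also produced over a million rows):

```shell
# Hypothetical QUERY_HISTORY row; evaluate the same condition the Sigma
# rule encodes: (copy-to-stage OR large result OR large scan) AND success.
QUERY_ROW='{"QUERY_TEXT":"COPY INTO @export_stage FROM big_table","ROWS_PRODUCED":1200000,"BYTES_SCANNED":5000000,"EXECUTION_STATUS":"SUCCESS"}'
VERDICT=$(echo "${QUERY_ROW}" | jq -r '
  ((.QUERY_TEXT | test("COPY INTO @|COPY INTO .(s3|gcs|azure)://"))
   or .ROWS_PRODUCED >= 1000000
   or .BYTES_SCANNED >= 1073741824)
  and .EXECUTION_STATUS == "SUCCESS"
  | if . then "ALERT" else "ok" end')
echo "${VERDICT}"
```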

5.2 Forward Logs to SIEM

Profile Level: L1 (Baseline)

Description

Export Snowflake audit logs to SIEM (Splunk, Datadog, Sumo Logic) for real-time alerting and correlation.

Code Pack: DB Query
hth-snowflake-5.02-siem-export-task.sql View source on GitHub ↗
-- Create task to export logs to S3/Azure Blob for SIEM ingestion.
-- Note: ACCOUNT_USAGE.LOGIN_HISTORY can lag real time by up to ~2 hours,
-- so a fixed 60-minute window may miss late-arriving rows; widen the
-- window and de-duplicate downstream, or track a high-water mark.
CREATE OR REPLACE TASK export_login_history
    WAREHOUSE = security_wh
    SCHEDULE = '60 MINUTE'
AS
    COPY INTO @security_logs/login_history/
    FROM (
        SELECT *
        FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
        WHERE event_timestamp > DATEADD(minute, -60, CURRENT_TIMESTAMP())
    )
    FILE_FORMAT = (TYPE = JSON);

ALTER TASK export_login_history RESUME;
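The task above writes to a stage named @security_logs, which must already exist. A minimal sketch of backing it with S3 follows; the integration name, bucket, and IAM role ARN are placeholders for your environment:

```sql
-- Hypothetical storage integration and external stage for the export task.
-- Replace the bucket and IAM role ARN with real values.
CREATE STORAGE INTEGRATION security_logs_int
    TYPE = EXTERNAL_STAGE
    STORAGE_PROVIDER = 'S3'
    ENABLED = TRUE
    STORAGE_AWS_ROLE_ARN = 'arn:aws:iam::111111111111:role/snowflake-siem-export'
    STORAGE_ALLOWED_LOCATIONS = ('s3://example-security-logs/');

CREATE STAGE security_logs
    URL = 's3://example-security-logs/'
    STORAGE_INTEGRATION = security_logs_int;
```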

6. Third-Party Integration Security

6.1 Integration Risk Assessment Matrix

Integration | Risk Level | OAuth Scopes | Recommended Controls
Tableau | High | Full data access | IP restriction, role blocking, token rotation
Power BI | High | Full data access | Gateway IP allowlist, limited roles
Looker | High | Full data access | Service account, IP restriction
dbt Cloud | High | Write access | Service account, key-pair auth
Fivetran | Medium | Specific schemas | Limited role, source restrictions

6.2 Tableau Integration Hardening

Controls:

  • ✅ Create dedicated service account with key-pair auth
  • ✅ Restrict to Tableau Server IPs only
  • ✅ Block admin roles from OAuth
  • ✅ Limit to specific databases/schemas
  • ✅ Enable query tagging for monitoring
Code Pack: DB Query
hth-snowflake-6.02-tableau-integration.sql View source on GitHub ↗
-- Create restricted role for Tableau
CREATE ROLE tableau_reader;
GRANT USAGE ON WAREHOUSE bi_warehouse TO ROLE tableau_reader;
GRANT USAGE ON DATABASE analytics TO ROLE tableau_reader;
GRANT USAGE ON ALL SCHEMAS IN DATABASE analytics TO ROLE tableau_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics.dashboards TO ROLE tableau_reader;

-- Create service account (key-pair auth only; do not set a password)
CREATE USER svc_tableau
    DEFAULT_ROLE = tableau_reader
    DEFAULT_WAREHOUSE = bi_warehouse
    RSA_PUBLIC_KEY = 'MIIBIjAN...';

-- Apply network policy
ALTER USER svc_tableau SET NETWORK_POLICY = tableau_only;
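The RSA_PUBLIC_KEY value above is truncated. A typical way to generate the pair, assuming OpenSSL is available (file names are illustrative), is:

```shell
# Generate an unencrypted PKCS#8 private key and matching public key for
# the service account (file names are illustrative).
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -nocrypt -out tableau_rsa_key.p8
openssl rsa -in tableau_rsa_key.p8 -pubout -out tableau_rsa_key.pub
# Snowflake expects only the base64 body: strip the PEM header/footer lines.
PUB_KEY=$(grep -v '^-----' tableau_rsa_key.pub | tr -d '\n')
```

The private key file stays with Tableau's connection configuration; the PUB_KEY string is what goes into the RSA_PUBLIC_KEY property.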

7. Compliance Quick Reference

SOC 2 Mapping

Control ID | Snowflake Control | Guide Section
CC6.1 | MFA enforcement | 1.1
CC6.2 | RBAC with custom roles | 1.3
CC6.6 | Network policies | 2.1
CC7.2 | Login/query history monitoring | 5.1

PCI DSS Mapping

Control | Snowflake Control | Guide Section
8.3.1 | MFA for all access | 1.1
7.1 | Role-based access | 1.3
10.2 | Audit logging | 5.1
3.4 | Column masking | 4.1

Appendix A: Edition Compatibility

Control | Standard | Enterprise | Business Critical | VPS
MFA | ✅ | ✅ | ✅ | ✅
Network Policies | ✅ | ✅ | ✅ | ✅
Dynamic Masking | ✗ | ✅ | ✅ | ✅
Row Access Policies | ✗ | ✅ | ✅ | ✅
PrivateLink | ✗ | ✗ | ✅ | ✅
Tri-Secret Secure | ✗ | ✗ | ✅ | ✅
Customer-Managed Keys | ✗ | ✗ | ✅ | ✅

Appendix B: References

Security Incidents:

  • (2024) UNC5537 threat actor campaign used credential stuffing against Snowflake customer accounts lacking MFA. 165+ organizations affected including AT&T, Ticketmaster, Santander, and LendingTree. Over 500 million individuals had data exposed. AT&T paid $370,000 ransom. Root cause: customer accounts without MFA – not a Snowflake platform breach.

Changelog

Date | Version | Maturity | Changes | Author
2026-02-19 | 0.3.0 | draft | Migrate all remaining inline code to Code Packs (sections 1.1, 2.1, 2.2, 3.1, 4.3); zero inline code blocks remain | Claude Code (Opus 4.6)
2026-02-19 | 0.2.0 | draft | Migrate inline code to Code Packs (sections 1.2, 3.2, 4.2, 5.2, 6.2) | Claude Code (Opus 4.6)
2025-12-14 | 0.1.0 | draft | Initial Snowflake hardening guide | Claude Code (Opus 4.5)