Cursor Hardening Guide
AI code editor security hardening for code privacy, MCP security, agent sandboxing, API key management, and workspace trust
Product Editions Covered: Cursor Free, Cursor Pro, Cursor Teams, Cursor Enterprise
Overview
Cursor is an AI-powered code editor built on VSCode that integrates large language models (LLMs) directly into the development workflow. As organizations adopt AI coding assistants, securing these tools becomes critical—they process proprietary source code, handle API credentials, connect to multiple AI providers, and increasingly operate as autonomous agents capable of executing terminal commands, modifying files, and interacting with external services via MCP servers.
The threat landscape for AI code editors evolved rapidly in 2025-2026. Seven CVEs were assigned to Cursor in 2025 alone—including remote code execution via MCP prompt injection (CurXecute), persistent team-wide compromise through poisoned MCP configurations (MCPoison), sandbox escapes via shell builtins (NomShub), and case-sensitivity bypasses enabling sensitive file overwrites. A malicious extension on the Open VSX registry led to a confirmed $500,000 cryptocurrency theft. Security researchers demonstrated that invisible Unicode characters in .cursorrules files can weaponize AI code generation across entire teams.
This guide provides comprehensive hardening controls informed by vendor documentation, CVE analysis, security researcher disclosures, and industry frameworks including the OWASP Top 10 for LLM Applications (2025), the OWASP Top 10 for Agentic Applications (2026), NIST AI RMF, NIST SP 800-218A, and MITRE ATLAS v5.
Intended Audience
- Security engineers evaluating AI coding tools
- DevOps/Platform engineers managing developer environments
- Engineering managers responsible for tooling security
- Compliance teams assessing data privacy for AI tools
How to Use This Guide
- L1 (Crawl): Essential controls for all organizations using Cursor
- L2 (Walk): Enhanced controls for organizations with sensitive codebases
- L3 (Run): Strictest controls for regulated industries or high-security environments
Scope
This guide covers Cursor-specific security configurations including AI privacy settings, MCP server security, agent sandbox controls, API key management, rules file integrity, code privacy controls, workspace trust, extension supply chain security, and organizational policies. General VSCode security and operating system hardening are out of scope.
Why This Guide Exists
No CIS Benchmark or equivalent standard exists for AI code editors. As AI coding assistants become mission-critical development tools with autonomous agent capabilities, securing them is essential to:
- Prevent proprietary code leakage to third-party AI providers
- Protect API keys and credentials from exposure via AI context
- Defend against prompt injection attacks through MCP servers, rules files, and repository content
- Control autonomous agent actions (file writes, terminal execution, network access)
- Audit AI usage and code generation for compliance
- Manage extension supply chain risks in AI-augmented workflows
- Meet emerging regulatory requirements (EU AI Act, NIST AI RMF)
Table of Contents
- Authentication & Access Controls
- AI Privacy & Data Controls
- API Key & Credential Management
- MCP Server Security
- Agent & Sandbox Security
- Rules File & Project Security
- Workspace Trust & Code Security
- Extension & Integration Security
- Network & Telemetry Controls
- Monitoring & Audit Logging
- Organization & Team Controls
1. Authentication & Access Controls
1.1 Enforce Account Authentication for All Users
Profile Level: L1 (Crawl) NIST 800-53: IA-2
Description
Require all developers to authenticate with a Cursor account instead of using the editor anonymously. This enables audit logging, usage tracking, and centralized policy enforcement.
Rationale
Why This Matters:
- Anonymous usage prevents attribution of AI-generated code
- Account-based access enables usage monitoring and anomaly detection
- Required for enforcing organizational policies and compliance
Attack Prevented: Unauthorized tool usage, lack of accountability
Prerequisites
- Cursor account for each developer
- Decision on authentication method (email/password, GitHub OAuth, Google OAuth)
- Communication plan for mandatory account creation
ClickOps Implementation
Step 1: Require Login
- Open Cursor → Settings (Cmd/Ctrl + ,)
- Navigate to: Cursor Settings
- Ensure Sign in to Cursor is completed
- For team deployments: Use Cursor Teams or Enterprise to enforce authentication
Step 2: Configure Authentication Method
- Go to: https://cursor.com/settings
- Choose authentication provider:
- Email/Password: Basic authentication
- GitHub OAuth: Recommended for developer workflows
- Google Workspace: Recommended for G Suite organizations
- Complete authentication flow
Step 3: Verify Authentication Status
- In Cursor, check bottom-right status bar for account email
- Verify account is active and authenticated
Time to Complete: ~5 minutes per user
Validation & Testing
- Attempt to use Cursor features without authentication
- Verify AI features require authenticated account
- Confirm account shows in Cursor status bar
Expected result: All Cursor features require authenticated account
Operational Impact
| Aspect | Impact Level | Details |
|---|---|---|
| User Experience | Low | One-time authentication flow |
| Development Workflow | None | No workflow changes after authentication |
| Maintenance Burden | Low | Occasional re-authentication required |
| Rollback Difficulty | Easy | Sign out from account |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.1 | User identification and authentication |
| NIST 800-53 | IA-2 | Identification and authentication |
| ISO 27001 | A.9.2.1 | User registration and de-registration |
1.2 Enable Multi-Factor Authentication (MFA)
Profile Level: L2 (Walk) NIST 800-53: IA-2(1)
Description
Require MFA for Cursor account authentication to prevent account takeover via compromised credentials.
Rationale
Why This Matters:
- Developer accounts access proprietary source code
- Cursor accounts may have API keys for OpenAI, Anthropic, and other providers
- Account compromise could leak code via AI chat history
- StackAware researchers demonstrated an account takeover chain via login link interception
Attack Prevented: Credential stuffing, password reuse attacks, phishing, login link interception
ClickOps Implementation
Step 1: Enable MFA on Cursor Account
- Visit: https://cursor.com/settings/security
- Navigate to Multi-Factor Authentication
- Click Enable MFA
- Choose method:
- Authenticator App (TOTP): Recommended (Authy, 1Password, Google Authenticator)
- SMS: Available but less secure
- Scan QR code with authenticator app
- Enter verification code
- Save recovery codes in secure location (password manager)
Step 2: Verify MFA Enforcement
- Sign out of Cursor
- Sign back in
- Verify MFA prompt appears after password
Time to Complete: ~10 minutes
Validation & Testing
- Attempt login with only password - should prompt for MFA
- Test authenticator app generates valid codes
- Verify recovery codes work for MFA bypass
Expected result: All logins require MFA verification
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.1 | Multi-factor authentication |
| NIST 800-53 | IA-2(1) | Multi-factor authentication |
| PCI DSS | 8.3 | MFA for all access |
1.3 Configure SSO with SAML/OIDC (Enterprise)
Profile Level: L2 (Walk) NIST 800-53: IA-2, IA-8
Description
Integrate Cursor with your identity provider (IdP) via SAML 2.0 or OIDC for centralized authentication. Enforce SSO-only login to prevent local credential usage.
Rationale
Why This Matters:
- Centralizes authentication lifecycle — offboarding in the IdP immediately revokes Cursor access
- Enables conditional access policies (device compliance, location-based restrictions)
- Eliminates password reuse risk for Cursor accounts
- Required for SCIM provisioning (Control 1.4)
Attack Prevented: Orphaned accounts, credential reuse, unauthorized access after offboarding
Prerequisites
- Cursor Enterprise plan
- IdP with SAML 2.0 support (Okta, Microsoft Entra ID, Google Workspace, OneLogin)
ClickOps Implementation
Step 1: Configure SSO in Cursor Admin Dashboard
- Navigate to: https://cursor.com/dashboard → Settings → Single Sign-On (SSO)
- Select your IdP type (SAML 2.0 or OIDC)
- Enter IdP metadata URL or upload metadata XML
- Configure attribute mapping:
- email → user email
- name → display name
- Save configuration
Step 2: Configure IdP Side
- In your IdP, create a new SAML/OIDC application for Cursor
- Set ACS URL and Entity ID provided by Cursor dashboard
- Assign users/groups to the Cursor application
Step 3: Enforce SSO-Only Authentication
- In Cursor dashboard: Enable Require SSO for all team members
- This disables local login for all team members
Step 4: Verify SSO Flow
- Sign out of Cursor
- Attempt sign in — should redirect to IdP login
- Complete IdP authentication
- Verify automatic redirect back to Cursor with active session
Time to Complete: ~30 minutes
Validation & Testing
- Verify SSO login flow completes without errors
- Test that local login is blocked when SSO is enforced
- Offboard a test user in IdP — verify Cursor access is revoked
- Verify JIT (Just-in-Time) provisioning creates new user accounts on first SSO login
Expected result: All team members authenticate exclusively through SSO
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.1 | Centralized identity management |
| NIST 800-53 | IA-2 | Identification and authentication |
| NIST 800-53 | IA-8 | Identification and authentication (non-org users) |
| ISO 27001 | A.9.2.1 | User registration and de-registration |
1.4 Enable SCIM Provisioning (Enterprise)
Profile Level: L2 (Walk) NIST 800-53: AC-2
Description
Enable SCIM 2.0 to automate user lifecycle management (provisioning, deprovisioning, group sync) between your IdP and Cursor.
Rationale
Why This Matters:
- Automating deprovisioning ensures no orphaned accounts retain access to AI chat history or cached code context
- Group-based role assignment enforces RBAC consistently
- Reduces manual administration burden for large teams
Attack Prevented: Orphaned accounts, excessive access, manual provisioning errors
Prerequisites
- Cursor Enterprise plan
- SSO configured (Control 1.3)
- IdP with SCIM 2.0 support
ClickOps Implementation
Step 1: Generate SCIM Token
- In Cursor dashboard → Settings → SCIM Provisioning
- Generate a SCIM bearer token
- Copy the SCIM endpoint URL and token
Step 2: Configure IdP SCIM Client
- In your IdP, open the Cursor SAML application settings
- Enable SCIM provisioning
- Enter the SCIM endpoint URL and bearer token from Step 1
- Configure provisioning actions:
- Create Users: Enabled
- Update User Attributes: Enabled
- Deactivate Users: Enabled
- Map IdP groups to Cursor roles (Member, Admin)
Step 3: Test Provisioning
- Assign a test user to the Cursor application in IdP
- Verify user appears in Cursor dashboard within minutes
- Remove the test user from IdP
- Verify user is deactivated in Cursor
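The provisioning test above can also be exercised directly against the SCIM API. The sketch below issues a standard SCIM 2.0 Users query (RFC 7644); `SCIM_BASE_URL` and `SCIM_TOKEN` are placeholders — copy both values from the Cursor dashboard in Step 1:

```shell
# Smoke-test the SCIM endpoint with the bearer token from Step 1.
# A PASS confirms the token is valid and the endpoint accepts standard SCIM queries.
if [ -z "${SCIM_BASE_URL:-}" ] || [ -z "${SCIM_TOKEN:-}" ]; then
  RESULT="skipped"
  echo "SKIP: export SCIM_BASE_URL and SCIM_TOKEN from the Cursor dashboard first"
elif curl -sf --max-time 5 \
     -H "Authorization: Bearer ${SCIM_TOKEN}" \
     -H "Accept: application/scim+json" \
     "${SCIM_BASE_URL}/Users?startIndex=1&count=1" >/dev/null; then
  RESULT="ok"
  echo "PASS: SCIM endpoint answered — token and endpoint are valid"
else
  RESULT="failed"
  echo "FAIL: SCIM endpoint unreachable or token rejected"
fi
```

The `startIndex`/`count` pagination parameters are part of the SCIM 2.0 protocol itself, so this query works against any compliant endpoint.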
Time to Complete: ~30 minutes
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.2 | Prior to issuing access, authorization is verified |
| NIST 800-53 | AC-2 | Account management |
| ISO 27001 | A.9.2.6 | Removal or adjustment of access rights |
2. AI Privacy & Data Controls
2.1 Enable Privacy Mode for Sensitive Codebases
Profile Level: L1 (Crawl) NIST 800-53: SC-4
Description
Configure Cursor’s Privacy Mode to prevent code from being stored or used for training by third-party AI providers. When enabled, Cursor has zero data retention agreements with OpenAI, Anthropic, Google, xAI, and Fireworks. Code enters volatile memory for processing and is discarded.
Rationale
Why This Matters:
- Without Privacy Mode, Cursor may store codebase data, prompts, and code snippets to improve AI features and train models
- For accounts created after October 15, 2025, prompts may be shared with OpenAI when using their models
- Fireworks (Cursor’s inference provider) may collect prompts to improve inference speed
- Privacy Mode routes requests through separate server replicas where all logging functions are no-ops
- Compliance regulations (GDPR, HIPAA, SOC 2) may prohibit cloud AI processing of sensitive code
Attack Prevented: Data leakage to third-party AI providers, unauthorized code retention, training data contamination
Real-World Context:
- Samsung banned ChatGPT after engineers leaked sensitive code (April 2023)
- Over 50% of Cursor users already enable Privacy Mode, indicating widespread concern
- Internal repositories are 6x more likely to contain hardcoded secrets than public ones
Prerequisites
- Classification of codebases (public, internal, confidential)
- Decision on which repos require Privacy Mode
- Communication to developers about Privacy Mode policies
ClickOps Implementation
Step 1: Enable Privacy Mode Globally
- Open Cursor → Settings (Cmd/Ctrl + ,)
- Navigate to: Cursor Settings → General → Privacy Mode
- Enable: Privacy Mode
- When enabled, zero data retention agreements apply with all AI providers
- Code enters volatile memory only for processing, then is discarded
- Cursor’s servers run separate replicas where logging is disabled
- For Teams/Enterprise: Enable org-wide enforcement in admin dashboard to prevent individual override
Step 2: Configure Per-Workspace Privacy
For granular control, add privacy settings to workspace configuration:
Code Pack: Config
# .vscode/settings.json — per-workspace Privacy Mode enforcement
cat > .vscode/settings.json <<'SETTINGS'
{
"cursor.general.privacyMode": "enabled",
"cursor.general.enableShadowWorkspace": false,
"cursor.general.allowAnonymousUsage": false
}
SETTINGS
echo "Privacy Mode workspace settings written to .vscode/settings.json"
# Verify Privacy Mode is active across all workspace settings files
echo "=== Checking for Privacy Mode in settings files ==="
for f in \
"${HOME}/Library/Application Support/Cursor/User/settings.json" \
"${HOME}/.config/Cursor/User/settings.json" \
".vscode/settings.json"; do
if [ -f "$f" ]; then
PRIVACY=$(grep -o '"cursor.general.privacyMode"[[:space:]]*:[[:space:]]*"[^"]*"' "$f" 2>/dev/null || echo "not set")
echo " $f: $PRIVACY"
fi
done
echo ""
echo "=== Checking network traffic to AI provider APIs ==="
echo "Run one of these to verify no code leaks to cloud providers:"
echo " macOS: lsof -i -n -P | grep -i cursor | grep -E '(openai|anthropic)'"
echo " Linux: ss -tnp | grep cursor | grep -E '(openai|anthropic)'"
Step 3: Verify Privacy Mode Active
- Check Cursor status bar for Privacy Mode indicator
- Run the verification commands from the Code Pack
Time to Complete: ~5 minutes per workspace
Validation & Testing
- Re-run the verification commands from the Step 2 Code Pack
- Confirm each settings file reports cursor.general.privacyMode set to enabled
- Confirm no direct connections to AI provider APIs outside Cursor’s proxy (lsof/ss commands in the Code Pack)
Expected result: No code sent to external AI services for retention or training
Monitoring & Maintenance
Alert on Privacy Mode Bypass:
- Monitor for network connections to api.openai.com and api.anthropic.com that bypass Cursor’s proxy
- Use endpoint security tools to detect unauthorized AI API calls
Important caveat: Regardless of model selection, some requests may route through OpenAI or Anthropic for background summarization tasks. In Privacy Mode, these still have zero data retention, but the routing itself is worth noting for strict data flow requirements.
Maintenance schedule:
- Weekly: Verify Privacy Mode still enabled in settings
- Monthly: Audit developer workspaces for privacy settings compliance
- Quarterly: Review Privacy Mode policy effectiveness
Operational Impact
| Aspect | Impact Level | Details |
|---|---|---|
| Developer Productivity | Low | AI features remain functional; only data retention changes |
| Code Quality | None | AI assistance quality is identical |
| Maintenance Burden | Low | Once configured, no ongoing maintenance |
| Rollback Difficulty | Easy | Disable Privacy Mode in settings |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.7 | Data transmission controls |
| NIST 800-53 | SC-4 | Information in shared system resources |
| GDPR | Article 28 | Processor obligations (AI providers as processors) |
| ISO 27001 | A.13.2.1 | Information transfer policies |
| NIST AI RMF | GOVERN 1.7 | AI data governance policies |
| OWASP LLM | LLM02 | Sensitive information disclosure |
2.2 Configure AI Provider Restrictions
Profile Level: L2 (Walk) NIST 800-53: SC-7
Description
Restrict which AI providers Cursor can use. Allow only approved providers with acceptable data processing agreements.
Rationale
Why This Matters:
- Cursor routes requests to multiple providers: OpenAI, Anthropic, Google (Gemini), xAI, and Fireworks
- Different providers have varying data retention, training, and compliance policies
- Organizations may have specific vendor approval processes
ClickOps Implementation
Step 1: Review AI Provider Settings
- Open Cursor → Settings → Cursor Settings
- Navigate to: Models
- Review enabled providers and models
Step 2: Restrict to Approved Providers
- For Enterprise: Use admin dashboard to configure allowed models at the organization level
- For individual users: Disable BYOK (Bring Your Own Key) for unapproved providers
Step 3: Verify Provider Restrictions
- Attempt to use disabled provider in chat
- Should show error: “Model not available”
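Provider restrictions can be spot-checked from the command line by scanning local settings files for BYOK keys or custom endpoints. The setting-name patterns below are assumptions — confirm the exact keys your Cursor version writes before relying on this:

```shell
# Scan Cursor settings files for provider API keys or custom base URLs (BYOK).
# Pattern matches any quoted setting name mentioning a provider plus a key/URL field.
STATUS=""
for f in \
  "${HOME}/Library/Application Support/Cursor/User/settings.json" \
  "${HOME}/.config/Cursor/User/settings.json" \
  ".vscode/settings.json"; do
  [ -f "$f" ] || continue
  if grep -nE '"[^"]*(openai|anthropic|google|azure)[^"]*(apiKey|baseUrl|BaseUrl)[^"]*"[[:space:]]*:' "$f"; then
    echo "REVIEW: BYOK/provider configuration found in $f"
    STATUS="review"
  fi
done
[ -n "$STATUS" ] || STATUS="clean"
echo "Provider audit: $STATUS"
```

A "review" result is not necessarily a violation — it flags any locally configured provider for comparison against your approved list.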
Recommended Provider Security Posture
| Provider | Data Retention | Training on Data | SOC 2 | Zero Retention Agreement | Recommendation |
|---|---|---|---|---|---|
| OpenAI API | 30 days (default) | No (API) | Yes | Yes (via Cursor Privacy Mode) | Approved with Privacy Mode |
| Anthropic | Not used for training | No | Yes | Yes (via Cursor Privacy Mode) | Approved |
| Google Gemini | Varies by tier | Enterprise: No | Yes | Via Vertex AI | Review DPA |
| Fireworks | Temporary (inference) | No | Yes | Yes (via Cursor) | Approved with Privacy Mode |
| Local Models | Local only | No | N/A | N/A | Highest security (L3) |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC9.2 | Third-party vendor management |
| NIST 800-53 | SA-9 | External system services |
| OWASP LLM | LLM03 | Supply chain vulnerabilities |
2.3 Configure .cursorignore for Sensitive Files
Profile Level: L1 (Crawl) NIST 800-53: AC-3, SC-4
Description
Create a .cursorignore file to exclude sensitive files and directories from being sent to Cursor’s servers for AI processing, indexing, or embedding. This is a critical data boundary control.
Rationale
Why This Matters:
- Cursor sends code context (recently viewed files, surrounding code) to AI providers on every keystroke for Tab completions
- Codebase indexing uploads code chunks for embedding computation
- Without .cursorignore, secrets, credentials, and proprietary configuration may be included in AI context
- .cursorignore provides a hard block — AI cannot see excluded files even if explicitly referenced
Known Limitation: .cursorignore is described as “best-effort” by Cursor. Bugs may allow ignored files through in certain cases (see GHSA-vhc2-fjv4-wqch). Use .cursorignore as defense-in-depth alongside secret scanning and Privacy Mode, not as a sole control.
Attack Prevented: Credential leakage via AI context, sensitive data exposure to AI providers
ClickOps Implementation
Step 1: Create .cursorignore File
Add a .cursorignore file to your project root:
Code Pack: Config
# Create a comprehensive .cursorignore file to exclude sensitive files from AI context
cat > .cursorignore <<'IGNORE'
# === Credentials & Secrets ===
.env
.env.*
.env.local
.env.production
**/.env
**/secrets/
**/credentials/
*.pem
*.key
*.p12
*.pfx
*.jks
id_rsa*
id_ed25519*
*.keystore
# === Cloud & Infrastructure Configs ===
.aws/
.azure/
.gcloud/
kubeconfig*
terraform.tfstate*
terraform.tfvars
*.auto.tfvars
# === Internal Configuration ===
**/config/production.*
**/config/secrets.*
docker-compose.override.yml
# === Cursor / IDE Configuration ===
.cursor/mcp.json
.vscode/launch.json
# === Compliance & Legal ===
**/compliance/
**/legal/
**/audit/
IGNORE
echo ".cursorignore written with sensitive file exclusions"
# Verify .cursorignore is present and covers critical patterns
echo "=== .cursorignore Audit ==="
if [ ! -f .cursorignore ]; then
echo "FAIL: .cursorignore not found in project root"
exit 1
fi
REQUIRED_PATTERNS=(".env" "*.pem" "*.key" "id_rsa" "terraform.tfstate" ".aws/")
MISSING=0
for pattern in "${REQUIRED_PATTERNS[@]}"; do
if ! grep -qF "$pattern" .cursorignore; then
echo " MISSING: $pattern not in .cursorignore"
MISSING=$((MISSING + 1))
fi
done
if [ "$MISSING" -eq 0 ]; then
echo "PASS: All critical patterns present in .cursorignore"
else
echo "WARN: $MISSING critical patterns missing from .cursorignore"
fi
Step 2: Also Create .cursorindexingignore (Optional)
- .cursorignore — hard block from both AI access and indexing
- .cursorindexingignore — excludes from indexing only; files remain accessible to AI features if explicitly referenced
Use .cursorignore for secrets and credentials. Use .cursorindexingignore for large non-sensitive files (vendor directories, build artifacts).
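A starting .cursorindexingignore for bulky, non-sensitive paths might look like the sketch below; the patterns are examples — tune them to your repository layout:

```shell
# .cursorindexingignore — exclude large, non-sensitive paths from indexing only.
# Unlike .cursorignore, these files stay available to AI features when explicitly referenced.
cat > .cursorindexingignore <<'IGNORE'
# === Dependencies & vendored code ===
node_modules/
vendor/
third_party/
# === Build artifacts ===
dist/
build/
out/
*.min.js
*.map
# === Generated reports ===
coverage/
IGNORE
echo ".cursorindexingignore written"
```

Keeping these paths out of the index reduces upload volume and embedding cost without creating a security boundary — secrets still belong in .cursorignore.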
Step 3: Commit to Repository
- Add .cursorignore to version control
- Standardize across all organizational repositories
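Standardization can be enforced with a CI step that gates merges on the file being present. A minimal sketch to drop into a pipeline script (Control 2.3 refers to this section):

```shell
# CI gate: flag repositories that lack a .cursorignore at the project root
if [ -f .cursorignore ]; then
  echo "PASS: .cursorignore present"
  CI_STATUS=0
else
  echo "FAIL: .cursorignore missing — add one per the hardening guide (Control 2.3)"
  CI_STATUS=1
fi
# In a real pipeline, end the step with: exit $CI_STATUS
```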
Time to Complete: ~10 minutes
Validation & Testing
- Re-run the audit portion of the Step 1 Code Pack — it should report PASS with all critical patterns present
- Reference an excluded file (e.g., .env) in Cursor chat and confirm its contents do not appear in the response
Expected result: All critical patterns present and sensitive files excluded from AI context
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.1 | Logical access controls |
| NIST 800-53 | AC-3 | Access enforcement |
| NIST 800-53 | SC-4 | Information in shared resources |
| OWASP LLM | LLM02 | Sensitive information disclosure |
2.4 Enable Local AI Models (L3 Maximum Security)
Profile Level: L3 (Run) NIST 800-53: SC-4, SC-7
Description
Configure Cursor to use only local AI models (running on-premises or on developer machines) instead of cloud-based AI services. This provides maximum code privacy — zero code leaves the organization’s network.
Rationale
Why This Matters:
- Zero code leaves the organization’s network
- Complete control over model and data processing
- Meets strictest compliance requirements (defense, healthcare, financial)
Use Cases:
- Government contractors with classified code
- Healthcare orgs processing PHI/ePHI
- Financial institutions with proprietary trading algorithms
ClickOps Implementation
Step 1: Install Local Model Backend
Options:
- Ollama: Local LLM runtime (supports CodeLlama, Qwen2.5-Coder, DeepSeek-Coder, etc.)
- LM Studio: Local model management with OpenAI-compatible API
- Custom OpenAI-compatible API: Self-hosted models (vLLM, TGI)
Step 2: Configure Cursor to Use Local Model
- Open Cursor → Settings
- Navigate to: Models → OpenAI API Key
- Set custom base URL pointing to local endpoint (e.g., http://localhost:11434/v1)
- Disable all cloud AI providers
Step 3: Verify Local Model Usage
- Use Cursor AI chat
- Check network traffic — should only connect to localhost
- Verify no external API calls
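The localhost-only check can be scripted. This sketch probes Ollama's OpenAI-compatible /v1/models endpoint on its default port (11434 — adjust the URL for LM Studio or a self-hosted vLLM/TGI server):

```shell
# Probe the local model endpoint; a reachable endpoint means Cursor can route to it
LOCAL_ENDPOINT="${LOCAL_ENDPOINT:-http://localhost:11434/v1}"
if curl -sf --max-time 3 "${LOCAL_ENDPOINT}/models" >/dev/null 2>&1; then
  RESULT="up"
  echo "PASS: local model endpoint reachable at ${LOCAL_ENDPOINT}"
else
  RESULT="down"
  echo "WARN: nothing answering at ${LOCAL_ENDPOINT} — is the local model server running?"
fi
```

Pair this with the network-traffic checks above: the endpoint should answer on localhost while no traffic leaves for cloud provider APIs.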
Time to Complete: ~1 hour (model download + configuration)
Performance Considerations
| Model | Parameters | RAM Required | Performance | Use Case |
|---|---|---|---|---|
| Qwen2.5-Coder | 7B | 8 GB | Fast, good quality | Quick completions |
| DeepSeek-Coder-V2 | 16B | 16 GB | Balanced | General development |
| Qwen2.5-Coder | 32B | 32 GB+ | Slower, high quality | Complex code generation |
| CodeLlama | 70B | 64 GB+ | Slow, highest quality | Critical code review |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | SC-4 | Information in shared system resources |
| ITAR | Data Sovereignty | Code never leaves jurisdiction |
| FedRAMP | SC-7 | Boundary protection |
| NIST AI RMF | GOVERN 1.4 | AI deployment controls |
3. API Key & Credential Management
3.1 Use Environment Variables for API Keys (Never Hardcode)
Profile Level: L1 (Crawl) NIST 800-53: IA-5(1)
Description
Store Cursor AI provider API keys in environment variables or secure credential stores, never hardcoded in settings files committed to version control.
Rationale
Why This Matters:
- API keys in committed files leak to version control history
- Cursor settings files may sync to cloud or backups
- Hardcoded keys are difficult to rotate
- Developers using AI tools leak secrets at 2x the baseline rate
Attack Prevented: API key exposure via Git history, backup theft
ClickOps Implementation
Step 1: Remove Hardcoded API Keys from Settings
- Check Cursor settings for hardcoded keys:
Code Pack: Config
# INSECURE: Never hardcode API keys in Cursor settings files
# This is an example of what to SEARCH FOR and REMOVE:
# "cursor.openai.apiKey": "sk-proj-abc123..."
# "cursor.anthropic.apiKey": "sk-ant-abc123..."
# Add API keys as environment variables in your shell profile
# For zsh (default on macOS):
cat >> ~/.zshrc <<'ENVVARS'
# Cursor AI Provider API Keys (rotate quarterly — see HTH 3.2)
export OPENAI_API_KEY="sk-proj-YOUR-KEY-HERE"
export ANTHROPIC_API_KEY="sk-ant-YOUR-KEY-HERE"
ENVVARS
source ~/.zshrc
echo "API keys added to ~/.zshrc as environment variables"
# Verify no hardcoded API keys exist in Cursor settings files
echo "=== Scanning for hardcoded API keys in Cursor settings ==="
SETTINGS_PATHS=(
"${HOME}/Library/Application Support/Cursor/User/settings.json"
"${HOME}/.config/Cursor/User/settings.json"
".vscode/settings.json"
)
FOUND=0
for f in "${SETTINGS_PATHS[@]}"; do
if [ -f "$f" ]; then
if grep -qE '(sk-proj-|sk-ant-|OPENAI_API_KEY|ANTHROPIC_API_KEY)' "$f" 2>/dev/null; then
echo " FAIL: Hardcoded key found in $f"
FOUND=$((FOUND + 1))
fi
fi
done
if [ "$FOUND" -eq 0 ]; then
echo " PASS: No hardcoded API keys found in settings files"
fi
# Also check git history for accidentally committed keys
echo ""
echo "=== Checking git history for leaked keys ==="
if command -v git &>/dev/null && git rev-parse --is-inside-work-tree &>/dev/null; then
LEAKED=$(git log --all -p 2>/dev/null | grep -cE '(sk-proj-|sk-ant-)[A-Za-z0-9]{20,}')
if [ "${LEAKED:-0}" -gt 0 ]; then
echo " FAIL: $LEAKED potential API key(s) found in git history"
echo " ACTION: Rotate keys immediately and use git-filter-repo to purge"
else
echo " PASS: No API keys found in git history"
fi
fi
- Remove any hardcoded API keys
Step 2: Use Environment Variables
The Step 1 Code Pack already appends OPENAI_API_KEY and ANTHROPIC_API_KEY to ~/.zshrc as environment variables — no separate configuration is needed here. Confirm Cursor picks them up by restarting the editor from a shell where the variables are exported.
Step 3: Verify API Keys Not in Settings
Code Pack: Config
# INSECURE: Never hardcode API keys in Cursor settings files
# This is an example of what to SEARCH FOR and REMOVE:
# "cursor.openai.apiKey": "sk-proj-abc123..."
# "cursor.anthropic.apiKey": "sk-ant-abc123..."
# Add API keys as environment variables in your shell profile
# For zsh (default on macOS):
cat >> ~/.zshrc <<'ENVVARS'
# Cursor AI Provider API Keys (rotate quarterly — see HTH 3.2)
export OPENAI_API_KEY="sk-proj-YOUR-KEY-HERE"
export ANTHROPIC_API_KEY="sk-ant-YOUR-KEY-HERE"
ENVVARS
source ~/.zshrc
echo "API keys added to ~/.zshrc as environment variables"
# Verify no hardcoded API keys exist in Cursor settings files
echo "=== Scanning for hardcoded API keys in Cursor settings ==="
SETTINGS_PATHS=(
"${HOME}/Library/Application Support/Cursor/User/settings.json"
"${HOME}/.config/Cursor/User/settings.json"
".vscode/settings.json"
)
FOUND=0
for f in "${SETTINGS_PATHS[@]}"; do
if [ -f "$f" ]; then
if grep -qE '(sk-proj-|sk-ant-|OPENAI_API_KEY|ANTHROPIC_API_KEY)' "$f" 2>/dev/null; then
echo " FAIL: Hardcoded key found in $f"
FOUND=$((FOUND + 1))
fi
fi
done
if [ "$FOUND" -eq 0 ]; then
echo " PASS: No hardcoded API keys found in settings files"
fi
# Also check git history for accidentally committed keys
echo ""
echo "=== Checking git history for leaked keys ==="
if command -v git &>/dev/null && git rev-parse --is-inside-work-tree &>/dev/null; then
# grep -c always prints a count (0 on no match), so no fallback echo is needed
LEAKED=$(git log --all -p 2>/dev/null | grep -cE '(sk-proj-|sk-ant-)[A-Za-z0-9]{20,}')
LEAKED=${LEAKED:-0}
if [ "$LEAKED" -gt 0 ]; then
echo " FAIL: $LEAKED potential API key(s) found in git history"
echo " ACTION: Rotate keys immediately and use git-filter-repo to purge"
else
echo " PASS: No API keys found in git history"
fi
fi
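The ACTION guidance above points at `git-filter-repo`. A minimal purge sketch, assuming a placeholder key string (rotate the key first; history rewriting is destructive, so work in a fresh clone):

```shell
# Build a replacement map for git-filter-repo (one "literal==>replacement" per line).
# LEAKED_KEY is a placeholder -- substitute the exact string found in git history.
LEAKED_KEY="sk-proj-LEAKED-KEY-HERE"
REPLACEMENTS="${TMPDIR:-/tmp}/cursor-key-replacements.txt"
printf '%s==>***REMOVED***\n' "$LEAKED_KEY" > "$REPLACEMENTS"
echo "Replacement map written to $REPLACEMENTS:"
cat "$REPLACEMENTS"
# Then, inside a FRESH clone (git-filter-repo refuses non-fresh clones by default):
#   git filter-repo --replace-text "$REPLACEMENTS"
#   git push --force --all   # coordinate with the team before force-pushing
```

After the rewrite, every collaborator must re-clone; the old key remains valid until revoked, which is why rotation comes first.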
Time to Complete: ~10 minutes
Monitoring & Maintenance
- Monthly: Rotate API keys
- Quarterly: Audit environment variable security
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| SOC 2 | CC6.1 | Secret management |
| NIST 800-53 | IA-5(1) | Password-based authentication |
| PCI DSS | 8.2.1 | Render credentials unreadable |
3.2 Rotate AI Provider API Keys Quarterly
Profile Level: L2 (Walk) NIST 800-53: IA-5(1)
Description
Establish a quarterly rotation schedule for all AI provider API keys used with Cursor.
Rationale
Why This Matters:
- Limits exposure window if keys compromised
- Follows secret management best practices
- Required by many compliance frameworks
ClickOps Implementation
Step 1: Create API Key Rotation Schedule
- Document all API keys in use (OpenAI, Anthropic, Google, custom providers)
- Set quarterly rotation reminders
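The reminder schedule can be backed by a simple marker-file check. This is a sketch only: the `~/.config/cursor-key-rotation` directory and the 90-day window are conventions of the sketch, not Cursor features:

```shell
# Track last-rotation dates as marker-file timestamps and flag overdue keys.
ROTATION_DIR="${ROTATION_DIR:-$HOME/.config/cursor-key-rotation}"
mkdir -p "$ROTATION_DIR"
# Run the touch commands each time you actually rotate the corresponding key:
touch "$ROTATION_DIR/OPENAI_API_KEY"
touch "$ROTATION_DIR/ANTHROPIC_API_KEY"
# Quarterly check: any marker older than 90 days means a rotation was missed.
STALE=$(find "$ROTATION_DIR" -type f -mtime +90 2>/dev/null)
if [ -n "$STALE" ]; then
  echo "FAIL: overdue rotations:"
  echo "$STALE"
else
  echo "PASS: all tracked keys rotated within the last 90 days"
fi
```

Wiring the check into a weekly cron job or CI schedule turns the reminder into an alert.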
Step 2: Rotate Keys
For OpenAI:
- Visit: https://platform.openai.com/api-keys
- Click Create new secret key
- Update environment variable:
Code Pack: Config
# Update environment variable with new key after generating on provider dashboard
# For zsh:
sed -i '' 's|^export OPENAI_API_KEY=.*|export OPENAI_API_KEY="sk-proj-NEW-KEY-HERE"|' ~/.zshrc
source ~/.zshrc
echo "OpenAI API key rotated in ~/.zshrc"
# For bash:
# sed -i 's|^export OPENAI_API_KEY=.*|export OPENAI_API_KEY="sk-proj-NEW-KEY-HERE"|' ~/.bashrc
# source ~/.bashrc
- Restart Cursor
- Verify new key works
- Revoke old key on OpenAI platform
For Anthropic:
- Visit: https://console.anthropic.com/settings/keys
- Generate new key → Update environment → Revoke old key
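The Anthropic rotation mirrors the OpenAI `sed` snippet above. This sketch handles both GNU and BSD `sed` and, to stay safe to run as-is, operates on a scratch file until you point `RC_FILE` at your real profile:

```shell
# Rotate ANTHROPIC_API_KEY in-place in a shell profile.
# Set RC_FILE=~/.zshrc (or ~/.bashrc) for real use; defaults to a scratch demo file.
RC_FILE="${RC_FILE:-${TMPDIR:-/tmp}/demo-zshrc}"
grep -q '^export ANTHROPIC_API_KEY=' "$RC_FILE" 2>/dev/null || \
  echo 'export ANTHROPIC_API_KEY="sk-ant-OLD-KEY"' >> "$RC_FILE"
NEW_KEY="sk-ant-NEW-KEY-HERE"   # paste the key generated on the Anthropic console
if sed --version >/dev/null 2>&1; then
  # GNU sed
  sed -i "s|^export ANTHROPIC_API_KEY=.*|export ANTHROPIC_API_KEY=\"${NEW_KEY}\"|" "$RC_FILE"
else
  # BSD sed (macOS)
  sed -i '' "s|^export ANTHROPIC_API_KEY=.*|export ANTHROPIC_API_KEY=\"${NEW_KEY}\"|" "$RC_FILE"
fi
echo "Rotated ANTHROPIC_API_KEY in $RC_FILE; restart Cursor, verify, then revoke the old key"
```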
Time to Complete: ~15 minutes per provider
3.3 Monitor API Key Usage and Costs
Profile Level: L2 (Walk)
Description
Monitor AI provider API usage to detect anomalies (unusual spikes, unauthorized usage, cost overruns).
ClickOps Implementation
Step 1: Enable Usage Tracking
For OpenAI:
- Visit: https://platform.openai.com/usage
- Set up billing alerts:
- Soft limit: Warning at $X per month
- Hard limit: Block at $Y per month
For Anthropic:
- Visit: https://console.anthropic.com/settings/billing
- Configure usage alerts
Step 2: Review Usage Regularly
- Daily: Check for cost spikes
- Weekly: Review usage patterns
- Monthly: Analyze per-user usage (if using organization accounts)
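Provider dashboards handle alerting, but a local spot-check helps during the daily review. As one hedged approach, export daily spend into a CSV (the `date,cost_usd` layout below is an assumed export shape, not a documented provider format) and flag outlier days:

```shell
# Flag any day whose spend exceeds 3x the file-wide daily average.
USAGE_CSV="${USAGE_CSV:-${TMPDIR:-/tmp}/ai-usage.csv}"
# Sample data so the check is runnable; replace with your real dashboard export.
[ -f "$USAGE_CSV" ] || cat > "$USAGE_CSV" <<'CSV'
2026-03-01,4.10
2026-03-02,3.85
2026-03-03,4.20
2026-03-04,3.95
2026-03-05,40.00
CSV
awk -F, '{sum += $2; n++; cost[$1] = $2}
  END {
    avg = sum / n
    printf "Average daily spend: $%.2f\n", avg
    for (d in cost)
      if (cost[d] > 3 * avg)
        printf "WARN: %s spent $%.2f (more than 3x average)\n", d, cost[d]
  }' "$USAGE_CSV"
```

With the sample data, the final day is flagged as an outlier while normal days pass silently.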
4. MCP Server Security
4.1 Audit and Allowlist MCP Servers
Profile Level: L1 (Crawl) NIST 800-53: CM-7, SA-9
Description
Audit all configured MCP (Model Context Protocol) servers and restrict usage to an approved allowlist. MCP servers extend Cursor’s capabilities by connecting to external tools and services, but represent one of the most significant attack surfaces — three CVEs in 2025 directly exploited MCP configuration.
Rationale
Why This Matters:
- MCP server installation (via `pip install` or `npx`) executes arbitrary code with full user permissions — no sandboxing by default
- CVE-2025-54135 (CurXecute): Prompt injection via MCP-connected services (e.g., Slack) rewrote `mcp.json` and executed arbitrary commands
- CVE-2025-54136 (MCPoison): After initial approval, attackers silently swapped benign MCP configs with malicious payloads for persistent RCE
- CVE-2025-64106: Insufficient validation in MCP deep-link handling enabled malicious server impersonation
- 53% of MCP servers rely on static API keys or PATs that are rarely rotated
- 43% of tested MCP implementations had unsafe shell calls exposing them to command injection
Attack Prevented: Remote code execution via MCP prompt injection, persistent team-wide compromise, supply chain poisoning
Real-World Context:
- Between January and February 2026, over 30 CVEs were filed targeting MCP servers, clients, and infrastructure
- Among 2,614 MCP implementations surveyed, 82% use file operations vulnerable to path traversal
Prerequisites
- Inventory of all MCP servers in use across the organization
- Enterprise plan for centralized MCP allowlisting
ClickOps Implementation
Step 1: Audit Existing MCP Configurations
Code Pack: Config
# Audit all MCP server configurations across project and global scopes
echo "=== MCP Configuration Audit ==="
# Project-level MCP config
PROJECT_MCP=".cursor/mcp.json"
if [ -f "$PROJECT_MCP" ]; then
echo "Project MCP config found: $PROJECT_MCP"
echo " Configured servers:"
jq -r '.mcpServers // {} | keys[]' "$PROJECT_MCP" 2>/dev/null || echo " (invalid JSON)"
echo ""
echo " Full config (review for suspicious commands/URLs):"
cat "$PROJECT_MCP"
else
echo "No project-level MCP config found (OK)"
fi
echo ""
# Global MCP config
GLOBAL_MCP="${HOME}/.cursor/mcp.json"
if [ -f "$GLOBAL_MCP" ]; then
echo "Global MCP config found: $GLOBAL_MCP"
echo " Configured servers:"
jq -r '.mcpServers // {} | keys[]' "$GLOBAL_MCP" 2>/dev/null || echo " (invalid JSON)"
echo ""
echo " Full config (review for suspicious commands/URLs):"
cat "$GLOBAL_MCP"
else
echo "No global MCP config found (OK if MCP not used)"
fi
echo ""
echo "=== Review Checklist ==="
echo " [ ] Every MCP server is from a trusted source"
echo " [ ] No unexpected 'command' entries with curl, wget, or shell pipes"
echo " [ ] No servers pointing to unknown URLs or IP addresses"
echo " [ ] Config files are not writable by other users (check permissions)"
Step 2: Establish MCP Allowlist
- For Enterprise: Navigate to admin dashboard → MCP Servers → Configure allowlist
- Add only vetted, organizationally-approved MCP servers
- Block all other MCP server installations
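On Free/Pro plans there is no central enforcement point, but the allowlist can still be checked locally. In this sketch the allowlist file name and the demo config path are conventions of the sketch; point `MCP_JSON` at `.cursor/mcp.json` or `~/.cursor/mcp.json` for real use:

```shell
# Compare configured MCP servers against a local allowlist (one name per line).
ALLOWLIST="${ALLOWLIST:-${TMPDIR:-/tmp}/mcp-allowlist.txt}"
MCP_JSON="${MCP_JSON:-${TMPDIR:-/tmp}/demo-mcp-config.json}"
# Sample fixtures so the check is runnable as-is.
[ -f "$ALLOWLIST" ] || printf 'github\nfilesystem\n' > "$ALLOWLIST"
[ -f "$MCP_JSON" ] || echo '{"mcpServers": {"github": {}, "unvetted-server": {}}}' > "$MCP_JSON"
if ! command -v jq >/dev/null 2>&1; then
  echo "jq is required for this check"
else
  VIOLATIONS=0
  while IFS= read -r server; do
    if ! grep -qxF "$server" "$ALLOWLIST"; then
      echo "FAIL: unapproved MCP server configured: $server"
      VIOLATIONS=$((VIOLATIONS + 1))
    fi
  done < <(jq -r '.mcpServers // {} | keys[]' "$MCP_JSON")
  if [ "$VIOLATIONS" -eq 0 ]; then
    echo "PASS: all configured MCP servers are on the allowlist"
  else
    echo "Total unapproved servers: $VIOLATIONS"
  fi
fi
```

Running this in CI against the project-level `.cursor/mcp.json` turns the allowlist into a merge gate.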
Step 3: Secure MCP Config File Permissions
Code Pack: Config
# Lock down MCP config file permissions to prevent unauthorized modification
echo "=== Securing MCP config file permissions ==="
for MCP_FILE in ".cursor/mcp.json" "${HOME}/.cursor/mcp.json"; do
if [ -f "$MCP_FILE" ]; then
chmod 600 "$MCP_FILE"
echo " Set $MCP_FILE to 600 (owner read/write only)"
fi
done
Step 4: Monitor MCP Configuration Changes
- Set up file integrity monitoring on `.cursor/mcp.json` (project and global)
- Alert on any modification to MCP configuration files
- Require re-approval for any MCP configuration change (enforced in Cursor 1.3+)
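A dedicated FIM agent is ideal, but a checksum baseline already catches the silent config swap at the heart of MCPoison. A minimal sketch, demonstrated on a scratch file (set `MCP_FILE` to the real config path); the baseline location is a convention of the sketch:

```shell
# Baseline-and-verify integrity check for an MCP config file.
# Demonstrated on a scratch copy -- set MCP_FILE=.cursor/mcp.json for real use.
MCP_FILE="${MCP_FILE:-${TMPDIR:-/tmp}/demo-mcp.json}"
[ -f "$MCP_FILE" ] || echo '{"mcpServers": {}}' > "$MCP_FILE"
BASELINE="${MCP_FILE}.sha256"
# macOS ships shasum; most Linux distributions ship sha256sum.
if command -v shasum >/dev/null 2>&1; then SHA="shasum -a 256"; else SHA="sha256sum"; fi
if [ ! -f "$BASELINE" ]; then
  $SHA "$MCP_FILE" > "$BASELINE"
  echo "Baseline recorded: $BASELINE -- store a copy out-of-band"
fi
if $SHA -c "$BASELINE" >/dev/null 2>&1; then
  echo "PASS: $MCP_FILE matches baseline"
else
  echo "FAIL: $MCP_FILE changed since baseline -- review before re-approving"
fi
```

Storing the baseline outside the workspace matters: an attacker who can rewrite `mcp.json` can rewrite a co-located checksum too.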
Time to Complete: ~30 minutes
Validation & Testing
- Verify only approved MCP servers are configured
- Attempt to add an unapproved MCP server — should be blocked (Enterprise)
- Modify an approved MCP config — should trigger re-approval prompt
Expected result: Only vetted MCP servers active, all changes require explicit approval
Operational Impact
| Aspect | Impact Level | Details |
|---|---|---|
| Developer Workflow | Medium | Must request approval for new MCP servers |
| Security Posture | Critical Improvement | Prevents the most exploited attack vector in 2025 |
| Maintenance Burden | Medium | Ongoing review of MCP server requests |
| Rollback Difficulty | Easy | Re-enable MCP servers as needed |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | CM-7 | Least functionality |
| NIST 800-53 | SA-9 | External system services |
| OWASP LLM | LLM03 | Supply chain vulnerabilities |
| OWASP Agentic | ASI05 | Supply chain risks |
| MITRE ATLAS | AML.T0063 | Publish poisoned AI agent tool |
4.2 Enable MCP Tool Protection
Profile Level: L1 (Crawl) NIST 800-53: AC-6
Description
Enable MCP Tool Protection to require explicit user approval before any MCP tool executes. This prevents prompt injection from triggering MCP tool calls without user consent.
Rationale
Why This Matters:
- Without tool protection, a prompt injection payload in a repository file or chat message can trigger MCP tools automatically
- MCP tools can read files, execute commands, and make network requests with the developer’s full privileges
- Tool Protection ensures human-in-the-loop for all MCP operations
ClickOps Implementation
Step 1: Enable MCP Tool Protection
- Open Cursor → Settings
- Navigate to: Features → MCP
- Ensure Require approval for tool calls is enabled (this is now the default in Cursor 1.3+)
Step 2: Also Enable Dotfile Protection
- In the same settings area, enable Dotfile Protection
- This prevents the AI from modifying sensitive files: `.env`, `.ssh/config`, `.aws/credentials`, etc.
Time to Complete: ~5 minutes
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | AC-6 | Least privilege |
| OWASP Agentic | ASI02 | Tool misuse |
| OWASP Agentic | ASI03 | Identity and privilege abuse |
5. Agent & Sandbox Security
5.1 Disable Auto-Run Mode
Profile Level: L1 (Crawl) NIST 800-53: CM-7, AC-6
Description
Disable Cursor’s auto-run mode (sometimes called “YOLO mode”) to require explicit human approval before the AI agent executes any terminal command. This is the single most impactful security control for Cursor.
Rationale
Why This Matters:
- In auto-run mode, Cursor’s agent executes terminal commands without any user approval
- The command denylist takes a blocklist approach that researchers have repeatedly bypassed
- CVE-2026-22708 (NomShub): Shell builtins (`export`, `cd`, `eval`) bypass the command allowlist entirely because the parser only tracks external executables — enabling "deterministic, 100% reliable sandbox escape"
- GHSA-82wg-qcm4-fp2w: Environment variable manipulation bypassed the terminal allowlist
- Disabling auto-run prevents the majority of documented attack scenarios
Attack Prevented: Autonomous code execution, sandbox escape via shell builtins, privilege escalation, data exfiltration
Real-World Context:
- CyberScoop reported a one-line prompt attack that morphed Cursor’s agent into a local shell with full developer privileges
- The NomShub attack chain achieved persistent remote access by chaining prompt injection → sandbox escape → `~/.zshenv` overwrite → GitHub OAuth device code hijack
ClickOps Implementation
Step 1: Disable Auto-Run
- Open Cursor → Settings
- Search for: `auto-run` or `autoRun`
- Disable: Agent Auto-Run
Code Pack: Config
# Cursor settings to disable auto-run and enforce manual approval
# Add to .vscode/settings.json or user settings:
cat <<'SETTINGS'
{
"cursor.agent.autoRun": false,
"cursor.agent.enableSandbox": true,
"cursor.agent.requireApprovalForCommands": true
}
SETTINGS
Step 2: Verify Auto-Run is Disabled
Code Pack: Config
# Verify auto-run is disabled in all settings locations
echo "=== Checking Agent Auto-Run Status ==="
SETTINGS_PATHS=(
"${HOME}/Library/Application Support/Cursor/User/settings.json"
"${HOME}/.config/Cursor/User/settings.json"
".vscode/settings.json"
)
for f in "${SETTINGS_PATHS[@]}"; do
if [ -f "$f" ]; then
AUTORUN=$(grep -o '"cursor.agent.autoRun"[[:space:]]*:[[:space:]]*[a-z]*' "$f" 2>/dev/null || echo "not set")
echo " $f: $AUTORUN"
if echo "$AUTORUN" | grep -q "true"; then
echo " WARN: Auto-run is ENABLED — this allows agent to execute commands without approval"
fi
fi
done
Time to Complete: ~2 minutes
Validation & Testing
- Start an agent session
- Agent proposes a terminal command
- Verify the command requires explicit “Run” approval
- Verify destructive commands (rm, git push) show warning
Expected result: Every terminal command requires explicit user approval
Operational Impact
| Aspect | Impact Level | Details |
|---|---|---|
| Developer Productivity | Low-Medium | Must click “Run” for each agent command |
| Security Posture | Critical Improvement | Prevents autonomous code execution attacks |
| Maintenance Burden | None | One-time setting |
| Rollback Difficulty | Easy | Re-enable auto-run in settings |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | CM-7 | Least functionality |
| NIST 800-53 | AC-6 | Least privilege |
| OWASP Agentic | ASI06 | Code execution |
| OWASP Agentic | ASI03 | Identity and privilege abuse |
| MITRE ATLAS | AML.T0061 | AI agent tools abuse |
5.2 Configure Agent Sandbox
Profile Level: L2 (Walk) NIST 800-53: SC-39, CM-7
Description
Enable and configure Cursor’s agent sandbox to restrict file system access, network connectivity, and process execution for AI agent sessions.
Rationale
Why This Matters:
- The sandbox (GA on macOS since Cursor 2.0, all platforms since early 2026) provides filesystem isolation — writes are scoped to workspace only
- However, local agents have full filesystem read access by default, including `~/.ssh/`, `~/.aws/`, and `.env` files
- macOS sandbox (Apple Seatbelt) permits writes anywhere in `~/` rather than restricting to the workspace only
- Linux uses Landlock (filesystem) + seccomp (syscall blocking)
- Windows runs the Linux sandbox inside WSL2
Known Limitation: The sandbox is necessary but not sufficient. Researchers have demonstrated bypasses via shell builtins (NomShub) and the macOS Seatbelt scope. Use sandbox alongside disabled auto-run and network controls.
ClickOps Implementation
Step 1: Enable Sandbox
- Open Cursor → Settings
- Navigate to: Agent → Security
- Enable: Sandbox Mode
Step 2: Configure Network Access (Cursor 2.5+)
- In settings, navigate to: Agent → Network Access
- Choose restriction level:
  - Restrict to sandbox.json domains: Most restrictive — only domains listed in the project's `sandbox.json`
  - Restrict to allowlist + Cursor defaults: Moderate — approved domains plus Cursor's required endpoints
  - Allow all: Least restrictive (not recommended)
- For Enterprise: Enforce network allowlists/denylists from admin dashboard
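If you choose the sandbox.json option, the project file enumerates the domains agent sessions may reach. The fragment below is illustrative only: the field names are assumptions of this sketch, so confirm the real schema against Cursor's documentation before relying on it:

```shell
# Write an illustrative sandbox.json domain allowlist and sanity-check that it parses.
# NOTE: field names here are assumed for illustration -- verify against Cursor's docs.
SANDBOX_JSON="${SANDBOX_JSON:-${TMPDIR:-/tmp}/sandbox.json}"
cat > "$SANDBOX_JSON" <<'JSON'
{
  "network": {
    "allowedDomains": [
      "registry.npmjs.org",
      "pypi.org",
      "github.com"
    ]
  }
}
JSON
if command -v jq >/dev/null 2>&1; then
  jq empty "$SANDBOX_JSON" && echo "PASS: $SANDBOX_JSON parses as valid JSON"
fi
```

Keep the domain list short: package registries and your source forge usually cover legitimate agent traffic.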
Step 3: For Enterprise — Enforce Sandbox Org-Wide
- In admin dashboard, enable Require Sandbox for all agent sessions
- Configure organization-level network allowlists
Time to Complete: ~10 minutes
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | SC-39 | Process isolation |
| NIST 800-53 | CM-7 | Least functionality |
| OWASP Agentic | ASI06 | Code execution controls |
5.3 Secure Background/Cloud Agents
Profile Level: L3 (Run) NIST 800-53: SC-7, AC-6
Description
Configure security controls for Cursor’s Background Agents (remote cloud agents that run in isolated Ubuntu VMs on Cursor’s AWS infrastructure) or deploy self-hosted cloud agents for maximum control.
Rationale
Why This Matters:
- Background agents clone repositories, work on branches, and submit PRs autonomously
- Cursor acknowledges background agents have “a much bigger surface area of attacks compared to existing Cursor features”
- Cloud agents with Computer Use (Feb 2026) gave each agent its own VM with browser access and video recording — creating lateral movement risk if compromised
- Self-hosted cloud agents (March 2026) keep code and execution entirely within your infrastructure
Attack Prevented: Code exfiltration via cloud agents, lateral movement from compromised agent VMs
ClickOps Implementation
Option A: Restrict Cloud Agent Usage
- For Enterprise: In admin dashboard, disable Cloud Agents entirely
- Or configure agent run settings to require approval for all cloud agent operations
Option B: Deploy Self-Hosted Cloud Agents (Enterprise)
- Self-hosted agents run entirely within your infrastructure using outbound-only HTTPS connections
- Deploy via Helm chart or Kubernetes operator
- Code, tool execution, and build artifacts never leave your environment
- No inbound ports, firewall changes, or VPNs needed
Time to Complete: ~2 hours (self-hosted) or ~5 minutes (restrict)
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | SC-7 | Boundary protection |
| NIST 800-53 | AC-6 | Least privilege |
| NIST AI RMF | MANAGE 1.3 | AI deployment risk management |
6. Rules File & Project Security
6.1 Audit .cursorrules for Hidden Payloads
Profile Level: L1 (Crawl) NIST 800-53: SI-3, CM-7
Description
Scan .cursorrules and .cursor/rules/*.mdc files for hidden Unicode characters and suspicious instructions that could carry prompt injection payloads. Rules files define project-level AI instructions that automatically apply to all AI interactions — making them a potent supply chain attack vector.
Rationale
Why This Matters:
- Pillar Security demonstrated that invisible Unicode characters (zero-width joiners, bidirectional text markers) embedded in `.cursorrules` files silently instruct the AI to inject backdoors into all generated code
- Instructions are invisible in code editors and GitHub diffs
- Compromised rules files affect all team members who clone the repository
- Attack survives project forking — creating downstream supply chain contamination
- No trace in chat history or coding logs; security teams have zero visibility
- Cursor disputed this as “not a vulnerability on their side”
Attack Prevented: Supply chain poisoning via rules file prompt injection, invisible backdoor insertion, team-wide code compromise
Real-World Context:
- GitHub added hidden Unicode warnings to diffs by May 2025, implicitly validating the risk
- HiddenLayer researchers demonstrated control token abuse (`<user_query>`, `<user_info>`) to escalate malicious instructions to user-instruction privilege level
ClickOps Implementation
Step 1: Scan Rules Files for Hidden Unicode
Code Pack: Config
# Scan .cursorrules and .cursor/rules/ for hidden Unicode characters
# that could carry invisible prompt injection payloads
echo "=== Scanning for hidden Unicode in AI rules files ==="
RULES_FILES=()
[ -f ".cursorrules" ] && RULES_FILES+=(".cursorrules")
if [ -d ".cursor/rules" ]; then
while IFS= read -r -d '' f; do
RULES_FILES+=("$f")
done < <(find .cursor/rules -type f -name "*.mdc" -print0 2>/dev/null)
fi
if [ ${#RULES_FILES[@]} -eq 0 ]; then
echo " No rules files found in project (OK)"
exit 0
fi
FOUND_HIDDEN=0
for f in "${RULES_FILES[@]}"; do
# Detect zero-width characters, bidirectional markers, and other invisible Unicode
# U+200B (zero-width space), U+200C/D (zero-width non-joiner/joiner),
# U+200E/F (LTR/RTL marks), U+2060 (word joiner), U+FEFF (BOM)
# NOTE: -P (Perl regex) requires GNU grep; stock macOS grep lacks it (brew install grep, use ggrep)
HIDDEN=$(grep -cP '[\x{200B}-\x{200F}\x{2028}-\x{202F}\x{2060}\x{FEFF}]' "$f" 2>/dev/null)
HIDDEN=${HIDDEN:-0}
if [ "$HIDDEN" -gt 0 ]; then
echo " FAIL: $f contains $HIDDEN line(s) with hidden Unicode characters"
echo " View with: cat -v '$f' | grep -n 'M-b'"
FOUND_HIDDEN=$((FOUND_HIDDEN + 1))
else
echo " PASS: $f — no hidden Unicode detected"
fi
done
if [ "$FOUND_HIDDEN" -gt 0 ]; then
echo ""
echo "ACTION: Review flagged files with a hex editor before trusting"
echo " hexdump -C <file> | grep -E '(e2 80 8[b-f]|e2 80 a[a-f]|ef bb bf)'"
fi
Step 2: Review Rules File Content for Suspicious Patterns
Code Pack: Config
# Discover project rules files so their content can be reviewed
RULES_FILES=()
[ -f ".cursorrules" ] && RULES_FILES+=(".cursorrules")
if [ -d ".cursor/rules" ]; then
  while IFS= read -r -d '' f; do
    RULES_FILES+=("$f")
  done < <(find .cursor/rules -type f -name "*.mdc" -print0 2>/dev/null)
fi
if [ ${#RULES_FILES[@]} -eq 0 ]; then
  echo " No rules files found in project (OK)"
  exit 0
fi
# Review rules files for suspicious instructions
echo "=== Content Review of AI Rules Files ==="
SUSPICIOUS_PATTERNS=(
'curl\s'
'wget\s'
'eval\s'
'exec\('
'system\('
'subprocess'
'base64'
'reverse.shell'
'/dev/tcp'
'nc\s.*-e'
'<user_query>'
'<user_info>'
'ignore.*previous.*instructions'
'disregard.*above'
)
PATTERN=$(printf '%s|' "${SUSPICIOUS_PATTERNS[@]}")
PATTERN=${PATTERN%|}
for f in "${RULES_FILES[@]}"; do
MATCHES=$(grep -ciE "$PATTERN" "$f" 2>/dev/null)
MATCHES=${MATCHES:-0}
if [ "$MATCHES" -gt 0 ]; then
echo " WARN: $f has $MATCHES suspicious pattern(s):"
grep -niE "$PATTERN" "$f" 2>/dev/null | head -5
else
echo " PASS: $f — no suspicious patterns"
fi
done
Step 3: Establish Rules File Governance
- Treat the `.cursor/` directory and `.cursorrules` files as security-critical in code review — equivalent to CI/CD pipeline configurations
- Require explicit review of all changes to rules files in pull requests
- Maintain an approved rules file template for your organization (see CSA R.A.I.L.G.U.A.R.D. framework)
Time to Complete: ~15 minutes per repository
Validation & Testing
- Create a test rules file with a hidden Unicode character
- Run the scanning script — should detect and flag it
- Review flagged file with hex editor to confirm
Expected result: All rules files are free of hidden Unicode and suspicious patterns
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | SI-3 | Malicious code protection |
| NIST 800-53 | CM-7 | Least functionality |
| OWASP LLM | LLM01 | Prompt injection |
| OWASP Agentic | ASI01 | Agent goal hijacking |
| OWASP Agentic | ASI04 | Memory poisoning |
| MITRE ATLAS | AML.T0051 | LLM prompt injection |
6.2 Enforce Rules File Review in PRs
Profile Level: L2 (Walk) NIST 800-53: CM-3
Description
Require mandatory code review for any changes to AI rules files (.cursorrules, .cursor/rules/*.mdc) before they are merged. Implement CODEOWNERS rules to enforce security team review.
Rationale
Why This Matters:
- Rules file changes affect all future AI interactions for the entire team
- Malicious changes can be subtle (single-line instruction additions, Unicode injection)
- Without mandatory review, a compromised contributor can silently weaponize AI output
ClickOps Implementation
Step 1: Add Rules Files to CODEOWNERS
- In your repository, add to `.github/CODEOWNERS`:
  .cursorrules        @security-team
  .cursor/rules/      @security-team
  .cursor/mcp.json    @security-team
  .vscode/tasks.json  @security-team
- Enable branch protection requiring CODEOWNERS approval
Step 2: Configure Pre-Commit Hook (Optional)
- Add a pre-commit hook that runs the Unicode scanning script from Control 6.1
- Block commits containing hidden Unicode in rules files
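The hook can be a plain git hook rather than a framework. This sketch writes one that re-uses the hidden-Unicode idea from Control 6.1, matching the raw UTF-8 byte sequences so it works regardless of locale (requires GNU grep for `-P`); it accepts file arguments so it can be exercised outside a commit, and the scratch install path is for demonstration, with `.git/hooks/pre-commit` being the real destination:

```shell
# Write a pre-commit hook that rejects hidden Unicode in staged AI rules files.
HOOK="${HOOK:-${TMPDIR:-/tmp}/pre-commit}"   # for real use: .git/hooks/pre-commit
cat > "$HOOK" <<'HOOKEOF'
#!/usr/bin/env bash
# Check staged rules files (or files passed as arguments) for invisible Unicode.
if [ "$#" -gt 0 ]; then
  FILES=("$@")
else
  mapfile -t FILES < <(git diff --cached --name-only | grep -E '^(\.cursorrules|\.cursor/rules/.*\.mdc)$')
fi
STATUS=0
for f in "${FILES[@]}"; do
  [ -f "$f" ] || continue
  # Raw UTF-8 bytes for U+200B-U+200F, U+2060, U+FEFF; LC_ALL=C keeps PCRE in byte mode.
  if LC_ALL=C grep -qP '\xe2\x80[\x8b-\x8f]|\xe2\x81\xa0|\xef\xbb\xbf' "$f"; then
    echo "BLOCKED: hidden Unicode in $f -- inspect with: hexdump -C '$f'"
    STATUS=1
  fi
done
exit $STATUS
HOOKEOF
chmod +x "$HOOK"
echo "Hook written to $HOOK"
```

Copy the generated script into `.git/hooks/pre-commit` (or distribute it via your hooks manager) to make the block take effect on every commit.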
Time to Complete: ~15 minutes
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | CM-3 | Configuration change control |
| SOC 2 | CC8.1 | Changes are authorized |
| NIST SP 800-218A | PW.7.1 | Review code changes |
7. Workspace Trust & Code Security
7.1 Enable Workspace Trust for All Repositories
Profile Level: L1 (Crawl) NIST 800-53: CM-7
Description
Enable VSCode/Cursor Workspace Trust to prevent automatic execution of untrusted code when opening new repositories. Cursor ships with Workspace Trust disabled by default — a deliberate design choice that creates a critical attack vector.
Rationale
Why This Matters:
- With Workspace Trust disabled (Cursor's default), a malicious `.vscode/tasks.json` with `runOptions.runOn: "folderOpen"` auto-executes arbitrary code the moment a developer opens a folder — no prompt, no consent, no AI involvement needed
- Developer laptops typically hold cloud keys, PATs, API tokens, and SaaS sessions — a booby-trapped repo pivots immediately to CI/CD and cloud infrastructure
- Cursor stated they “intended to keep the autorun behavior” because “Workspace Trust disables AI and other features our users want to use”
- This vulnerability has no CVE assigned (it’s a design choice) but was disclosed by Oasis Security in September 2025
Attack Prevented: Arbitrary code execution from malicious repositories on folder open
Real-World Context:
- Oasis Security demonstrated complete exploitation: clone repo → open in Cursor → immediate code execution with developer privileges
Prerequisites
- Understanding of which repositories are trusted (internal, verified sources)
- Communication to developers about trust prompts (they will see new prompts after enabling)
ClickOps Implementation
Step 1: Enable Workspace Trust
- Open Cursor → Settings
- Search for: `security.workspace.trust`
- Apply the following settings:
Code Pack: Config
# Cursor user settings to enable Workspace Trust (disabled by default in Cursor)
# Add to user settings.json:
cat <<'SETTINGS'
{
"security.workspace.trust.enabled": true,
"security.workspace.trust.startupPrompt": "always",
"security.workspace.trust.emptyWindow": false,
"security.workspace.trust.untrustedFiles": "prompt",
"task.allowAutomaticTasks": "off"
}
SETTINGS
Step 2: Configure Trusted Folders
- Add trusted parent directories:
  - Company code: `~/work/company-name/`
  - Personal projects: `~/projects/personal/`
Step 3: Verify Trust Prompts
- Clone a new repository outside trusted folders
- Open in Cursor
- Should see: “Do you trust the authors of the files in this folder?”
- Select “No, I don’t trust the authors” for untrusted repos
Step 4: Verify Workspace Trust is Active
Code Pack: Config
# Verify Workspace Trust is enabled (Cursor defaults it to OFF)
echo "=== Workspace Trust Verification ==="
SETTINGS_PATHS=(
"${HOME}/Library/Application Support/Cursor/User/settings.json"
"${HOME}/.config/Cursor/User/settings.json"
)
for f in "${SETTINGS_PATHS[@]}"; do
if [ -f "$f" ]; then
TRUST=$(grep -o '"security.workspace.trust.enabled"[[:space:]]*:[[:space:]]*[a-z]*' "$f" 2>/dev/null || echo "not set")
TASKS=$(grep -o '"task.allowAutomaticTasks"[[:space:]]*:[[:space:]]*"[^"]*"' "$f" 2>/dev/null || echo "not set")
echo " $f:"
echo " Workspace Trust: $TRUST"
echo " Auto Tasks: $TASKS"
if echo "$TRUST" | grep -q "false" || echo "$TRUST" | grep -q "not set"; then
echo " FAIL: Workspace Trust is disabled — repos with malicious .vscode/tasks.json can auto-execute code"
fi
fi
done
Time to Complete: ~5 minutes
What Gets Restricted in Untrusted Workspaces
| Feature | Trusted | Untrusted |
|---|---|---|
| Tasks | Run automatically | Blocked |
| Debugging | Enabled | Disabled |
| Extensions | Full functionality | Limited/disabled |
| Settings (workspace) | Applied | Ignored |
| AI Features | Full | May be limited |
Operational Impact
| Aspect | Impact Level | Details |
|---|---|---|
| Developer Workflow | Medium | Must trust repos to use full features; prompts on first open |
| Security Posture | Critical Improvement | Prevents auto-execution from malicious repos |
| Maintenance Burden | Low | One-time trust decision per workspace |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | CM-7 | Least functionality |
| SOC 2 | CC6.6 | Logical access — malware protection |
| OWASP Agentic | ASI01 | Agent goal hijacking |
7.2 Scan for Secrets in Code Before AI Processing
Profile Level: L2 (Walk) NIST 800-53: IA-5
Description
Use secret scanning tools to detect and remove secrets from code before allowing AI processing. Prevents accidental credential leakage to AI providers.
Rationale
Why This Matters:
- Cursor sends code snippets to AI providers (unless Privacy Mode enabled)
- Secrets in code sent to AI may be logged or retained by provider
- AI chat history may contain secrets if discussing code with credentials
- Researchers demonstrated that prompt injection can instruct Cursor to use `grep` to find API keys and exfiltrate them via `curl`
Attack Prevented: Credential leakage via AI context, secret exfiltration via prompt injection
ClickOps Implementation
Step 1: Install Secret Scanning Extension
- In Cursor, open Extensions (Cmd/Ctrl + Shift + X)
- Install: GitGuardian or TruffleHog extension
- Configure to scan on save
Step 2: Enable Pre-Commit Hooks
- Install the `pre-commit` framework
- Add secret scanning hooks (e.g., `detect-secrets`, `gitleaks`, `trufflehog`)
- Run `pre-commit install` in the repository
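The three steps above can be bootstrapped with a short script. A minimal sketch, assuming gitleaks as the scanner; the repository URL and `rev` pin below are assumptions to verify against a release your team has vetted:

```shell
#!/bin/sh
# Write a minimal .pre-commit-config.yaml that runs gitleaks before each commit.
# NOTE: the repo URL and rev are assumptions -- pin to a reviewed release.
cat > .pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.4
    hooks:
      - id: gitleaks
EOF
echo "Wrote .pre-commit-config.yaml -- now run: pre-commit install"
```

After `pre-commit install`, the hook runs on every `git commit` and blocks commits containing detected secrets.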
Step 3: Verify Secret Scanning
- Create test file with fake secret
- Attempt commit — should be blocked
- Remove secret and retry
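Step 3 can also be smoke-tested without committing anything. A sketch using a plain `grep` pattern check (not a substitute for a real scanner; `AKIAIOSFODNN7EXAMPLE` is AWS's documented example key ID):

```shell
#!/bin/sh
# Plant a fake AWS access key ID and confirm a simple pattern scan flags it.
# AKIAIOSFODNN7EXAMPLE is the well-known AWS documentation example value.
printf 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\n' > secret_test.txt
if grep -qE 'AKIA[0-9A-Z]{16}' secret_test.txt; then
  echo "DETECTED: fake credential flagged"
else
  echo "MISSED: pattern did not flag the fake credential"
fi
rm -f secret_test.txt
```

If your installed scanner (gitleaks, trufflehog) does not flag the same planted value during a commit attempt, the hook is misconfigured.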
8. Extension & Integration Security
8.1 Audit and Restrict VSCode Extensions
Profile Level: L1 (Crawl) NIST 800-53: CM-7
Description
Review all installed VSCode extensions and remove unnecessary or untrusted ones. Extensions have broad permissions and can access code, secrets, and network. Cursor uses the Open VSX registry instead of Microsoft’s official Marketplace — introducing unique supply chain risks.
Rationale
Why This Matters:
- VSCode extensions can read all workspace files and make network requests
- Cursor uses Open VSX, which has weaker verification than Microsoft’s Marketplace
- In June 2025, a fake “Solidity Language” extension on Open VSX led to a confirmed $500,000 cryptocurrency theft — the extension was a dropper that installed remote access tools and credential stealers
- In December 2025, researchers found Cursor was recommending extensions that didn’t exist in Open VSX, enabling attackers to register those names and publish malware that the IDE actively recommended
Attack Prevented: Malicious extension data exfiltration, cryptomining, credential theft, supply chain compromise
Real-World Context:
- $500K crypto theft via malicious Open VSX extension (Kaspersky, July 2025)
- Extension name squatting across Cursor, Windsurf, and Google Antigravity (December 2025)
ClickOps Implementation
Step 1: Audit Installed Extensions
Code Pack: Config
# List installed extensions with versions (verify publisher and install counts manually in Open VSX)
echo "=== Installed Extension Audit ==="
if command -v cursor &>/dev/null; then
cursor --list-extensions --show-versions 2>/dev/null | while IFS= read -r ext; do
echo " $ext"
done
echo ""
TOTAL=$(cursor --list-extensions 2>/dev/null | wc -l | tr -d ' ')
echo "Total extensions: $TOTAL"
else
echo "Cursor CLI not found in PATH"
echo "Check: /Applications/Cursor.app/Contents/MacOS/Cursor --list-extensions"
fi
echo ""
echo "=== Extension Risk Checklist ==="
echo " [ ] Remove extensions not updated in >1 year"
echo " [ ] Remove extensions with <10K installs (less vetted)"
echo " [ ] Verify publisher identity for all security-relevant extensions"
echo " [ ] Check that no extensions were side-loaded from .vsix files"
echo " [ ] Confirm extensions come from Open VSX with verified publishers"
Step 2: Remove Unnecessary Extensions
- Click extension → Uninstall
- Focus on:
- Extensions with <10K installs (less vetted)
- Extensions not updated in >1 year
- Extensions requesting network/filesystem permissions unnecessarily
- Extensions side-loaded from `.vsix` files
Step 3: Use Extension Allowlist (Enterprise)
- Configure the `AllowedExtensions` MDM policy (JSON configuration specifying permitted publishers)
- Deploy via MDM (macOS) or Group Policy/Intune (Windows)
- Third-party plugin imports default to OFF on Enterprise (require explicit admin override)
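For non-Enterprise tiers, the allowlist check can be approximated locally. A sketch, assuming an `allowed-extensions.txt` file (one extension ID per line) maintained by your security team; the filename is a convention, not a Cursor feature:

```shell
#!/bin/sh
# Flag installed extensions that are absent from an approved allowlist file.
check_extensions() {
  allowlist="$1"   # path to a file with one approved extension ID per line
  installed="$2"   # newline-separated list of installed extension IDs
  echo "$installed" | while IFS= read -r ext; do
    [ -z "$ext" ] && continue
    if grep -qixF "$ext" "$allowlist" 2>/dev/null; then
      echo " OK: $ext"
    else
      echo " REVIEW: $ext is not on the approved allowlist"
    fi
  done
}

# Audit a live install (requires the cursor CLI on PATH):
# check_extensions allowed-extensions.txt "$(cursor --list-extensions 2>/dev/null)"
```

Run it periodically (cron or login script) and route `REVIEW` lines to your ticketing queue.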
Recommended Extensions Security Posture
| Extension Category | Risk Level | Recommendation |
|---|---|---|
| Official Microsoft | Low | Generally safe |
| GitHub Official | Low | Safe |
| Popular (>1M installs, verified publisher) | Low-Medium | Review permissions |
| Niche (<10K installs) | Medium-High | Audit code before use |
| Side-loaded .vsix | High | Avoid; verify publisher and integrity |
| Deprecated/Unmaintained | High | Remove immediately |
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | CM-7 | Least functionality |
| OWASP LLM | LLM03 | Supply chain vulnerabilities |
9. Network & Telemetry Controls
9.1 Disable Telemetry and Crash Reporting
Profile Level: L2 (Walk) NIST 800-53: SC-4
Description
Disable telemetry data collection and crash reporting to prevent code snippets or metadata from being sent to Cursor/Microsoft.
Rationale
Why This Matters:
- Telemetry may include code snippets, file paths, or project metadata
- Crash reports can contain sensitive information
- Reduces data exposure to third parties
ClickOps Implementation
Step 1: Disable All Telemetry
- Open Cursor → Settings
- Search for
telemetry - Apply the telemetry-disabling settings:
Code Pack: Config
# Cursor/VSCode settings to disable all telemetry and data collection
# Add to user settings.json:
cat <<'SETTINGS'
{
"telemetry.telemetryLevel": "off",
"telemetry.enableCrashReporter": false,
"telemetry.enableTelemetry": false,
"cursor.general.enableShadowWorkspace": false,
"cursor.general.allowAnonymousUsage": false
}
SETTINGS
Step 2: Verify Telemetry Disabled
- Check network traffic — should not see telemetry endpoints
- Use tools like Little Snitch (macOS) or Wireshark to monitor
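The verification can also be scripted against the settings file directly. A sketch; the macOS settings path shown in the comment is an assumption to adjust for Linux (`~/.config/Cursor/User/settings.json`) or Windows (`%APPDATA%\Cursor\User`):

```shell
#!/bin/sh
# Report whether settings.json pins telemetry.telemetryLevel to "off".
check_telemetry() {
  settings="$1"
  if [ ! -f "$settings" ]; then
    echo "settings.json not found at $settings"
  elif grep -q '"telemetry.telemetryLevel"[[:space:]]*:[[:space:]]*"off"' "$settings"; then
    echo "PASS: telemetry.telemetryLevel is off"
  else
    echo "FAIL: telemetry.telemetryLevel is not set to \"off\""
  fi
}

# macOS user settings path (assumption -- adjust per platform):
# check_telemetry "$HOME/Library/Application Support/Cursor/User/settings.json"
```

A `FAIL` here means local settings were changed or never applied; re-apply the settings block from Step 1.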
9.2 Configure Network Allowlisting
Profile Level: L3 (Run) NIST 800-53: SC-7
Description
Use enterprise firewall or endpoint security to allowlist only required Cursor network endpoints, blocking all other traffic.
Required Endpoints
| Endpoint | Purpose | Required For |
|---|---|---|
| `*.cursor.com` | Core application services | All users |
| `*.cursor.sh` | Authentication and SSO | All users |
| `*.cursorapi.com` | API services and marketplace | All users |
| `cursor-cdn.com` | CDN for static assets | All users |
| `downloads.cursor.com` | Client downloads and updates | All users |
| `anysphere-binaries.s3.us-east-1.amazonaws.com` | Binary updates | All users |
| `marketplace.visualstudio.com` | Extension downloads (fallback) | Extension management |
Block all other network traffic from Cursor.
Network Verification
Code Pack: Config
# Monitor Cursor network connections and verify only approved endpoints
echo "=== Active Cursor Network Connections ==="
if [[ "$OSTYPE" == "darwin"* ]]; then
lsof -i -n -P 2>/dev/null | grep -i cursor | grep ESTABLISHED
elif [[ "$OSTYPE" == "linux"* ]]; then
ss -tnp 2>/dev/null | grep cursor
fi
echo ""
echo "=== Verify against approved endpoints ==="
echo "Required domains (allowlist these in firewall):"
echo " *.cursor.com — Core application services"
echo " *.cursor.sh — Authentication and SSO"
echo " *.cursorapi.com — API services and marketplace"
echo " cursor-cdn.com — CDN for static assets"
echo " downloads.cursor.com — Client downloads and updates"
echo ""
echo "Optional (only if using cloud AI providers):"
echo " api.openai.com — OpenAI API (routed through Cursor proxy)"
echo " api.anthropic.com — Anthropic API (routed through Cursor proxy)"
echo ""
echo "Block all other outbound connections from Cursor."
Important: All AI model requests route through Cursor’s infrastructure (the domains above), not directly to api.openai.com or api.anthropic.com. Blocking direct access to AI provider APIs forces traffic through Cursor’s Privacy Mode proxy.
10. Monitoring & Audit Logging
10.1 Enable Cursor Usage Logging
Profile Level: L2 (Walk) NIST 800-53: AU-2
Description
Configure logging of Cursor AI usage for audit and compliance purposes. Ensure Cursor is running a patched version to benefit from all security fixes.
Rationale
Why This Matters:
- Compliance frameworks require logging of AI usage
- Detect anomalous usage patterns (insider threats)
- Attribution of AI-generated code
- Version tracking prevents use of vulnerable Cursor releases
ClickOps Implementation
Step 1: Verify Cursor Version
Code Pack: Config
# Verify Cursor is running a patched version (minimum 1.7+ for 2025 CVE fixes)
echo "=== Cursor Version Check ==="
CURSOR_VERSION=""
if command -v cursor &>/dev/null; then
CURSOR_VERSION=$(cursor --version 2>/dev/null | head -1)
elif [ -f "/Applications/Cursor.app/Contents/Resources/app/package.json" ]; then
CURSOR_VERSION=$(grep '"version"' "/Applications/Cursor.app/Contents/Resources/app/package.json" 2>/dev/null | head -1)
fi
if [ -n "$CURSOR_VERSION" ]; then
echo " Installed version: $CURSOR_VERSION"
echo ""
echo " Minimum safe versions:"
echo " >= 1.3 Patches CVE-2025-54135 (CurXecute) and CVE-2025-54136 (MCPoison)"
echo " >= 1.7 Patches CVE-2025-59944, CVE-2025-61590 through CVE-2025-61593"
echo " >= 2.0 Adds agent sandbox (macOS), improved MCP approval"
echo " >= 2.5 Adds sandbox network access controls"
else
echo " Unable to determine Cursor version"
fi
Step 2: Enable Built-in Logging (Enterprise)
- In admin dashboard, navigate to Compliance and Monitoring
- Enable audit logging — tracks:
- Authentication events (logins, logouts)
- User management (additions, removals, role changes)
- API key management (creation, revocation)
- Team settings changes
- Privacy Mode changes
- MCP server configuration changes
- Note: Agent responses and generated code content are NOT captured in audit logs
Step 3: Configure Log Streaming (Enterprise)
- Configure log forwarding to your SIEM platform
- Supported destinations: Splunk, Datadog, Sumo Logic, webhook endpoints, S3 buckets, Elasticsearch, CloudWatch
- Logs are JSON format with timestamps, event IDs, user details, IP addresses
10.2 Monitor for Suspicious Agent Activity
Profile Level: L2 (Walk) NIST 800-53: AU-6, SI-4
Description
Monitor developer workstations for indicators of Cursor-based attacks including unexpected process spawning, shell startup file modifications, and suspicious network connections.
Rationale
Why This Matters:
- The NomShub attack chain persisted via a `~/.zshenv` overwrite
- Prompt injection can spawn `curl` to exfiltrate data via agent terminal access
- Unexpected `cursor-tunnel` processes may indicate remote access exploitation
ClickOps Implementation
Key Indicators to Monitor:
- Shell startup file modifications: Watch `~/.zshenv`, `~/.bashrc`, and `~/.zprofile` for unexpected changes
- Process tree anomalies: AI agents spawn child processes — EDR should monitor the full process tree from Cursor
- Unexpected network connections: Flag outbound connections from Cursor subprocesses to non-allowlisted endpoints
- MCP config changes: File integrity monitoring on `.cursor/mcp.json` (project and global)
- Cursor application file tampering: Monitor for modifications to Cursor's `main.js` (malicious npm packages have overwritten this)
- `cursor-tunnel` processes: Monitor for unexpected remote tunnel activity
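Where full EDR/FIM tooling is not yet deployed, the file-integrity indicators can be approximated with a hash baseline. A sketch (the baseline location and file list are assumptions; production monitoring belongs in your EDR/FIM platform):

```shell
#!/bin/sh
# Baseline-and-diff integrity check for high-value Cursor and shell files.
# First run records hashes; later runs compare against the saved baseline.
FILES="$HOME/.cursor/mcp.json $HOME/.zshenv $HOME/.bashrc $HOME/.zprofile"
BASELINE="$HOME/.cursor-fim-baseline"

hash_files() {
  # Hash each monitored file that exists; skip missing ones silently.
  for f in $FILES; do
    if [ -f "$f" ]; then
      if command -v sha256sum >/dev/null 2>&1; then
        sha256sum "$f"
      else
        shasum -a 256 "$f"   # macOS fallback
      fi
    fi
  done
}

if [ ! -f "$BASELINE" ]; then
  hash_files > "$BASELINE"
  echo "Baseline written to $BASELINE"
elif hash_files | diff -q "$BASELINE" - >/dev/null; then
  echo "OK: no changes to monitored files"
else
  echo "ALERT: a monitored file changed since baseline"
fi
```

Schedule it via cron or a login hook; any `ALERT` warrants comparing the changed file against version control or a known-good copy.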
Compliance Mappings
| Framework | Control ID | Control Description |
|---|---|---|
| NIST 800-53 | AU-6 | Audit record review |
| NIST 800-53 | SI-4 | System monitoring |
| OWASP Agentic | ASI10 | Rogue agents |
11. Organization & Team Controls
11.1 Deploy Cursor Teams or Enterprise for Centralized Management
Profile Level: L2 (Walk)
Description
Use Cursor Teams or Enterprise edition to enforce organizational policies, manage licenses, and control AI provider access centrally.
Rationale
Why This Matters:
- Centralized policy enforcement (Privacy Mode, allowed providers, MCP servers)
- License management and usage tracking
- Audit logging at organization level
- Over half of Fortune 500 now use Cursor — enterprise governance is critical
ClickOps Implementation
Step 1: Set Up Cursor Teams or Enterprise
- Visit: https://cursor.com/pricing
- Choose plan:
- Teams ($40/user/month): SSO, org-wide Privacy Mode, usage analytics, shared rules
- Enterprise (custom pricing): All Teams features plus SCIM, MDM policies, audit logs, CMEK, self-hosted agents, AI Code Tracking API
- Create organization and invite team members
Step 2: Configure Organization Policies
- In admin dashboard:
- Privacy Mode: Enforce for all users (cannot be overridden)
- Allowed AI Models: Restrict to approved models
- MCP Servers: Configure allowlist
- Agent Settings: Disable auto-run, require sandbox
- Extensions: Configure allowlist
- Telemetry: Disable for all users
- Cloud Agents: Enable or disable per policy
- BYOK: Disable if using only org-managed keys
Enterprise-Only Features
| Feature | Description |
|---|---|
| SCIM 2.0 | Automated user provisioning/deprovisioning |
| MDM Policies | Deploy settings via Jamf, Intune, Kandji |
| Audit Logs | Authentication, settings changes, API key management |
| Log Streaming | Export to SIEM (Splunk, Datadog, etc.) |
| CMEK | Customer-managed encryption keys for embeddings |
| Self-Hosted Agents | Cloud agents running in your infrastructure |
| AI Code Tracking API | Per-commit AI attribution (alpha) |
| Cursor Blame | AI vs. human code attribution in git blame |
| Billing Groups | Cross-team spend allocation |
| Service Accounts | Automated workflow authentication |
11.2 Enforce Organizational Policies via MDM
Profile Level: L3 (Run)
Description
Use MDM (Mobile Device Management) to deploy and enforce Cursor security settings across all developer machines. MDM-deployed policies cannot be overridden locally.
Rationale
Why This Matters:
- Without MDM enforcement, developers can disable Privacy Mode, enable auto-run, or install unapproved MCP servers locally
- 78% of AI coding tool usage is shadow IT — MDM ensures governance even for unmanaged adoption
- MDM-deployed settings survive Cursor updates and reinstalls
ClickOps Implementation
Step 1: Create MDM Configuration Profile
- Create a configuration profile with these key policies:
- Allowed Team IDs: Comma-separated list restricting which team IDs can authenticate (prevents personal account usage on corporate devices)
- Allowed Extensions: JSON configuration controlling permitted extension publishers
- Privacy Mode: Force-enabled
- Workspace Trust: Force-enabled
Step 2: Deploy via MDM
- macOS (Jamf/Kandji): Deploy as a `.mobileconfig` XML profile
- Windows (Intune/SCCM): Deploy equivalent Group Policy Objects
- Linux: Deploy via configuration management (Ansible, Puppet, Chef)
- Linux: Deploy via configuration management (Ansible, Puppet, Chef)
Step 3: Distribute Compliance Hooks
- Use Cursor Hooks to enforce compliance policies at runtime
- Hooks can intercept agent actions, block unapproved commands, and scrub secrets
- Deploy hook configurations through MDM alongside editor settings
Time to Complete: ~2 hours (initial setup), ongoing maintenance for policy updates
Appendix A: Edition Compatibility
| Control | Cursor Free | Cursor Pro | Cursor Teams | Cursor Enterprise |
|---|---|---|---|---|
| Account Authentication (1.1) | ✅ | ✅ | ✅ | ✅ |
| MFA (1.2) | ✅ | ✅ | ✅ | ✅ |
| SSO/SAML (1.3) | ❌ | ❌ | ✅ | ✅ |
| SCIM Provisioning (1.4) | ❌ | ❌ | ❌ | ✅ |
| Privacy Mode (2.1) | ✅ (opt-in) | ✅ (opt-in) | ✅ (enforceable) | ✅ (enforceable) |
| .cursorignore (2.3) | ✅ | ✅ | ✅ | ✅ |
| Local Models (2.4) | ✅ | ✅ | ✅ | ✅ |
| MCP Allowlisting (4.1) | Manual | Manual | Manual | ✅ Centralized |
| MCP Tool Protection (4.2) | ✅ | ✅ | ✅ | ✅ |
| Disable Auto-Run (5.1) | ✅ | ✅ | ✅ | ✅ (enforceable) |
| Agent Sandbox (5.2) | ✅ | ✅ | ✅ | ✅ (enforceable) |
| Self-Hosted Agents (5.3) | ❌ | ❌ | ❌ | ✅ |
| Rules File Audit (6.1) | ✅ | ✅ | ✅ | ✅ |
| Workspace Trust (7.1) | ✅ | ✅ | ✅ | ✅ (enforceable) |
| Extension Allowlist (8.1) | ❌ | ❌ | ❌ | ✅ |
| Telemetry Control (9.1) | ✅ | ✅ | ✅ | ✅ |
| Audit Logs (10.1) | ❌ | ❌ | Basic | ✅ Full |
| Log Streaming (10.1) | ❌ | ❌ | ❌ | ✅ |
| Organization Policies (11.1) | ❌ | ❌ | Partial | ✅ |
| MDM Enforcement (11.2) | ❌ | ❌ | ❌ | ✅ |
| CMEK (11.1) | ❌ | ❌ | ❌ | ✅ |
| AI Code Tracking API (11.1) | ❌ | ❌ | ❌ | ✅ (alpha) |
Appendix B: Security Incidents and CVEs
| Date | CVE/ID | Name | Severity | Description | Fixed In |
|---|---|---|---|---|---|
| Mar 2025 | None | Rules File Backdoor | High | Hidden Unicode in .cursorrules injects invisible backdoors (Pillar Security) | Attack class |
| Jun 2025 | None | Malicious Extension | High | Fake Solidity extension on Open VSX → $500K crypto theft (Kaspersky) | Removed |
| Aug 2025 | CVE-2025-54135 | CurXecute | 8.6 | RCE via MCP prompt injection (AIM Security) | v1.3 |
| Aug 2025 | CVE-2025-54136 | MCPoison | 7.2 | Persistent MCP trust bypass (Check Point) | v1.3 |
| Sep 2025 | None | Workspace Trust Bypass | High | Auto-exec via disabled Workspace Trust (Oasis Security) | Design choice |
| Sep 2025 | CVE-2025-59944 | Case-Sensitivity Bypass | 8.0 | File protection bypass on macOS/Windows (Lakera) | v1.7 |
| Sep 2025 | CVE-2025-61590 | Workspace RCE | High | RCE via .code-workspace files (Geordie AI) | v1.7 |
| Sep 2025 | CVE-2025-61591 | OAuth MCP Impersonation | High | MCP server impersonation via OAuth (Geordie AI) | v1.7 |
| Sep 2025 | CVE-2025-61592 | CLI Config Exploit | High | RCE via manipulated CLI config (Geordie AI) | v1.7 |
| Sep 2025 | CVE-2025-61593 | CLI Agent Overwrite | High | Sensitive file overwrite via CLI agent (Geordie AI) | v1.7 |
| Nov 2025 | GHSA-vhc2-fjv4-wqch | Cursorignore Bypass | Medium | AI agents read files protected by .cursorignore | Cursor 1.7.23 |
| Nov 2025 | CVE-2025-64106 | MCP Install Trust | 8.8 | MCP deep-link handling trust bypass (Cyata) | Patched |
| Dec 2025 | None | Extension Recommendation | Medium | IDE recommends non-existent extensions on Open VSX (Koi Security) | Dec 1, 2025 |
| Dec 2025 | GHSA-82wg-qcm4-fp2w | Terminal Allowlist Bypass | High | Environment variable manipulation bypasses command denylist | Patched |
| 2025 | None | NomShub | Critical | Persistent remote access via sandbox breakout (Straiker) | v3.0 |
| 2026 | CVE-2026-22708 | Shell Builtin Bypass | High | Shell builtins bypass command allowlist for sandbox escape (Straiker) | Patched |
| Ongoing | 94+ CVEs | Chromium N-Days | Various | Cursor runs Chromium 6 major versions behind; 94+ unpatched CVEs (OX Security) | Unresolved |
Minimum safe version: Cursor 1.7+ (patches all September 2025 CVEs). Recommended: latest stable release.
Appendix C: Compliance Framework Mappings
OWASP Top 10 for LLM Applications (2025)
| OWASP LLM ID | Risk | Guide Controls |
|---|---|---|
| LLM01 | Prompt Injection | 4.1, 4.2, 5.1, 6.1, 6.2, 7.1 |
| LLM02 | Sensitive Information Disclosure | 2.1, 2.3, 3.1, 7.2, 9.1 |
| LLM03 | Supply Chain | 4.1, 6.1, 8.1 |
| LLM05 | Improper Output Handling | 5.1, 7.2 |
| LLM06 | Excessive Agency | 4.2, 5.1, 5.2, 5.3 |
OWASP Top 10 for Agentic Applications (2026)
| OWASP Agentic ID | Risk | Guide Controls |
|---|---|---|
| ASI01 | Agent Goal Hijacking | 5.1, 6.1, 7.1 |
| ASI02 | Tool Misuse | 4.2, 5.1 |
| ASI03 | Identity and Privilege Abuse | 4.2, 5.1, 5.2 |
| ASI04 | Memory Poisoning | 6.1, 6.2 |
| ASI05 | Supply Chain Risks | 4.1, 8.1 |
| ASI06 | Code Execution | 5.1, 5.2 |
| ASI10 | Rogue Agents | 10.2 |
NIST AI RMF and MITRE ATLAS
| Framework | Reference | Guide Controls |
|---|---|---|
| NIST AI RMF GOVERN 1.4 | AI deployment controls | 2.4, 5.3, 11.1 |
| NIST AI RMF GOVERN 1.7 | AI data governance | 2.1, 2.3 |
| NIST AI RMF MAP 1.5 | Risk characterization | 4.1 |
| NIST AI RMF MANAGE 1.3 | Deployment risk mgmt | 5.3, 11.1 |
| NIST SP 800-218A PW.5.1 | Injection prevention | 4.1, 6.1 |
| NIST SP 800-218A PW.7.1 | Code review | 6.2 |
| MITRE ATLAS AML.T0051 | LLM prompt injection | 6.1, 7.1 |
| MITRE ATLAS AML.T0061 | AI agent tools | 5.1, 5.2 |
| MITRE ATLAS AML.T0063 | Poisoned AI agent tool | 4.1 |
Appendix D: References
Official Cursor Documentation:
- Cursor Trust Center
- Cursor Security
- Cursor Data Use & Privacy
- Cursor Enterprise
- Cursor Privacy and Data Governance Docs
- Cursor Identity and Access Management
- Cursor Network Configuration
- Cursor Deployment Patterns
- Cursor Ignore Files
- Cursor Agent Sandboxing Blog
- Cursor Self-Hosted Cloud Agents
- Cursor DPA
VSCode Security (Cursor inherits):
CVE and Vulnerability Research:
- Tenable: CurXecute and MCPoison FAQ
- Check Point Research: MCPoison
- Lakera: CVE-2025-59944
- Cyata: CVE-2025-64106
- Geordie AI: Multiple Cursor CVEs
- Oasis Security: Workspace Trust Bypass
- Straiker: NomShub Sandbox Breakout
- Pillar Security: Rules File Backdoor
- HiddenLayer: Control Token Abuse
- Kaspersky: $500K Crypto Theft via Extension
- OX Security: 94 Chromium Vulnerabilities
Industry Frameworks:
- OWASP Top 10 for LLM Applications (2025)
- OWASP Top 10 for Agentic Applications (2026)
- NIST AI Risk Management Framework
- NIST SP 800-218A: Secure Software Development for GenAI
- MITRE ATLAS
- OpenSSF: Security-Focused Guide for AI Code Assistant Instructions
- CSA R.A.I.L.G.U.A.R.D. Framework
Government Guidance:
- NCSC: Guidelines for Secure AI System Development
- NSA: Deploying AI Systems Securely
- NSA: AI Data Security
Community Resources:
- Endor Labs: Cursor Security 2026
- MintMCP: Cursor Security Guide
- matank001/cursor-security-rules (GitHub)
- brighton-labs/railguard-cursor-coding (GitHub)
- slowmist/MCP-Security-Checklist (GitHub)
Changelog
| Date | Version | Maturity | Changes | Author |
|---|---|---|---|---|
| 2026-04-15 | 0.3.0 | draft | [SECURITY] Major update: add MCP Server Security (sec 4), Agent & Sandbox Security (sec 5), Rules File Security (sec 6), SSO/SCIM (1.3-1.4), .cursorignore (2.3), extension supply chain (8.1), agent monitoring (10.2), MDM enforcement (11.2). Update Security Incidents appendix with 12+ new CVEs/vulns. Add OWASP Agentic/LLM, NIST AI RMF, MITRE ATLAS compliance mappings. Update edition compatibility for Teams/Enterprise tiers. Create 12 code pack files. | Claude Code (Opus 4.6) |
| 2026-02-19 | 0.2.0 | draft | Migrate all inline code blocks to Code Packs (sections 2.1, 3.1, 3.2, 3.3, 4.2, 7.1) | Claude Code (Opus 4.6) |
| 2025-12-15 | 0.1.0 | draft | Initial Cursor hardening guide | Claude Code (Opus 4.5) |
Contributing
Found an issue or want to improve this guide?
- Report outdated information: Open an issue with tag `content-outdated`
- Propose new controls: Open an issue with tag `new-control`
- Submit improvements: See Contributing Guide
Questions or feedback?
- GitHub Discussions: [Link]
- GitHub Issues: [Link]
Built with focus on securing AI-powered development tools while maintaining developer productivity.