Version: @optimizely/mcp-server-exp@1.0.7
Environment: VS Code 1.104.1, Node.js v24.8.0
Testing Date: September 22, 2025
Reporter: SumUp
The Optimizely MCP Server has three critical bugs that prevent normal operation, affecting 63% of all tools (19/30) for enterprise users with migrated projects. Testing was conducted with a clean VS Code instance using only the Optimizely MCP server.
Testing Methodology: Tests were run against two project types:

- Native Project (6495345115398144): Created directly in Feature Experimentation - ✅ Works
- Migrated Project (18423480104): Legacy project migrated from Web Experimentation - ❌ Broken

This comparison demonstrates that Bug #2 specifically affects migrated projects.
- 🚨 Server Startup: ES module import failures prevent initialization
- 🚨 Enterprise Readiness: Analytics views empty for migrated projects
- 🚨 System Stability: Memory leaks cause server crashes with large datasets
- ⚠️ Performance: 8.3x degradation (4 seconds for 908 flags), possibly due to misconfigured rate limiting
## Bug #1: ES Module Import Failures Prevent Server Startup

Impact: Complete server startup failure
Status: 🔧 WORKAROUND available with pnpm patch
The server fails to start due to 8 dynamic imports missing .js extensions across 3 files. This prevents any MCP functionality.
### Technical Details & Fix

Error Example:

```
Error: Cannot find module '/var/folders/.../node_modules/@optimizely/mcp-server-exp/dist/tools/semantic_search.js'
```

Affected Files:

- `dist/cli/orchestration-template-cli.js` (6 imports at lines 126, 221, 364, 404, 510, 541)
- `dist/tools/OptimizelyMCPTools.js` (PaginationParameterExtractor import)
- `dist/tools/individual/AnalyzeData.js` (TemplateQueryTranslator import)
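For illustration, a sketch of the kind of one-line change the patch applies (the module path comes from the error above; the export name and exact call sites are assumptions and differ per file):

```js
// Before: Node's ESM loader does not resolve extensionless relative
// specifiers, so this dynamic import fails at runtime.
const { SemanticSearch } = await import('./semantic_search');

// After: explicit .js extension, as ESM dynamic imports require.
const { SemanticSearch } = await import('./semantic_search.js');
```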
Quick Fix:

```bash
pnpm add -D @optimizely/mcp-server-exp@1.0.7
# Apply pre-created 5.9KB patch fixing all 8 imports
pnpm install
```

Package.json Configuration:
```json
{
  "pnpm": {
    "patchedDependencies": {
      "@optimizely/mcp-server-exp@1.0.7": "patches/optimizely-mcp-server-exp-1.0.7.patch"
    }
  }
}
```
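If you need to recreate the patch yourself, pnpm's built-in patch workflow is one way to do it (a sketch; the temporary directory path is printed by the first command):

```bash
# Open an editable copy of the package in a temp directory
pnpm patch @optimizely/mcp-server-exp@1.0.7

# Add the missing .js extensions in the three dist files listed above,
# then persist the edits as a patch file and register it in package.json:
pnpm patch-commit <path-printed-by-pnpm-patch>
```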
## Bug #2: Analytics Views Empty for Migrated Projects

Impact: CRITICAL - Cascade failure affecting 63% of tools (19/30) + 83% data loss
Root Cause: SQLite cache sync fundamentally broken for migrated projects
Enterprise Optimizely accounts with migrated projects (Web → Feature Experimentation) experience catastrophic data loss, with only 160 of 908+ entities successfully cached.
### Technical Analysis
Symptoms:

- ❌ `analyze_data` on `flags_unified_view` → "No data found" (despite 908 flags existing)
- ❌ `analyze_data` on `experiments_unified_view` → "No data found"
- ✅ `list_entities('flag')` → 908 flags successfully retrieved (raw API works)
Comparative Evidence:

Native Project (6495345115398144): ✅ Works

```
analyze_data(flags_unified_view) → Returns 2 flags successfully
```

Migrated Project (18423480104): ❌ Broken

```
analyze_data(flags_unified_view) → "NoDataFound" (despite 908 flags synced)
```

Root Cause:
The MCP server uses a local SQLite database (~/.optimizely-mcp/cache.db) for analytics views. The cache sync pipeline has fundamental compatibility issues with migrated projects, resulting in:
- Massive data loss: Only 160/908+ flags successfully synced
- Table misrouting: Flags stored in the `features` table, but queries search the `flags` table
- Complete experiment loss: All A/B tests missing from cache (`experiments` table empty)
- False success reporting: Cache reports "success" while missing 83% of data
Cache vs Query Disconnect:
```
manage_cache(refresh)  → ✅ "Synced entities successfully"
analyze_data(any_view) → ❌ "NoDataFound"
list_entities(flag)    → ✅ Returns flag data (bypasses query system)
```

The issue appears to be incorrect database table routing for migrated projects:
- Cache operations succeed and store data in the `features` table (160 entities)
- Query operations search the `flags` table (empty) for migrated projects
- Direct entity access bypasses the broken query system and works
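The proper fix belongs in the server's sync/routing layer, but a sketch can make the mismatch concrete. Assuming `features` and `flags` share a compatible schema (an assumption; only row counts were verified here), a stopgap SQLite view could surface the misrouted rows:

```sql
-- Hypothetical stopgap, valid only if `features` and `flags` have
-- compatible columns: expose migrated-project rows alongside native ones.
CREATE VIEW IF NOT EXISTS all_flags AS
  SELECT * FROM flags
  UNION ALL
  SELECT * FROM features;
```

Whether the query layer could actually be pointed at such a view is untested; it is shown only to illustrate the table-routing gap.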
CRITICAL DISCOVERY: Cache sync is fundamentally broken - 83% data loss confirmed
Database Table Analysis:
```
features table:    160 items (flags mislabeled as "features" for migrated projects)
experiments table: 0 items (empty - all experiments missing)
flags table:       1 item (only native project data)
rules table:       1 item (only native project data)
```

Missing Entity Evidence - Critical Flags Not Synced:

- `_shop__self_service_payment_plus` - ❌ NOT FOUND in cache
- `_shop__payment_component` - ✅ Found in cache (1 of few)
- `_dashboard_payment_links_page` - ✅ Found in cache
- `_lending_apm_self_service_-_br` - ✅ Found in cache
Sync Failure Rate: Only 160 of 908+ flags successfully cached (83% data loss)
Performance Impact:
- 8.3x performance degradation (4 seconds for 908 flags)
### CRITICAL: Wrong API Rate Limits Configured 🚨
Current MCP Server Config (WRONG):
```js
featureExperimentationAPI: {
  requestsPerMinute: 60, // Wrong
  requestsPerSecond: 10  // 5x TOO HIGH!
}
```
Actual Optimizely API Limits (from official docs):
- Feature Experimentation API: 2 requests/second, 120 requests/minute (server allows 10/sec = 5x over limit)
- Legacy Optimizely API: 100 requests/minute (server allows 60/min = within limit)
Root Cause: Rate limiting IS implemented, but the configured Feature Experimentation limits are 5x too high, causing API throttling/rejections.
Required Optimizations:
```js
// Fix rate limits in apiConfig.js (from official Feature Experimentation API docs)
featureExperimentationAPI: {
  requestsPerMinute: 120, // ✅ CORRECT: Official documented limit
  requestsPerSecond: 2    // ✅ CORRECT: Official documented limit (currently 10 = 5x too high)
}
```

```
# Smart pagination (from Optimizely docs) - Already needed
GET /flags/v1/projects/{project_id}/flags?page=1&per_page=50
GET /v2/attributes?page=1&per_page=25&environment_key=production
```

Expected Improvement: 80%+ performance gain (4s → 0.8s) by fixing rate limits + pagination
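A minimal sketch of a client-side scheduler that honors the documented 2 req/s limit while paginating. This is illustrative only: the helper names, the `items` response shape, and `OPTIMIZELY_API_TOKEN` are assumptions, not part of the server.

```js
// Interval-based scheduler: spaces calls so they never exceed the
// documented Feature Experimentation limit of 2 requests/second.
function createThrottle(requestsPerSecond) {
  const intervalMs = 1000 / requestsPerSecond;
  let nextSlot = 0;
  return async () => {
    const now = Date.now();
    const wait = Math.max(0, nextSlot - now);
    nextSlot = Math.max(now, nextSlot) + intervalMs;
    if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait));
  };
}

const throttle = createThrottle(2); // per official FX API docs

// Paginated fetch of all flags, 50 per page, throttled per request.
async function fetchAllFlags(projectId) {
  const flags = [];
  for (let page = 1; ; page++) {
    await throttle();
    const res = await fetch(
      `https://api.optimizely.com/flags/v1/projects/${projectId}/flags?page=${page}&per_page=50`,
      { headers: { Authorization: `Bearer ${process.env.OPTIMIZELY_API_TOKEN}` } }
    );
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = await res.json();
    const items = body.items ?? []; // response shape assumed from REST list conventions
    flags.push(...items);
    if (items.length < 50) break; // short page signals the last one
  }
  return flags;
}
```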
### Permission & Scope Risk Amplification with MCP
Note: SumUp's Optimizely setup uses a single project (18423480104) containing flags across multiple teams. The MCP server's conversational and bulk-operation capabilities substantially increase the risk of accidental cross-team changes because:
- There is no UI-based ownership context in a conversational interface
- Bulk tools (e.g. `update_flags_bulk`, `archive_flags_bulk`) can target matching patterns across teams
- Autocomplete/suggestions from editor integrations may propose flags from other teams without ownership metadata
Recommendation: Consider splitting large company-wide projects into team-scoped projects or adding mandatory confirmation prompts and ownership checks for any bulk or pattern-based operations.
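A sketch of what such a guard could look like. This is entirely hypothetical: the `owner_team` metadata field and the wrapper function do not exist in the current server; the tool-call shapes mirror the examples elsewhere in this report.

```js
// Hypothetical pre-flight check for pattern-based bulk operations:
// refuse to proceed when matches cross team ownership boundaries,
// unless the caller explicitly confirms.
async function guardedArchiveBulk({ pattern, team, confirmed = false }) {
  const { items: flags } = await list_entities({ entity_type: 'flag' }); // shape assumed
  const matches = flags.filter((f) => f.key.includes(pattern));
  const foreign = matches.filter((f) => f.owner_team !== team); // assumed metadata
  if (foreign.length > 0 && !confirmed) {
    throw new Error(
      `Refusing bulk archive: ${foreign.length} of ${matches.length} matching ` +
      `flags belong to other teams. Re-run with confirmed: true to override.`
    );
  }
  return archive_flags_bulk({ keys: matches.map((f) => f.key) });
}
```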
## Bug #3: Memory Leaks Crash the Server with Large Datasets

Impact: Server becomes unusable with large projects
Trigger: `get_entity_details` with 900+ entities
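A minimal reproduction, using the call shape seen elsewhere in this report (the parameter names are assumptions):

```js
// Triggers runaway memory growth once the project holds 900+ entities;
// observed to crash the server at ~4GB heap (see crash analysis below).
get_entity_details({
  entity_type: 'flag',
  entity_key: 'test_flag_native',
  project_id: '6495345115398144',
});
```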
### Crash Analysis
Latest Crash (September 21, 2025):
```
Server start: 11:12:45, Tools discovered: 11:12:47
Crash occurred: 11:52:43 (during MCP tool execution)
Peak Memory: 4,090MB before crash
Error: FATAL ERROR: Ineffective mark-compacts near heap limit
```

Additional Crash (September 23, 2025) - analyze_data Memory Consumption:
```json
{
  "structured_query": {
    "from": "change_history_flat",
    "limit": 10,
    "where": {
      "project_id": "18423480104",
      "change_description": "contains:carousel"
    }
  }
}
```

Output observed: "Canceled: Canceled" after the process encountered a JavaScript heap out-of-memory error. Server logs showed the following GC and OOM trace:
```
2025-09-23 12:45:46.046 [info] Starting server optimizely
2025-09-23 12:45:46.049 [info] Connection state: Starting
2025-09-23 12:45:46.049 [info] Starting server from LocalProcess extension host
2025-09-23 12:45:46.058 [info] Connection state: Starting
2025-09-23 12:45:46.059 [info] Connection state: Running
2025-09-23 12:45:48.583 [info] Discovered 30 tools
2025-09-23 12:49:32.090 [warning] [server stderr]
2025-09-23 12:49:32.090 [warning] [server stderr] <--- Last few GCs --->
2025-09-23 12:49:32.090 [warning] [server stderr]
2025-09-23 12:49:32.090 [warning] [server stderr] [74892:0xa9340c000] 222333 ms: Scavenge 4030.8 (4048.9) -> 4017.4 (4082.1) MB, pooled: 0 MB, 28.12 / 0.00 ms (average mu = 0.292, current mu = 0.238) allocation failure;
2025-09-23 12:49:32.090 [warning] [server stderr] [74892:0xa9340c000] 224015 ms: Mark-Compact (reduce) 4017.4 (4082.1) -> 4010.9 (4019.1) MB, pooled: 0 MB, 1671.08 / 0.00 ms (+ 2.3 ms in 1 steps since start of marking, biggest step 2.3 ms, walltime since start of marking 1682 ms) (average mu = 0.247,
2025-09-23 12:49:32.091 [warning] [server stderr] FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
2025-09-23 12:49:32.091 [warning] [server stderr] ----- Native stack trace -----
2025-09-23 12:49:32.091 [warning] [server stderr]
2025-09-23 12:49:32.092 [warning] [server stderr] 1: 0x1050f8e58 node::OOMErrorHandler(char const*, v8::OOMDetails const&) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.093 [warning] [server stderr] 2: 0x1052f027c v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, v8::OOMDetails const&) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.094 [warning] [server stderr] 3: 0x10554e408 v8::internal::Heap::stack() [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.095 [warning] [server stderr] 4: 0x1055517dc v8::internal::Heap::HasLowYoungGenerationAllocationRate() [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.096 [warning] [server stderr] 5: 0x1055629f8 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags)::$_1::operator()() const [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.097 [warning] [server stderr] 6: 0x1055622cc void heap::base::Stack::SetMarkerAndCallbackImpl<v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags)::$_1>(heap::base::Stack*, void*, void const*) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.098 [warning] [server stderr] 7: 0x105f17864 PushAllRegistersAndIterateStack [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.099 [warning] [server stderr] 8: 0x10554c10c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.100 [warning] [server stderr] 9: 0x1054c3fe8 v8::internal::StackGuard::HandleInterrupts(v8::internal::StackGuard::InterruptLevel) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.101 [warning] [server stderr] 10: 0x1059b0cec v8::internal::Runtime_StackGuard(int, unsigned long*, v8::internal::Isolate*) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.102 [warning] [server stderr] 11: 0x106005f74 Builtins_CEntry_Return1_ArgvOnStack_NoBuiltinExit [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.103 [warning] [server stderr] 12: 0x105fa7684 Builtins_AsyncFunctionAwaitResolveClosure [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.104 [warning] [server stderr] 13: 0x10607c5f8 Builtins_PromiseFulfillReactionJob [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.104 [warning] [server stderr] 14: 0x105f96a50 Builtins_RunMicrotasks [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.105 [warning] [server stderr] 15: 0x105f667b0 Builtins_JSRunMicrotasksEntry [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.106 [warning] [server stderr] 16: 0x105492998 v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.106 [warning] [server stderr] 17: 0x10549320c v8::internal::(anonymous namespace)::InvokeWithTryCatch(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.107 [warning] [server stderr] 18: 0x105493348 v8::internal::Execution::TryRunMicrotasks(v8::internal::Isolate*, v8::internal::MicrotaskQueue*) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.108 [warning] [server stderr] 19: 0x1054c224c v8::internal::MicrotaskQueue::RunMicrotasks(v8::internal::Isolate*) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.108 [warning] [server stderr] 20: 0x1054c2bac v8::internal::MicrotaskQueue::PerformCheckpoint(v8::Isolate*) [/Users/davimedeiros/.npm/_npx/dfd121ea5f66bed8/node_modules/node/bin/node]
2025-09-23 12:49:32.109 [info] Connection state: Error Process exited with code null
```

Previous Crash Pattern:
```
Timeline: 10:33:55 → 10:40:33 (6m 38s processing)
Memory: 4089.3MB → 4086.7MB (ineffective GC)
Entity: test_flag_native, Project: 6495345115398144
```

Garbage Collection Log:
```
[6088:0xa9980c000] 392826 ms: Scavenge (interleaved) 4089.3 (4094.8) -> 4086.7 (4095.8) MB
[6088:0xa9980c000] 393835 ms: Mark-Compact (reduce) 4086.8 (4095.8) -> 4086.7 (4089.1) MB
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed
```
Performance Issues:
- Memory usage grows continuously during entity retrieval until the heap limit is reached
- Operations take several seconds instead of being instant
- 4GB+ heap allocation before crash
- Server requires restart after crashes
- Memory consumption reached 4,089MB during single entity query
- Garbage collection attempts failed to reclaim memory
- "Ineffective mark-compacts" indicates memory leak or object retention
Confirmed Memory Leak Evidence:
- ✅ Reproducible across different query patterns
- ✅ Affects both specific entity queries and broader operations
- ✅ No correlation with project type (native vs migrated)
- ✅ Memory usage patterns identical across crashes
- ✅ No effective workaround through query filtering
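As a stopgap only (it postpones the OOM rather than fixing the retention bug), the V8 heap ceiling can be raised when launching the server. A sketch, assuming the npx launch path visible in the stack traces above:

```bash
# Raise V8's old-space limit to 8 GB for the MCP server process.
# This delays the crash; the underlying leak still needs an upstream fix.
NODE_OPTIONS="--max-old-space-size=8192" npx @optimizely/mcp-server-exp
```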
Testing Coverage: 25/30 tools conclusively tested (83% coverage)
Result: 63% of tools broken due to analytics view cascade failure
```mermaid
pie title MCP Tool Testing Results (30 Total Tools)
    "Working (6)" : 6
    "Analytics Cascade Failure (19)" : 19
    "Unable to Test (5)" : 5
```
Core functionality that works with migrated projects:
| Tool | Category | Status | Use Case |
|---|---|---|---|
| `list_projects` | Discovery | ✅ | Project discovery and overview |
| `list_entities` | Discovery | ✅ | Entity listing with filters |
| `get_system_status` | System | ✅ | Health checks, connectivity |
| `manage_cache` | System | ✅ | Cache management and refresh |
| `manage_flag_state` | Flag Ops | ✅ | Enable/disable flags |
| `get_entity_documentation` | Documentation | ✅ | Field discovery for queries |
Analytics Cascade Failures (12 tools) - Due to Empty Views
| Tool | Category | Dependency Issue |
|---|---|---|
| `compare_environments` | Analytics | Requires `flags_unified_view` |
| `get_flag_history` | Analytics | Requires unified views |
| `get_optimization_analysis` | Analytics | Requires unified views |
| `get_recommendations` | Analytics | Requires unified views |
| `get_results` | Analytics | Requires experiments views |
| `export_data` | Export | Analytics mode uses views |
| `get_project_data` | Discovery | Aggregates from unified views |
| `get_openapi_reference` | Documentation | Examples use unified views |
| `get_optimizely_api_reference` | Documentation | Examples use unified views |
| `get_tool_reference` | Documentation | Examples use unified views |
| `get_entity_templates` | Templates | Examples use unified views |
| `validate_template` | Templates | Validation uses unified views |
Direct Failures (2 tools) - Memory & Data Issues
| Tool | Status | Error Type | Issue |
|---|---|---|---|
| `get_entity_details` | ❌ CRITICAL | Memory Leak | CONFIRMED: 4GB+ heap crash |
| `analyze_data` | ❌ CRITICAL | Memory Leak | CONFIRMED: OOM on large queries (e.g. `change_history_flat`); output: "Canceled: Canceled" |
API/Implementation Issues (5 tools)
| Tool | Status | Error Type | Issue |
|---|---|---|---|
| `archive_flags_bulk` | ❌ | API 400 | Bad request format |
| `get_flag_entities` | ❌ | API 404 | Endpoint doesn't exist |
| `update_flags_bulk` | ❌ | API 404 | Endpoint doesn't exist |
| `update_ruleset` | ❌ | JSON Patch | Format validation errors |
| `manage_entity_lifecycle` | Partial | - | Some operations not supported |
Cascade Failure Diagram:
```mermaid
graph TD
    A[analyze_data ❌<br/>Unified Views Empty] --> B[Cache/Migration Pipeline ❌]
    B --> C[flags_unified_view ❌]
    B --> D[experiments_unified_view ❌]
    B --> E[audiences_flat ❌]
    B --> F[pages_flat ❌]
    C --> G[compare_environments ❌]
    C --> H[get_flag_history ❌]
    C --> I[get_optimization_analysis ❌]
    D --> J[get_results ❌]
    D --> K[get_recommendations ❌]
    E --> L[export_data ❌<br/>Analytics Mode]
    F --> L
    M[list_entities ✅<br/>Raw API] --> N[Some Basic Operations ✅]
    O[get_entity_details ❌<br/>Memory Leak] --> P[Server Crash]
    style A fill:#f8d7da,stroke:#721c24,color:#000
    style B fill:#f8d7da,stroke:#721c24,color:#000
    style P fill:#dc3545,stroke:#721c24,color:#fff
    style N fill:#d4edda,stroke:#155724,color:#000
```
### Test Environment

| Component | Version/Details |
|---|---|
| MCP Server | @optimizely/mcp-server-exp@1.0.7 |
| VS Code | 1.104.1 (Universal) - Electron 37.3.1 |
| Node.js | v24.8.0 (also tested on v23.7.0) |
| OS | macOS 15.0.0 (Darwin Kernel 25.0.0) ARM64 |
| Project Types | Feature Experimentation (native vs migrated) |
| Test Projects | Native: 6495345115398144, Migrated: 18423480104 |
| Environment | Clean VS Code instance, no other MCP servers |
### Steps to Reproduce

1. Configure the MCP server for the migrated project 18423480104 ("FX: SumUp Web")
2. Execute a cache refresh:

   ```js
   manage_cache({
     operation: 'refresh',
     options: { force: true, wait_for_completion: false },
   });
   ```

3. Test the analysis tools:

   ```js
   // This fails on migrated projects
   analyze_data({ from: 'flags_unified_view', project_id: '18423480104' });

   // This works (bypasses query system)
   list_entities({ entity_type: 'flag', project_id: '18423480104' });
   ```
Observed Issues:
- Cache refresh blocks for ~2 minutes (expected: immediate return)
- Cache reports success but analytics views remain empty
- Database contains 160 entities in the `features` table, 0 in the `flags` table
- `analyze_data` operations fail with "NoDataFound" despite a successful cache refresh
Cache Success vs Query Failure Pattern:
```
# Step 1: Cache reports success
manage_cache(refresh) → ✅ "Synced 908 flags, 160 entities successfully"

# Step 2: Direct SQLite shows data exists
sqlite3 ~/.optimizely-mcp/cache.db
SELECT COUNT(*) FROM features;  → 160 (data exists)
SELECT COUNT(*) FROM flags;     → 0 (wrong table for queries)

# Step 3: Analytics queries fail
analyze_data(flags_unified_view)       → ❌ "NoDataFound"
analyze_data(experiments_unified_view) → ❌ "NoDataFound"

# Step 4: Direct entity access works
list_entities(flag) → ✅ Returns 908 flags (bypasses broken views)
```