# Cursor + GPT models: "Provider Error" caused by overly complex MCP tool JSON schemas (OpenAI nesting depth limit)
When using Cursor with MCP (Model Context Protocol) servers, GPT models (`gpt-5.3-codex`, `gpt-5.2-codex`, etc.) return a generic "Provider Error" that gives no actionable information:

```
Provider Error: We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.
```
Root cause: OpenAI's API rejects requests when any tool's JSON schema exceeds its nesting depth limit (approximately 5 levels). MCP servers that auto-generate schemas from deeply recursive data structures can produce schemas with 20+ levels of nesting, which silently causes this error for all requests — even simple ones like "say hello" — as long as those tools are loaded.
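To illustrate how quickly recursion blows past the limit, here is a small sketch (the names `expand_tree_schema` and `schema_depth` are illustrative, not from any particular MCP server) that inline-expands a self-referential "tree node" type the way a naive schema generator would, then measures the resulting nesting depth:

```python
def expand_tree_schema(depth):
    """Inline-expand a self-referential 'tree node' type, as naive
    schema generators do instead of emitting a $ref."""
    node = {"type": "object", "properties": {"value": {"type": "string"}}}
    if depth > 0:
        node["properties"]["children"] = {
            "type": "array",
            "items": expand_tree_schema(depth - 1),
        }
    return node

def schema_depth(obj, d=0):
    # Nesting depth of the raw JSON structure (dicts and lists)
    if isinstance(obj, dict):
        return max([schema_depth(v, d + 1) for v in obj.values()], default=d)
    if isinstance(obj, list):
        return max([schema_depth(v, d + 1) for v in obj], default=d)
    return d

# A tree type only 6 levels deep already yields a schema nested 21 levels,
# because each level adds a properties -> children -> items chain
print(schema_depth(expand_tree_schema(6)))  # -> 21
```

Each level of the data model adds three levels of schema structure, which is why auto-generated schemas so easily exceed a single-digit limit.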
To reproduce, you need:

- Cursor CLI (`cursor-agent`)
- A GPT model (e.g. `gpt-5.3-codex`)
- Any MCP server that serves a tool with a deeply nested JSON schema
```bash
# 1. Pull a Debian image and install Node/Python
docker run -it --name cursor-repro debian:bookworm bash
apt-get update && apt-get install -y python3 nodejs curl

# 2. Copy your Cursor auth
docker cp ~/.config/cursor/auth.json cursor-repro:/root/.config/cursor/auth.json
docker cp ~/.cursor/mcp.json cursor-repro:/root/.cursor/mcp.json
# Copy the project's .cursor dir (contains mcp-approvals.json and mcp-cache.json)
docker cp ~/.cursor/projects/<your-project>/ cursor-repro:/root/.cursor/projects/<your-project>/

# 3. Install the cursor-agent binary (get its path from: which cursor-agent, or: find ~/.local -name cursor-agent)
# Copy the binary and its node runtime to the same paths inside Docker

# 4. Create a mock MCP server with a deeply nested schema
cat > /tmp/deep-schema-mcp.py << 'PYEOF'
#!/usr/bin/env python3
import sys, json

# Build a schema with 20 levels of nesting
def make_nested(depth):
    if depth == 0:
        return {"type": "string"}
    return {"type": "object", "properties": {"child": make_nested(depth - 1)}}

TOOLS = [{
    "name": "complex_tool",
    "description": "A tool with a deeply nested schema",
    "inputSchema": {
        "type": "object",
        "properties": {"data": make_nested(20)},
    }
}]

for line in sys.stdin:
    req = json.loads(line.strip())
    method, req_id = req.get("method"), req.get("id")
    if method == "initialize":
        resp = {"jsonrpc": "2.0", "id": req_id, "result": {"protocolVersion": "2024-11-05", "capabilities": {"tools": {}}, "serverInfo": {"name": "deep-schema", "version": "1.0.0"}}}
    elif method == "notifications/initialized":
        continue
    elif method == "tools/list":
        resp = {"jsonrpc": "2.0", "id": req_id, "result": {"tools": TOOLS}}
    else:
        resp = {"jsonrpc": "2.0", "id": req_id, "result": {}}
    print(json.dumps(resp), flush=True)
PYEOF
chmod +x /tmp/deep-schema-mcp.py
```
```bash
# 5. Configure the MCP server in ~/.cursor/mcp.json
cat > /root/.cursor/mcp.json << 'MCPEOF'
{
  "mcpServers": {
    "deep-schema": {
      "type": "stdio",
      "command": "python3",
      "args": ["/tmp/deep-schema-mcp.py"]
    }
  }
}
MCPEOF
```
```bash
# 6. Compute the approval fingerprint so the server is pre-approved (no interactive prompt)
node -e "
const crypto = require('crypto');
const cwd = '/your/project/path';
// NOTE: Cursor's Zod schema strips the 'type' field before fingerprinting!
const config = {command: 'python3', args: ['/tmp/deep-schema-mcp.py']};
const s = {path: cwd, server: config};
const h = crypto.createHash('sha256').update(JSON.stringify(s)).digest('hex').substring(0, 16);
console.log(JSON.stringify(['deep-schema-' + h]));
"
# Put the output into the project's mcp-approvals.json
```
```bash
# 7. Run with GPT and observe the Provider Error
cursor-agent --api-key <your-key> --model gpt-5.3-codex -p "say hello"
# Output: "Provider Error: We're having trouble connecting..."

# 8. Fix: reduce schema nesting depth to <= 5 levels
# Update the mock to use a flat schema and re-run: it works.
```

Cursor computes an approval fingerprint for each MCP server:
```js
// From cursor-agent index.js (minified)
function l(serverName, serverConfig, cwd) {
  const s = { path: cwd, server: serverConfig };
  return `${serverName}-${crypto.createHash("sha256")
    .update(JSON.stringify(s))
    .digest("hex")
    .substring(0, 16)}`;
}
```

Critical: the `serverConfig` object is parsed through a Zod schema that strips the `type` field (`"type": "stdio"` / `"type": "http"`). So when computing fingerprints manually, omit `type`:
```js
// WRONG - includes the type field; the fingerprint won't match
const config = { type: "stdio", command: "python3", args: [...] };

// CORRECT - type is stripped by Zod before fingerprinting
const config = { command: "python3", args: [...] };
```

Cursor catches the OpenAI API error and maps it to the generic "Provider Error" message regardless of the underlying cause. The actual error from OpenAI is something like "Invalid schema: maximum nesting depth exceeded."
All OpenAI-backed models fail (`gpt-5.3-codex`, `gpt-5.2-codex`, `gpt-4o`, etc.). Claude and Gemini models appear more tolerant of deep schemas and may not fail.
To narrow down the cause:

- Check if the error happens even for trivial prompts ("say hello")
- Try with zero approved MCPs (`[]` in mcp-approvals.json); if that works, an MCP is the cause
- Binary search: approve MCPs one at a time to find the culprit
- For each suspicious MCP, check the maximum schema nesting depth:
```python
import json
import os

def max_depth(obj, d=0):
    if isinstance(obj, dict):
        return max([max_depth(v, d + 1) for v in obj.values()], default=d)
    if isinstance(obj, list):
        return max([max_depth(v, d + 1) for v in obj], default=d)
    return d

# open() does not expand '~', so expand it explicitly
path = os.path.expanduser('~/.cursor/projects/<project>/mcp-cache.json')
cache = json.load(open(path))
for server, data in cache.items():
    for tool in data.get('tools', []):
        depth = max_depth(tool.get('inputSchema', {}))
        if depth > 5:
            print(f"{server}/{tool['name']}: depth={depth}, "
                  f"size={len(json.dumps(tool['inputSchema']))} bytes")
```

Flatten the problematic tool schemas to <= 5 nesting levels. For schemas that represent complex nested objects, use `type: object` without deep property nesting; the LLM can be guided by documentation/resources rather than schema structure.
Real-world example: SigNoz `signoz-mcp-server` PR #60, where `signoz_create_dashboard` had 20 levels of nesting (20,461 bytes) and was flattened to 3 levels (780 bytes).
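The flattening pattern looks like this (hypothetical field names, loosely modeled on a dashboard tool): keep the shallow top-level shape, and collapse deep subtrees into a permissive object whose description points the model at the expected structure:

```json
{
  "type": "object",
  "properties": {
    "title": { "type": "string" },
    "panels": {
      "type": "array",
      "items": {
        "type": "object",
        "description": "Panel object; the expected nested layout is documented in the dashboard JSON docs rather than encoded in the schema."
      }
    }
  }
}
```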
It would help if Cursor:
- Detected schema nesting depth issues before sending to the provider and warned the user
- Surfaced the underlying API error message instead of the generic "Provider Error"
- Indicated which MCP server/tool is causing the rejection