Anticipated Question: "Does my local Jira data get sent to OpenAI/Anthropic? How does the IDE securely talk to our internal networks?"
Talking Points:
Local Execution: MCP servers run locally (via uv, npx, or podman). They act as proxies that stay inside Red Hat's VPN/Intranet, so internal traffic never leaves the network.
Cloud LLM Boundary: The LLM does not have direct access to internal APIs. It only receives the specific schemas of available tools.
Just-in-Time Context: When the LLM decides it needs data, it asks Cursor to execute the local tool. Cursor runs the tool, grabs the output, and sends only that result back to the LLM context window.
sequenceDiagram
autonumber
participant LLM as Claude 3.5 Sonnet / GPT-4o
participant IDE as Cursor (Local)
participant MCP as MCP Server (Local Process)
participant API as Red Hat Internal (Jira/Slack)
Note over IDE, MCP: 1. Initialization Phase
IDE->>MCP: Initialize connection (stdio or SSE)
MCP-->>IDE: Return available Tool Schemas (e.g., search_messages)
Note over LLM, API: 2. The Agentic Loop
IDE->>LLM: User Prompt + System Context + Tool Schemas
LLM-->>IDE: Tool Call Request: search_messages(query="RHOAI 3.3")
IDE->>MCP: Execute Tool (JSON-RPC Call)
MCP->>API: HTTP API Request (using injected PAT/Tokens)
API-->>MCP: JSON Response (Tickets/Threads)
MCP-->>IDE: Formatted Tool Result
IDE->>LLM: Inject Tool Result into Conversation Context
LLM-->>IDE: Final synthesized response / Code generation
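Steps 3–6 of the loop reduce to a pair of JSON-RPC 2.0 messages. The sketch below (stdlib only) shows roughly what Cursor writes to the MCP server and how the tool result is extracted before being injected into the LLM's context; the field values are illustrative, though `tools/call` and the `result.content` shape follow the MCP specification.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a tools/call request, roughly as Cursor sends it to the MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

def extract_result(raw: str) -> str:
    """Pull the text content out of a tool result before it enters the
    LLM's context window (step 6 of the diagram)."""
    msg = json.loads(raw)
    return "\n".join(
        c["text"] for c in msg["result"]["content"] if c["type"] == "text"
    )

request = make_tool_call(1, "search_messages", {"query": "RHOAI 3.3"})

# A response the local MCP server might write back:
response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 threads found"}]},
})
print(extract_result(response))  # -> 3 threads found
```

Note that only the extracted text ever reaches the cloud LLM; the raw API payload stays local.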
Anticipated Question: "How are the IDE and the MCP server actually communicating? Is it a REST API?"
Talking Points:
MCP relies on JSON-RPC 2.0.
stdio (Standard I/O): The most common transport for local execution. Cursor spawns the Python/Node process and communicates directly over stdin/stdout. Highly secure; no network ports are opened.
SSE (Server-Sent Events): Used for remote/gateway servers. Operates over HTTP, allowing a centralized MCP server (like the IBM Context Forge mentioned in the setup) to serve multiple clients.
graph TD
subgraph Cursor Client
A[MCP Client Service]
end
subgraph Transport Layer
direction LR
B{Transport Type}
B -->|stdio| C[Standard I/O Pipes]
B -->|SSE| D[HTTP Server-Sent Events]
end
subgraph MCP Server Environment
E[MCP SDK / FastMCP]
F[Tool Implementations]
G[Resource Exporters]
E --> F
E --> G
end
A <--> B
C <--> E
D <--> E
style B fill:#2b2b2b,stroke:#00a8cc,stroke-width:2px,color:#fff
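To make the stdio transport concrete: the client writes one JSON-RPC message per line to the spawned process's stdin and reads replies from its stdout. A real server would use the MCP SDK / FastMCP, but this toy dispatcher (stdlib only, with a made-up `search_messages` tool entry) shows what the transport itself amounts to:

```python
import json
import sys

# Illustrative tool listing; a real server derives this from its registered tools.
TOOLS = [{"name": "search_messages", "description": "Search Slack threads"}]

def handle_line(line: str) -> str:
    """Dispatch one newline-delimited JSON-RPC message and build the reply."""
    msg = json.loads(line)
    if msg["method"] == "tools/list":
        result = {"tools": TOOLS}
    else:
        result = {"error": f"unknown method {msg['method']}"}
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": result})

def serve() -> None:
    """What a spawned MCP process does: block on stdin, reply on stdout.
    No socket is ever opened."""
    for line in sys.stdin:
        sys.stdout.write(handle_line(line) + "\n")
        sys.stdout.flush()

reply = handle_line('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}')
print(reply)
```

Because the channel is just the process's own pipes, access control collapses to "whoever spawned the process", which is why stdio is the default for local, single-user setups.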
Anticipated Question: "You mentioned putting tokens in plaintext config files or exfiltrating browser cookies. How do we do this securely?"
Talking Points:
We've mapped out three tiers of secret management for MCPs.
Tier 1 (Dev/Hack): Tokens hardcoded in .cursorrules or workspace settings. High risk of accidental git commits.
Tier 2 (OS-Native): The jira-touchid.swift or secret-tool approach shown in the guide. The MCP spawn script blocks and prompts the OS secret store (TouchID's Secure Enclave / GNOME Keyring) just-in-time.
Tier 3 (Enterprise): The MCP Gateway (mcp-context-forge). Secrets never live on the developer's laptop directly; the gateway handles authentication and rate limiting.
graph LR
subgraph tier1 ["Tier 1: Plaintext (Not Recommended)"]
A[cursor settings.json] -->|Injects| B(MCP Process)
end
subgraph tier2 ["Tier 2: Secure OS Enclave (Current Best Local)"]
C[Swift/Bash Wrapper] -->|Prompts| D{TouchID / Keyring}
D -->|Valid| E[Extracts Secret to ENV]
E -->|Spawns| F(MCP Process via uvx/podman)
end
subgraph tier3 ["Tier 3: Gateway (Enterprise Ready)"]
G[Cursor] -->|SSE| H[MCP Context Forge Gateway]
H -->|Vault Integration| I(Jira/Slack APIs)
end
style D fill:#f96,stroke:#333,stroke-width:2px
style H fill:#9cf,stroke:#333,stroke-width:2px
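A Tier 2 wrapper is small: look the token up in the keyring just-in-time, inject it into the child's environment, and spawn the MCP process. This is a sketch, not the guide's actual script; the `secret-tool` attributes (`service`, `jira-mcp`) and the `JIRA_PERSONAL_TOKEN` variable name are assumptions to adapt to your setup.

```python
import os
import subprocess

def keyring_lookup(attr: str, value: str) -> str:
    """Prompt GNOME Keyring via libsecret's secret-tool CLI.
    Attribute names here are whatever you stored the secret under."""
    return subprocess.check_output(
        ["secret-tool", "lookup", attr, value], text=True
    ).strip()

def spawn_mcp(token: str, run=subprocess.run) -> None:
    """Inject the secret into the child env only; it never touches disk.
    `run` is injectable so the function can be exercised without uvx."""
    env = {**os.environ, "JIRA_PERSONAL_TOKEN": token}
    run(["uvx", "mcp-atlassian"], env=env, check=True)

# Usage (triggers the keyring prompt, then starts the server):
# spawn_mcp(keyring_lookup("service", "jira-mcp"))
```

The key property is that the plaintext token exists only in the memory of the wrapper and the child process, never in a config file.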
Anticipated Question: "Why are we scraping cookies for Slack instead of just creating a Bot Token?"
Talking Points:
Red Hat IT has strict reviews for standard xoxb- (Bot) tokens. Approval can take weeks and requires specific organizational scopes.
For personal agentic productivity, we instead reuse the user's existing browser session tokens.
xoxc- (Web Token): Found in localStorage. This is the client-side API token used by the Slack Web App.
xoxd- (Cookie Token): The d cookie. This acts as the session authenticator. Together with xoxc-, they mimic a legitimate browser session.
Warning: Treat these like radioactive material. If compromised, an attacker has full impersonation rights to your Slack account.
flowchart TD
subgraph Slack Web Application
A[Login / SSO] --> B[Set 'd' Cookie]
A --> C[Generate xoxc- Token in localStorage]
end
B -.->|Exfiltrate via DevTools| D[SLACK_XOXD_TOKEN]
C -.->|Exfiltrate via DevTools| E[SLACK_XOXC_TOKEN]
D --> F{slack_mcp_server.py}
E --> F
F -->|Make Request| G[HTTPX Async Client]
G -->|"Headers: Bearer xoxc-...<br/>Cookies: d=xoxd-..."| H(("Slack API<br/>/search.messages<br/>/chat.postMessage"))
style H fill:#4A154B,color:#fff
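The two tokens ride on every request in different places: xoxc- as a Bearer header, xoxd- as the `d` cookie. A pure-stdlib sketch of how `slack_mcp_server.py` would assemble them (the helper name is ours; in the real server these dicts would be handed to an `httpx.AsyncClient`):

```python
def build_slack_auth(xoxc: str, xoxd: str) -> tuple[dict, dict]:
    """Pair the web token (header) with the session cookie so the request
    looks like a legitimate browser session."""
    if not (xoxc.startswith("xoxc-") and xoxd.startswith("xoxd-")):
        raise ValueError("expected an xoxc-/xoxd- token pair")
    headers = {"Authorization": f"Bearer {xoxc}"}
    cookies = {"d": xoxd}
    return headers, cookies

headers, cookies = build_slack_auth("xoxc-123", "xoxd-456")
# e.g. httpx.AsyncClient(headers=headers, cookies=cookies)
#      ... .get("https://slack.com/api/search.messages", params={"query": "..."})
```

Neither token alone is sufficient, which is exactly why both must be guarded: together they are a full session, not a scoped API key.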
Anticipated Question: "How does Cursor know to search Jira and then Slack to write a report?"
Talking Points:
Cursor's Agent Mode runs a specialized ReAct (Reason + Act) loop.
Unlike standard Chat, it executes tools sequentially, interpreting the output of one MCP tool to parameterize the next.
Example Execution:
1. Action: Get Jira ticket (RHOAIENG-43037).
2. Observation: Ticket mentions "Authentication bug".
3. Action: Search Slack for "RHOAIENG-43037 Authentication bug" in the last 10 days.
4. Observation: Finds the thread resolving the issue.
5. Action: Output the final HTML report.
stateDiagram-v2
[*] --> Understand_Prompt
Understand_Prompt --> Evaluate_Available_Tools: User asks for 3.3 Status
Evaluate_Available_Tools --> Call_Jira_MCP: mcp-atlassian (jql="project=RHOAI")
Call_Jira_MCP --> Read_Jira_Response
Read_Jira_Response --> Decide_Next_Step: "I need more context from developers"
Decide_Next_Step --> Call_Slack_MCP: slack-mcp (query="RHOAI 3.3 release")
Call_Slack_MCP --> Read_Slack_Response
Read_Slack_Response --> File_System: Write RHOAI_3.3_Status.html
File_System --> [*]: "Done"
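The state machine above is, at its core, a loop: reason over the last observation, pick an action, execute it, observe, repeat until done. A minimal sketch with a stub policy standing in for the LLM (all tool names and strings are illustrative, mirroring the diagram):

```python
def react_loop(tools: dict, policy, prompt: str, max_steps: int = 5) -> str:
    """Generic ReAct driver: policy = Reason, tools[action] = Act."""
    observation = prompt
    for _ in range(max_steps):
        action, args = policy(observation)   # Reason
        if action == "finish":
            return args
        observation = tools[action](args)    # Act, then observe
    return observation

def stub_policy(observation: str):
    """Stand-in for the LLM's decision step, keyed off the diagram's flow."""
    if "3.3 Status" in observation:
        return "jira_search", "project=RHOAI"
    if "Authentication bug" in observation:
        return "slack_search", "RHOAI 3.3 release"
    return "finish", "RHOAI_3.3_Status.html written"

tools = {
    "jira_search": lambda q: "RHOAIENG-43037: Authentication bug",
    "slack_search": lambda q: "thread: fix merged, releasing in 3.3",
}
print(react_loop(tools, stub_policy, "User asks for 3.3 Status"))
# -> RHOAI_3.3_Status.html written
```

In Cursor's Agent Mode the policy is the cloud LLM and each tool is an MCP call, but the control flow is this same loop.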