Used during `generate_skill` BFS/DFS exploration for component classification.
- Code: `SamplingBridge.swift` / `SamplingClassifier`
- How it works: sends `sampling/createMessage` to the MCP client, which forwards it to whatever LLM the client is using (e.g. Claude via Claude Code)
- No API keys needed — the client's own LLM handles it
- Config: `componentDetection` in `settings.json` controls the mode:
  - `heuristic` — no LLM, `component.md` match rules only
  - `llm_first_screen` (default) — LLM classifies the first screen, heuristics for the rest
  - `llm_every_screen` — LLM classifies every new screen
  - `llm_fallback` — heuristics first, LLM when no confident match
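For example, selecting a mode in `settings.json` might look like the fragment below (a sketch assuming `componentDetection` is a top-level string key; the surrounding file will contain other settings):

```json
{
  "componentDetection": "llm_fallback"
}
```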
Used by `mirroir-mcp test --agent <model> skill.yaml` for diagnosing compiled test failures.
- Code: `AIAgentProvider.swift`, `OpenAIProvider.swift`, `OllamaProvider.swift`, `AnthropicProvider.swift`, `CommandProvider.swift`
- How it works: direct HTTP calls to external LLM APIs (OpenAI, Ollama, Anthropic) or shell commands
- Requires API keys — needs `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or an Ollama endpoint
- Added: Feb 19, 2026 (commit `1a777eb`)
- Tests: 34 unit tests in `AIAgentProviderTests.swift` (480 lines) — all mock-based
- Real-device testing: never done
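The per-backend provider files above suggest a shared abstraction. A hypothetical sketch of its shape (all names and signatures here are assumptions — the real `AIAgentProvider.swift` likely uses async methods and richer failure context):

```swift
import Foundation

// Hypothetical provider abstraction; the shipped AIAgentProvider.swift
// may use different names, async methods, and richer request types.
protocol AIAgentProvider {
    var name: String { get }
    // Sends the compiled-test failure context to the backing model
    // (HTTP API or shell command) and returns its diagnosis text.
    func diagnose(_ prompt: String) throws -> String
}

// Stand-in provider with no network access, in the spirit of the
// mock-based unit tests mentioned above.
struct MockProvider: AIAgentProvider {
    let name = "mock"
    func diagnose(_ prompt: String) throws -> String {
        "diagnosis for: \(prompt)"
    }
}
```

Each concrete provider (`OpenAIProvider`, `OllamaProvider`, etc.) would then own its endpoint, API key lookup, and timeout behind this one interface.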
These timeout/token settings are only for the CLI agent diagnosis system:
| Key | Default | Purpose |
|---|---|---|
| `openAITimeoutSeconds` | 30 | OpenAI API request timeout |
| `ollamaTimeoutSeconds` | 120 | Ollama local model timeout |
| `anthropicTimeoutSeconds` | 30 | Anthropic API request timeout |
| `commandTimeoutSeconds` | 60 | Command-based agent process timeout |
| `defaultAIMaxTokens` | 1024 | Max tokens for AI model responses |
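Overriding a default would be a matter of setting the key in `settings.json` — a sketch, assuming these keys sit at the top level of the file:

```json
{
  "ollamaTimeoutSeconds": 300,
  "defaultAIMaxTokens": 2048
}
```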
```
mirroir-mcp test --agent skill.yaml                    # deterministic diagnosis (no LLM)
mirroir-mcp test --agent claude-sonnet-4-6 skill.yaml  # AI diagnosis via Anthropic
mirroir-mcp test --agent gpt-4o skill.yaml             # AI diagnosis via OpenAI
```

- Real-device test of `test --agent` with an actual API key
- Evaluate whether the AIAgentProvider system is worth keeping or should be simplified
- Consider whether the config dump should hide these when no AI provider is configured