You are analyzing documentation to identify all content that can be validated through testing. Your goal is to find every section containing factual claims, executable instructions, or verifiable information.
File: {filePath} Session: {sessionId}
Most technical documentation is testable through two validation approaches:
- Functional Testing: Execute instructions and verify they work
- Factual Verification: Compare claims against actual system state
Functional testing targets:
- Commands & Scripts: Shell commands, CLI tools, code snippets, scripts
- Workflows & Procedures: Step-by-step instructions, installation guides, setup procedures
- API & Network Operations: REST calls, database queries, connectivity tests
- File & System Operations: File creation, directory structures, permission changes
- Configuration Examples: Config files, environment variables, system settings
Factual verification targets:
- Architecture Descriptions: System components, interfaces, data flows
- Implementation Status: What's implemented vs planned, feature availability
- File Structure Claims: File/directory existence, code organization, module descriptions
- Component Descriptions: What each part does, how components interact
- Capability Claims: Supported features, available commands, system abilities
- Version & Compatibility Info: Software versions, platform support, dependencies
References to check:
- External URLs: Web links, API endpoints, documentation references
- Internal References: File paths, code references, documentation cross-links
- Resource References: Images, downloads, repositories, configuration files
Examples to validate:
- Code Examples: Function usage, API calls, configuration samples
- Sample Outputs: Expected results, error messages, status displays
- Use Case Scenarios: Workflow examples, integration patterns
Treat as testable:
- Any factual claim that can be verified against system state
- Any instruction that can be executed or followed
- Any reference that can be checked for existence or accessibility
- Any example that can be validated for correctness
- Any workflow that can be tested end-to-end
- Any status claim that can be fact-checked (implemented vs planned)
- Any architectural description that can be compared to actual code
Treat as non-testable:
- Pure marketing copy with no factual claims
- Abstract theory with no concrete implementation details
- General philosophy without specific claims
- Legal text (licenses, terms, copyright)
- Pure acknowledgments without technical content
- Speculative future plans with no current implementation claims
Testable examples:
- "The CLI has a `recommend` command" → Can verify the command exists
- "Files are stored in `src/core/discovery.ts`" → Can check the file exists
- "The system supports Kubernetes CRDs" → Can test CRD discovery
- "Run `npm install` to install dependencies" → Can execute the command
- "The API returns JSON format" → Can verify the API response format
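The testable examples above reduce to three recurring checks. A minimal sketch of each, using stand-in targets (`ls` and the current directory) rather than the document's hypothetical CLI and file paths:

```python
import json
import os.path
import shutil

# 1. "The CLI has a <name> command" -> verify the command exists on PATH.
#    "ls" stands in for the hypothetical `recommend` command.
assert shutil.which("ls") is not None

# 2. "Files are stored in <path>" -> verify the path exists.
#    The current directory stands in for `src/core/discovery.ts`.
assert os.path.exists(os.curdir)

# 3. "The API returns JSON format" -> verify a response body parses as JSON.
body = '{"status": "ok"}'
assert isinstance(json.loads(body), dict)

print("all three claim types verified")
```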
Non-testable examples:
- "This tool helps developers be more productive" → Subjective claim
- "Kubernetes is a container orchestration platform" → General background info
- "We believe in developer-first experiences" → Philosophy statement
- "Thanks to all contributors" → Acknowledgment
- "The future of DevOps is bright" → Speculative statement
Document structure analysis:
- Find structural markers: Headers (##, ###, ####), horizontal rules, clear topic boundaries
- Identify section purposes: Installation, Configuration, Usage, Troubleshooting, Examples, etc.
- Map content types: What kinds of testable content exist in each section
- Trace dependencies: Which sections must be completed before others can be tested
- Assess completeness: Are there gaps or missing steps within sections
For each identified section, determine:
- Primary purpose: What is this section trying to help users accomplish?
- Testable elements: What specific items can be validated within this context?
- Prerequisites: What must be done first for this section to work?
- Success criteria: How would you know if following this section succeeded?
- Environmental context: What platform, tools, or setup does this assume?
Validation approaches per section:
- Functional validation: Do the instructions work as written?
- Reference validation: Do links, files, and resources exist and are accessible?
- Configuration validation: Are config examples syntactically correct and complete?
- Prerequisite validation: Are system requirements and dependencies clearly testable?
- Outcome validation: Do procedures achieve their stated goals?
Your job is simple: identify the logical sections of the documentation that contain testable content.
Focus on:
- Major headings that represent distinct topics or workflows
- Sections that contain instructions, commands, examples, or references
- Skip purely descriptive sections (marketing copy, background info, acknowledgments)
- Don't inventory specific testable items (that's done later per-section)
- Don't worry about line numbers (they change when docs are edited)
- Don't analyze dependencies (we test sections top-to-bottom in document order)
Return a JSON object containing the array of section titles that should be tested:

```json
{
  "sections": [
    "Prerequisites",
    "Installation",
    "Configuration",
    "Usage Examples",
    "Troubleshooting"
  ]
}
```

- Use the actual section titles from the document (or close variations)
- List them in document order (top-to-bottom)
- Include only sections that have actionable/testable content
- Keep titles concise but descriptive
- Aim for 3-8 sections for most documents
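As a sanity check on the expected reply shape, a consumer could validate it with a short script (a sketch only; the titles are the illustrative ones from the example above):

```python
import json

# Example reply in the required shape.
reply = '{"sections": ["Prerequisites", "Installation", "Configuration"]}'

data = json.loads(reply)
# The reply must be an object whose "sections" value is a list of strings.
assert isinstance(data["sections"], list)
assert all(isinstance(title, str) for title in data["sections"])
print(data["sections"])  # ['Prerequisites', 'Installation', 'Configuration']
```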
Read {filePath} and identify the logical sections that contain testable content. Return only the JSON object described above - nothing more.