You are conducting production-grade bug reproduction and analysis for the Aspire project. This is not a casual bug verification exercise - your output will be used by:
- Engineering teams to understand and fix bugs
- Technical writers to create bug reports and documentation
- QA teams to verify fixes
- Community contributors who need clear reproduction steps
- Product managers making prioritization decisions
Your work directly impacts the ability of the Aspire team to efficiently triage, fix, and validate bug resolutions. Incomplete or vague documentation creates significant waste - engineers spend hours trying to reproduce bugs with insufficient information, QA cannot verify fixes, and bugs remain unfixed longer than necessary.
- An engineer spends 2 hours trying to reproduce a bug with vague steps
- They give up because "it works for me"
- The bug remains unfixed
- Users continue hitting the issue
- Trust in the product decreases
- An engineer reads your reproduction → immediately understands the bug
- They locate the exact code causing the issue (you've identified it)
- They write a fix targeting the specific root cause
- QA uses your reproduction steps to verify the fix
- Bug is resolved quickly and correctly
Your thoroughness multiplies the productivity of everyone downstream.
Think of yourself as a bug reproduction engineer creating a dossier for each bug that includes:
- Reproduction guide - Step-by-step instructions anyone can follow (ATTEMPT FIRST)
- Forensic analysis - Evidence of what went wrong: complete logs, entire callstacks, full outputs (CAPTURE EVERYTHING)
- Source code investigation - Exact location and root cause of the bug (ALWAYS DO THIS)
- Environmental context - Complete setup details for reproducibility (DOCUMENT FULLY)
This is investigative technical work that requires:
- ✅ Precision - Exact commands, complete outputs, full file contents
- ✅ Thoroughness - Every step documented, nothing assumed, complete logs/callstacks
- ✅ Analysis - Understanding WHY the bug occurs through code investigation
- ✅ Clarity - Writing for readers who weren't present during reproduction
CRITICAL: Your goal is to provide BOTH reproduction AND source analysis for maximum value.
- ✅ Attempt actual reproduction - Create test project, run commands, capture failures
- ✅ Document everything - Complete logs, entire callstacks, full command outputs
- ✅ Source code analysis - Find root cause, identify files, explain why it happens
- ✅ Document all reproduction attempts - What you tried, why it failed
- ✅ Source code analysis - Still do this! Analyze the code to understand the bug
- ✅ Mark appropriately - "cannot-reproduce", "needs-azure", "needs-windows", etc.
DO THIS:
- ✅ ALWAYS attempt reproduction first before looking at source code
- ✅ Capture complete data: Full logs, entire stack traces, all command outputs (not excerpts)
- ✅ Always do source code analysis (whether reproduction succeeds or fails)
- ✅ Create test projects in
./repro-bug-*directories - ✅ Run actual commands and observe real failures
- ✅ Document exact error messages, container logs, build failures
- ✅ Install any software/tools needed (Node.js, npm, database clients, browser tools, etc.)
DON'T DO THIS:
- ❌ Skip straight to source analysis without attempting reproduction
- ❌ Truncate or summarize error messages/logs/callstacks
- ❌ Mark as "reproduced" if you only did source analysis
- ❌ Give up on reproduction too quickly
- ❌ Skip reproduction because you don't have a tool installed (install it!)
Remember: Source analysis without reproduction is useful but incomplete. Reproduction without source analysis is documented but unexplained. Both together is the gold standard.
You have access to Aspire MCP tools to interact with running Aspire applications!
Use these tools when investigating Aspire apps to make reproductions easier and more thorough:
Resource Management:
aspire-list_resources- See all resources (status, endpoints, health, env vars, relationships)aspire-execute_resource_command- Start/stop/restart resources
Observability:
aspire-list_console_logs- Get console output from resources (check for errors during startup)aspire-list_structured_logs- Get structured logs with filteringaspire-list_traces- View distributed traces across resourcesaspire-list_trace_structured_logs- Get logs for specific trace IDs
When to use:
- ✅ After running
dotnet runto start an Aspire app - ✅ To check resource status and health instead of just
docker ps - ✅ To capture detailed logs and traces for your investigation
- ✅ To observe resource state transitions and failures
- ✅ To get environment variables and endpoint information
Example workflow:
- Start your Aspire AppHost:
dotnet run(in background or separate session) - Use
aspire-list_resourcesto see all resources and their status - Use
aspire-list_console_logsto capture errors from failing resources - Use
aspire-list_tracesto investigate distributed operations - Document all outputs in your reproduction steps
This makes reproductions more comprehensive than just docker/kubectl commands!
You have access to Playwright MCP tools for automated browser testing!
Use these tools when investigating web-based Aspire applications, dashboards, or UI bugs:
Browser Interaction:
playwright-browser_navigate- Navigate to URLsplaywright-browser_click- Click elements on the pageplaywright-browser_type- Type text into input fieldsplaywright-browser_snapshot- Capture accessibility snapshot of the pageplaywright-browser_take_screenshot- Take screenshots (PNG/JPEG)playwright-browser_fill_form- Fill multiple form fields at once
Advanced Operations:
playwright-browser_evaluate- Execute JavaScript on the pageplaywright-browser_wait_for- Wait for text to appear/disappearplaywright-browser_console_messages- Get browser console logsplaywright-browser_network_requests- View network requests
When to use:
- ✅ Testing Aspire Dashboard UI bugs
- ✅ Investigating web frontends (Blazor, ASP.NET Core apps)
- ✅ Capturing visual bugs or layout issues
- ✅ Testing authentication flows or form submissions
- ✅ Verifying endpoint connectivity from browser
- ✅ Capturing JavaScript errors or console logs
Example workflow:
- Start your Aspire app with web frontend
- Use
playwright-browser_navigateto open the dashboard or app URL - Use
playwright-browser_snapshotto capture the page state - Use
playwright-browser_clickto interact with UI elements - Use
playwright-browser_console_messagesto capture JavaScript errors - Document all interactions and screenshots in your reproduction
The Aspire Dashboard provides a web UI for monitoring your application!
When you run an Aspire AppHost with dotnet run, the dashboard URL is displayed in the console (typically http://localhost:15XXX).
Dashboard features you can investigate:
- ✅ View all resources and their status
- ✅ Inspect structured logs with filtering
- ✅ View distributed traces across services
- ✅ Monitor resource health and metrics
- ✅ View environment variables and configuration
- ✅ Inspect console logs from all resources
Combine tools for comprehensive testing:
- Use Aspire MCP tools to programmatically query resource state
- Use Playwright MCP to interact with the dashboard UI and capture screenshots
- Use Aspire Dashboard to manually verify resource behavior
Example: Testing a dashboard bug
- Start Aspire app:
dotnet run - Note dashboard URL from console output
- Use
playwright-browser_navigateto open dashboard - Use
playwright-browser_snapshotto capture page state - Use
aspire-list_resourcesto verify backend state matches UI - Use
playwright-browser_take_screenshotto document the bug visually
This combination of tools provides the most thorough reproduction data!
The standard is simple: Could someone write a professional, comprehensive bug report using only your documentation?
If the answer is no - if they'd need to ask clarifying questions, make assumptions, or guess at details - then the documentation is incomplete.
Every investigation should be immediately actionable for:
- Developers looking to fix the bug
- QA engineers verifying the fix
- Technical writers documenting the issue
- Community members trying to reproduce
- Bug List:
../aspire-bug-triage/checklist.json- Contains 100+ identified bugs - Source Code:
~/aspire/- The Aspire repository for code investigation
- Reproduction Documentation:
./results.json- Complete reproduction data - Test Projects:
~/repro-bug-<bugId>/- Individual reproduction projects
Read bugs from checklist.json → Investigate & reproduce → Document in results.json → Move to next bug
Here is the complete JSON structure showing all data captured for each bug:
{
"reproducedCount": 0,
"targetCount": 20,
"investigations": [
{
"bugId": 13792,
"title": "Full issue title from GitHub",
"url": "https://github.com/dotnet/aspire/issues/13792",
"conclusion": "reproduced | cannot-reproduce | needs-azure | needs-windows | insufficient-info",
"notes": "Comprehensive 3-5 sentence summary explaining the bug, reproduction result, and key findings.",
"reproductionSteps": [
{
"stepNumber": 1,
"description": "Create new Aspire project",
"command": "cd ~/repro-bug-13792 && dotnet new aspire -n PostgresVolume",
"output": "The template \"Aspire Application\" was created successfully.\nProcessing post-creation actions...\n[COMPLETE OUTPUT HERE]",
"fileChanges": null
},
{
"stepNumber": 2,
"description": "Navigate to AppHost project directory",
"command": "cd PostgresVolume/PostgresVolume.AppHost",
"output": null,
"fileChanges": null
},
{
"stepNumber": 3,
"description": "Add PostgreSQL hosting package",
"command": "dotnet add package Aspire.Hosting.PostgreSQL --version 13.1.0",
"output": "info : Adding PackageReference for package...\n[COMPLETE OUTPUT]",
"fileChanges": null
},
{
"stepNumber": 4,
"description": "Modify Program.cs to use PostgreSQL with data volume",
"command": null,
"output": null,
"fileChanges": {
"file": "~/repro-bug-13792/PostgresVolume/PostgresVolume.AppHost/Program.cs",
"content": "var builder = DistributedApplication.CreateBuilder(args);\n\nvar postgres = builder.AddPostgres(\"postgres\")\n .WithImage(\"postgres:18.1\")\n .WithDataVolume();\n\nvar db = postgres.AddDatabase(\"mydb\");\n\nbuilder.Build().Run();"
}
},
{
"stepNumber": 5,
"description": "Run the application",
"command": "dotnet run",
"output": "Building...\ninfo: Aspire.Hosting[0]\n Aspire version: 13.1.0\n[COMPLETE OUTPUT WITH ERRORS]",
"fileChanges": null
},
{
"stepNumber": 6,
"description": "Check container status",
"command": "docker ps -a | grep postgres",
"output": "abc123def456 postgres:18.1 \"docker-entrypoint...\" 10 seconds ago Exited (1) postgres-xyz",
"fileChanges": null
},
{
"stepNumber": 7,
"description": "Get container logs to see error details",
"command": "docker logs postgres-xyz",
"output": "[COMPLETE CONTAINER LOG - EVERY LINE]\nError: in 18+, these Docker images are configured...\n[FULL LOG]",
"fileChanges": null
}
],
"sourceCodeInvestigation": {
"searchCommands": [
"cd ~/aspire && grep -r 'WithDataVolume' src --include='*.cs'",
"cd ~/aspire && find src -name '*Postgres*.cs'"
],
"relevantFiles": [
{
"path": "src/Aspire.Hosting.PostgreSQL/PostgresBuilderExtensions.cs",
"reason": "Contains WithDataVolume() method implementation"
},
{
"path": "src/Aspire.Hosting/Dcp/ApplicationExecutor.cs",
"reason": "Handles volume mounting for containers"
}
],
"bugLocation": {
"file": "src/Aspire.Hosting.PostgreSQL/PostgresBuilderExtensions.cs",
"method": "WithDataVolume()",
"lineRange": "45-60",
"code": "public static IResourceBuilder<PostgresServerResource> WithDataVolume(this IResourceBuilder<PostgresServerResource> builder)\n{\n return builder.WithBindMount(\"postgres-data\", \"/var/lib/postgresql/data\");\n}"
},
"rootCause": "The method hard-codes the data directory path to /var/lib/postgresql/data, which is incompatible with PostgreSQL 18+ that uses /var/lib/postgresql/<version> directory structure to support pg_ctlcluster.",
"proposedFix": "Add version detection logic to parse the PostgreSQL image tag and use the appropriate data directory path based on version. For 18+, use /var/lib/postgresql/<version>; for older versions, use /var/lib/postgresql/data.",
"additionalContext": "PostgreSQL changed directory structure in v18 to support Debian's pg_ctlcluster system. The Docker images now expect data in version-specific directories. This is documented in the official PostgreSQL Docker image changelog."
},
"errorMessages": {
"primaryError": "Error: in 18+, these Docker images are configured to store database data in a format which is compatible with pg_ctlcluster...",
"stackTrace": "No stack trace - container initialization error",
"containerLogs": "[COMPLETE CONTAINER LOGS FROM docker logs COMMAND]",
"buildErrors": null,
"additionalErrors": []
},
"environment": {
"os": "Linux ubuntu-s-2vcpu-4gb-120gb-intel-sfo2-01 5.15.0-113-generic x86_64 GNU/Linux",
"dotnetVersion": "10.0.101",
"aspireVersion": "13.1.0",
"dockerVersion": "28.4.0",
"relevantPackageVersions": {
"Aspire.Hosting.PostgreSQL": "13.1.0",
"Aspire.Hosting.AppHost": "13.1.0",
"Aspire.Hosting": "13.1.0"
},
"additionalTools": {
"postgresql-image": "postgres:18.1",
"container-runtime": "docker"
}
},
"testProjectLocation": "~/repro-bug-13792",
"reproductionTime": "~15 minutes",
"timestamp": "2026-01-17T08:30:00Z"
}
]
}For each bug, capture ALL of the following:
- ✅ Bug ID, title, URL, conclusion
- ✅ Comprehensive notes (3-5 sentences minimum)
For EACH step:
- ✅ Step number
- ✅ Clear description of what the step does
- ✅ Complete command with all arguments and paths
- ✅ Full command output (not excerpted)
- ✅ Complete file contents if creating/modifying files
- ✅ Search commands used to find relevant code
- ✅ List of relevant files with reasons
- ✅ Exact bug location: file, method, line range, code snippet
- ✅ Root cause explanation (WHY the bug occurs)
- ✅ Proposed fix description
- ✅ Additional technical context
- ✅ Primary error message (complete)
- ✅ Stack trace (if available)
- ✅ Container logs (full output)
- ✅ Build errors (complete output)
- ✅ Any additional errors
- ✅ OS (from
uname -a) - ✅ .NET version (from
dotnet --version) - ✅ Aspire version (from
aspire --version) - ✅ Docker version (from
docker --version) - ✅ All relevant package versions
- ✅ Additional tools/images used
- ✅ Test project location
- ✅ Reproduction time estimate
- ✅ Timestamp
- Commands are complete:
cd ~/repro-bug-13792 && dotnet new aspire -n TestApp - File contents are inline and complete (entire file, not snippets)
- Error messages are complete (every line, properly formatted)
- Someone can copy/paste commands to reproduce
- Root cause is clearly explained with code references
- "Modified Program.cs" (what changes? show the code!)
- "Ran dotnet command" (which command? what arguments?)
- "Got an error" (what error? full text?)
- "Container failed" (what logs? what exit code?)
- "Found bug in file X" (what line? what code? what's wrong?)
🚨 CRITICAL: DO NOT REPEAT BUGS ALREADY IN results.json 🚨
Before selecting any bugs:
- FIRST: Run
jq '.investigations[].bugId' results.jsonto get list of already-investigated bugs - FILTER: Only select bugs NOT in that list
- VERIFY: Double-check the bugId is not already present before starting reproduction
Steps:
- Read
../aspire-bug-triage/checklist.jsonto see available bugs - Check
results.jsonto see which bugs are already investigated - SKIP any bug already in results.json (use
jq '.investigations[].bugId' results.json) - Select bugs with
status: "identified"that haven't been reproduced - Skip bugs requiring Azure/Windows if not available
- Focus on bugs you can fully reproduce locally
- Investigate 3-5 bugs per iteration
For EACH bug, follow this detailed process:
- Read the issue:
gh issue view --repo dotnet/aspire <issue-number> - Identify what's needed: versions, packages, setup
- Plan reproduction approach
💡 Note: Feel free to install any software/tools needed to reproduce the bug!
Some bugs may require:
- Specific Node.js/npm versions (install with apt, nvm, or download)
- Database clients (psql, mysql, redis-cli, etc.)
- Additional SDK tools or CLI utilities
- Browser tools or testing frameworks
- Container tools beyond docker (podman, buildx, etc.)
Document what you install and why!
-
Create a clean test directory:
mkdir -p ~/repro-bug-<bugId> cd ~/repro-bug-<bugId>
-
Capture environment BEFORE making changes:
uname -a dotnet --version aspire --version docker --version
Save ALL output in
environmentfield -
Install required software/versions if needed:
- Document installation commands
- Capture version output after install
- Explain why the software was needed
- Example:
apt-get install -y postgresql-clientif testing PostgreSQL bugs
Attempt reproduction even if you think it might fail. Document everything.
Document EVERY step with complete detail including:
- Full commands with all arguments
- Complete output from commands (not excerpts - paste everything)
- Entire callstacks (every line from top to bottom)
- Complete log files (not summaries)
- Full file contents when creating/modifying files
- Observation commands (docker ps, docker logs, etc.)
- Use Aspire MCP tools when working with running Aspire apps (see below)
- Verification of bug manifestation
Data capture rule: If it's on the screen, paste it. Don't summarize, don't truncate.
When running Aspire applications, use Aspire MCP tools for deeper investigation:
# After starting your Aspire app with `dotnet run`...
# Check all resources and their status
aspire-list_resources
# Get console logs from a specific resource (e.g., if a container is failing)
aspire-list_console_logs --resourceName <resource-name>
# View structured logs with filtering
aspire-list_structured_logs --resourceName <resource-name>
# Investigate distributed traces (for cross-resource bugs)
aspire-list_traces --resourceName <resource-name>
# Get logs for a specific trace
aspire-list_trace_structured_logs --traceId <trace-id>
# Restart a resource to observe behavior
aspire-execute_resource_command --resourceName <name> --commandName resource-restartThese tools provide richer data than just docker commands and make reproductions more comprehensive!
ALWAYS do this step - whether reproduction succeeded or failed:
- Navigate to
~/aspiresource - Use grep/find to locate relevant code
- Identify exact files, methods, and line numbers
- Analyze the root cause
- Propose a fix approach
- Document all findings with code snippets
Source analysis provides value even when reproduction fails. It helps engineers understand the bug.
Capture EVERYTHING - complete data is critical:
- Console output (complete, every line)
- Stack traces (ENTIRE callstack from top to bottom, not truncated)
- Container logs (
docker logs <container>ORaspire-list_console_logs --resourceName <name>) - Aspire resource logs (use
aspire-list_structured_logsfor detailed logging) - Application logs (complete files)
- Build errors with paths and line numbers
- All warnings and diagnostics
- Distributed traces (use
aspire-list_tracesif investigating cross-resource issues)
Rule: Paste the entire output. If it's 1000 lines, paste 1000 lines. Complete data is more valuable than summaries.
Pro tip: Aspire MCP tools often provide more structured and complete data than raw docker commands!
- Container logs (
docker logs <container>) - Build errors with paths and line numbers
- All warnings and diagnostics
CRITICAL: Update results.json AFTER EACH BUG, not in batches.
Before moving to next bug, verify:
- ✅ ATTEMPTED REPRODUCTION (created test project, ran commands, documented attempts)
- ✅ CAPTURED COMPLETE DATA (entire callstacks, full logs, all command outputs)
- ✅ DID SOURCE ANALYSIS (found relevant code, identified root cause)
- ✅ Every command is complete with full arguments
- ✅ All file contents are included inline
- ✅ Error messages are complete and not truncated
- ✅ Stack traces are complete (entire callstack)
- ✅ Someone could copy/paste commands to reproduce
- ✅ Source code investigation identifies specific files and lines
- ✅ Root cause is clearly explained
- ✅ results.json is updated and valid JSON
If reproduction fails after genuine attempts - that's OK! Still document it:
- Document everything you tried (same detail level)
- Show all commands attempted and their outputs
- Explain why reproduction failed (environment limitations, missing dependencies, etc.)
- STILL DO SOURCE CODE ANALYSIS - it's valuable even without reproduction
- Mark conclusion as "cannot-reproduce" or "needs-azure" / "needs-windows" / "insufficient-info"
- Save to results.json
- Move to next bug
- DO NOT count toward
reproducedCount(only successful reproductions count)
Source-only investigations are useful - they help engineers understand bugs that can't be reproduced locally.
results.jsonexists with valid JSON- Target number of bugs investigated with reproduction attempts documented
- Each investigation is report-ready with complete detail
- Test project directories created for all reproduction attempts (successful or not)
- Source code analysis completed for ALL bugs (whether reproduction succeeded or failed)
- Complete data captured: Entire callstacks, full logs, all command outputs (not excerpts)
- Bugs marked "reproduced" only when actual test project execution demonstrated the failure
- Bugs marked appropriately when reproduction fails but source analysis is complete
If it's not detailed enough to write a professional bug report from, it's not detailed enough.
Three levels of value:
- Best: Successful reproduction + source analysis = Gold standard
- Good: Failed reproduction attempts (documented) + source analysis = Still very useful
- Insufficient: No reproduction attempt, source-only with no testing = Not acceptable
Capture complete data: Entire callstacks, full logs, complete outputs. Don't truncate or summarize - paste everything.
Every piece of information you capture saves hours of work for engineers, QA, and technical writers. Your thoroughness directly impacts how quickly bugs get fixed and validated.