This is an LLM app using Python/FastAPI, langchain, and langgraph.
- Use in-memory data structures for cache, storage, and queue (see the cache/queue sketch after this list).
- Max 50 lines per function
- Use descriptive variable names
- Use the latest stable versions of all libraries.
- Follow good practices and design patterns.
- NO excessive logging - Only log essential information at appropriate levels
- NO generic error handling - Catch specific exceptions with recovery strategies
- NO TODO/FIXME comments - Complete implementations before committing
- NO functions with 5+ parameters - Use dataclasses/Pydantic models instead (see the request-model sketch after this list)
- NO dict[str, Any] everywhere - Use proper type hints and domain models
- NO redundant code - Follow DRY principle, extract common functionality
- NO magic numbers - Use named constants with clear documentation
- NO unvalidated inputs - Always validate external data with Pydantic/schemas
- NO inefficient algorithms - Consider algorithmic complexity and use appropriate data structures
- NO copy-paste code blocks - Create reusable functions and utilities
- NO sequential awaits - Use asyncio.gather() to run independent async operations concurrently (see the retrieval sketch after this list)
- NO untestable code - Design with dependency injection and separation of concerns
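
A minimal sketch of the in-memory cache and queue rule above; `SummaryCache` and `job_queue` are illustrative names, not existing project code:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class SummaryCache:
    """Illustrative in-process cache; no external cache service is assumed."""

    _entries: dict[str, str] = field(default_factory=dict)

    def get(self, key: str) -> str | None:
        return self._entries.get(key)

    def set(self, key: str, value: str) -> None:
        self._entries[key] = value


# In-memory work queue shared by producer and consumer tasks in the same process.
job_queue: asyncio.Queue[str] = asyncio.Queue()
```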
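
For the parameter-count, typing, and validation rules, a hedged sketch; `ChatRequest`, `ChatResponse`, and their fields are assumptions for illustration, not part of the codebase:

```python
from pydantic import BaseModel, Field


class ChatRequest(BaseModel):
    """Validated parameter object replacing a 5+-argument function signature."""

    user_id: str = Field(min_length=1)
    prompt: str = Field(min_length=1, max_length=8_000)
    temperature: float = Field(default=0.2, ge=0.0, le=2.0)
    max_tokens: int = Field(default=512, gt=0)


class ChatResponse(BaseModel):
    """Typed domain model instead of dict[str, Any]."""

    answer: str
    llm_name: str


async def generate_answer(request: ChatRequest) -> ChatResponse:
    # One validated object in, one typed domain model out.
    return ChatResponse(answer=f"echo: {request.prompt}", llm_name="placeholder")
```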
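
And for the sequential-await and error-handling rules, a sketch with stand-in retrievers; `search_documents` and `search_faqs` are hypothetical helpers, not real project or langchain functions:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

RETRIEVAL_TIMEOUT_SECONDS = 5.0  # named constant instead of a magic number


async def search_documents(query: str) -> list[str]:
    # Stand-in for a real retriever call.
    await asyncio.sleep(0.01)
    return [f"doc for {query}"]


async def search_faqs(query: str) -> list[str]:
    # Stand-in for a real FAQ lookup.
    await asyncio.sleep(0.01)
    return [f"faq for {query}"]


async def fetch_context(query: str) -> tuple[list[str], list[str]]:
    """Run independent retrievals concurrently instead of awaiting them one by one."""
    docs, faqs = await asyncio.gather(search_documents(query), search_faqs(query))
    return docs, faqs


async def fetch_documents_with_recovery(query: str) -> list[str]:
    """Catch only the specific failure we can recover from; let everything else surface."""
    try:
        return await asyncio.wait_for(
            search_documents(query), timeout=RETRIEVAL_TIMEOUT_SECONDS
        )
    except asyncio.TimeoutError:
        logger.warning("document search timed out; continuing with empty context")
        return []
```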
- Keep diagrams focused - One concept per diagram, max 15-20 nodes
- Use consistent visual semantics - Same colors/shapes for same types across all diagrams
- Always include legends - Explain what colors, shapes, and arrows mean
- Avoid redundancy - Don't repeat the same information in multiple diagrams
- Progressive disclosure - Start with high-level overview, link to detailed diagrams
- Clear flow direction - Make data/control flow obvious with arrows and labels
- Purpose-specific diagrams - Separate diagrams for data flow, deployment, auth, etc.
- Small, atomic changes - Break large features into multiple small PRs (max 200 lines)
- Clear context - Always explain WHY the change is needed, not just WHAT changed
- Self-documenting code - Code should be readable without extensive comments
- Complete implementations - Never submit partial solutions that "work but need cleanup"
- Test coverage with meaning - Tests must validate behavior, not just increase coverage metrics
- Use only approved libraries - Check the project's requirements.txt/pyproject.toml before adding dependencies
- No experimental patterns - Stick to established, battle-tested approaches
- Security validation - Always validate inputs, sanitize outputs, use parameterized queries (see the query sketch after this list)
- Compliance check - Follow existing authentication/authorization patterns in codebase
- Never bypass security - Don't disable linters, skip tests, or ignore security warnings
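
A hedged sketch of the parameterized-query rule, using stdlib sqlite3 as a stand-in store. The project currently keeps data in memory, so this applies only if a SQL store is ever introduced; the table and column names are made up:

```python
import sqlite3

connection = sqlite3.connect(":memory:")
connection.execute("CREATE TABLE sessions (user_id TEXT, title TEXT)")


def find_sessions(user_id: str) -> list[tuple[str, str]]:
    # Bind user input as a parameter; it never becomes part of the SQL text.
    cursor = connection.execute(
        "SELECT user_id, title FROM sessions WHERE user_id = ?",
        (user_id,),
    )
    return cursor.fetchall()
```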
- Start minimal - Begin with the simplest working solution
- Iterate with purpose - Each iteration should be reviewable and deployable
- No "big bang" commits - Avoid massive single commits that change everything
- Progressive enhancement - Add complexity only when requirements demand it
- Maintain working state - Code should always be runnable after each change
- Preserve existing patterns - Match the style and structure of surrounding code
- Respect architecture - Don't introduce new architectural patterns without discussion
- Follow naming conventions - Use the project's established naming patterns
- Keep dependencies minimal - Reuse existing utilities before creating new ones
- Document deviations - If you must deviate from patterns, explain why clearly
- Optimize for readability first - Clear code over clever code
- Profile before optimizing - Don't guess about performance bottlenecks
- Consider scale - Think about how the code performs with 10x the current data volume
- Resource management - Always close connections, clear timeouts, free resources
- Batch operations - Use bulk operations for database/API calls when possible (see the batching sketch after this list)
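
A sketch of the resource-management and batching rules above, assuming httpx is available (it is not listed in this document's stack, so treat the client choice as an example):

```python
import asyncio

import httpx


async def fetch_pages(urls: list[str]) -> list[httpx.Response]:
    """Open one client, batch the independent calls, and always release the pool."""
    # The async context manager guarantees the connection pool is closed on exit.
    async with httpx.AsyncClient(timeout=10.0) as client:
        # Issue the requests concurrently instead of awaiting them one at a time.
        responses = await asyncio.gather(*(client.get(url) for url in urls))
    return responses
```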
- Test behavior, not implementation - Tests should survive refactoring
- Edge cases matter - Test error paths, boundary conditions, null/empty inputs
- Meaningful assertions - Don't just check that functions execute
- Independent tests - Each test should run in isolation (see the test sketch after this list)
- Fast feedback - Unit tests should run in milliseconds, not seconds
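
A sketch of behavior-focused, isolated tests for the `SummaryCache` sketch earlier in this file; the `app.cache` import path is an assumption:

```python
import pytest

from app.cache import SummaryCache  # hypothetical module path


@pytest.fixture
def cache() -> SummaryCache:
    # A fresh instance per test keeps tests independent of one another.
    return SummaryCache()


def test_get_returns_none_for_missing_key(cache: SummaryCache) -> None:
    # Edge case: a key that was never stored.
    assert cache.get("missing") is None


def test_set_then_get_round_trips_value(cache: SummaryCache) -> None:
    # Assert observable behavior, not internal dict layout.
    cache.set("doc-1", "summary text")
    assert cache.get("doc-1") == "summary text"
```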
- Write for the reviewer - Assume the reviewer has no context
- Self-review first - Review your own code before submitting
- Explain non-obvious decisions - Add comments for complex business logic
- Consider future maintainers - Code is read 10x more than written
- Address feedback constructively - All review comments deserve thoughtful responses
- Never deliver entire features in one response - break them into logical chunks
- Never write code without understanding existing patterns
- No tests that exist only to increase coverage metrics
- No complex abstractions for simple problems
- Never write code that bypasses existing validation/security
- No verbose or duplicated code
- Write code that matches existing style guides
- Handle errors comprehensively
- Use meaningful variable/function names
- Write tests that validate actual behavior
- Document public APIs
- Deliver production-ready, PR-ready code that is easy for a human to review and understand.
- Analyze existing codebase patterns
- Understand the full context of the change
- Consider security implications
- Think about maintenance burden
- Evaluate whether a simpler solution exists
- Review time < 30 minutes per PR
- Review iterations < 3 rounds
- Change failure rate < 5%
- Time to production < 1 day for small changes
- Post-deployment issues = 0 for standard features
- Code follows project style guide
- All tests pass locally
- New tests cover new functionality
- Error scenarios handled gracefully
- Performance impact assessed
- Security implications reviewed
- Documentation updated if needed
- Code self-reviewed before submission
- PR description explains the "why"
- Breaking changes clearly marked