| name | description |
|---|---|
| ovid-architecture | Analyze a codebase for software architecture strengths and flaws and report the good and the bad. |
You are an AI coding agent working inside the current repository directory. Your task is to find common software architecture strengths and weaknesses in this codebase and produce a concise report with evidence, then write the final report to a Markdown file named:
<YYYY-MM-DD>-<git-repo-name>-architecture-report.md
- Use today’s date in ISO format: YYYY-MM-DD
- Determine <git-repo-name> from the git remote or the top-level folder name. If this directory is not a git repo, omit the repo-name portion and write: <YYYY-MM-DD>-architecture-report.md
- If you must keep the original filename for compatibility, use <YYYY-MM-DD>-<git-repo-name>-architecture-flaws.md (and keep the title “Architecture Flaws Report”). Otherwise prefer the neutral “architecture-report” naming above.
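The naming rule can be sketched as plain string assembly. This is a minimal sketch: `widgets` is a hypothetical repo name, and an empty `$repo` models the not-a-git-repo fallback.

```shell
# Build the report filename from today's UTC date and a repo name.
# "widgets" is a hypothetical repo name for illustration; an empty
# $repo stands in for the not-a-git-repo case.
today=$(date -u +%F)   # ISO format: YYYY-MM-DD
repo="widgets"
if [ -n "$repo" ]; then
  fname="${today}-${repo}-architecture-report.md"
else
  fname="${today}-architecture-report.md"   # repo-name portion omitted
fi
echo "$fname"
```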
Produce a balanced report that highlights:
- Strengths (what’s working architecturally and why it matters)
- Flaws/Risks (architectural problems and their likely impact)
Do NOT propose fixes yet. This is diagnosis only.
Include findings across all of the following architectural issue types (prioritize high-signal, but ensure coverage across the set). For each type:
- If observed, include at least one evidence-backed finding (or reference a finding ID).
- If not found, mark Not observed.
- If tooling limits prevent assessment, mark Not assessed.
1) Global mutable state — Shared state that any code can change, making behavior unpredictable and hard to test.
2) God object — One class/service accumulates too many responsibilities and becomes a brittle dependency magnet.
3) Tight coupling — Components depend on concrete details of each other, so small changes ripple widely.
4) High/unstable dependencies — Core modules depend on “leaf” modules, forcing rebuilds and coordinated releases.
5) Circular dependencies — Packages/modules import each other, complicating builds, testing, and refactoring.
6) Leaky abstractions — An abstraction requires callers to know underlying details to use it correctly.
7) Over-abstraction — Too many layers/interfaces for uncertain future needs, increasing complexity without payoff.
8) Premature optimization — Architecture choices made for performance before evidence, harming clarity and flexibility.
9) Shotgun surgery — A single logical change requires edits across many files/services.
10) Feature envy / anemic domain model — Business logic lives in services/utilities while domain objects are just data bags.
11) Low cohesion — Modules group unrelated behaviors, making boundaries unclear and changes risky.
12) Hidden side effects — Functions/methods do more than their signature suggests, surprising callers.
13) Inconsistent boundaries — Responsibilities drift between layers/services, causing duplication and confusion.
14) Distributed monolith — “Microservices” in name only, with heavy synchronous coupling and shared release constraints.
15) Chatty service calls — Too many small network calls between services, increasing latency and failure surfaces.
16) Synchronous-only integration — Everything depends on immediate responses, turning partial outages into full outages.
17) No clear ownership of data — Multiple services write the same data, creating conflicts and integrity problems.
18) Shared database across services — Services couple through schema and queries, making independent evolution difficult.
19) Lack of idempotency — Retries create duplicates or corruption because operations aren’t safe to repeat.
20) Weak error handling strategy — Errors are swallowed, over-generalized, or inconsistently surfaced.
21) No observability plan — Missing logs/metrics/traces makes debugging and capacity planning guesswork.
22) Configuration sprawl — Behavior is controlled by scattered configs/flags with unclear precedence and drift.
23) Dependency injection misuse — DI becomes a maze of indirection that obscures control flow.
24) Inconsistent API contracts — Endpoints/events evolve without compatibility discipline, breaking consumers.
25) Business logic in the UI — Critical rules live in front-end code, leading to duplication and inconsistent behavior.
26) Poor transactional boundaries — Operations span multiple systems without a strategy, leaving partially-updated states.
27) Temporal coupling — Components must be called in a specific order/timing to work correctly.
28) Magic numbers/strings everywhere — Important values are hard-coded and repeated, making change error-prone.
29) “Utility” dumping ground — Generic helper modules grow into unowned, untestable grab-bags of unrelated code.
30) Security as an afterthought — AuthZ/authN, secrets, and trust boundaries bolted on late/inconsistently enforced.
Additionally, you MUST include these concrete issues:
31) Dead code / unused dependencies — Increases cognitive load and attack surface.
32) Missing or inadequate test coverage for critical paths — Architectural risk that compounds all the others.
33) Hard-coded credentials or secrets in source — Concrete security flaw; call out separately when found.
34) Inconsistent error/logging conventions across services — Specifically cross-service inconsistency (formats, levels, fields, correlation IDs).
In addition to flaws, explicitly look for architecture strengths. Report strengths using the same evidence standard as flaws (label, explanation, evidence). Assess at least these categories:
- S1) Clear modular boundaries — Well-defined packages/modules/layers with minimal leakage.
- S2) High cohesion — Modules/services group related responsibilities; boundaries feel “natural”.
- S3) Loose coupling — Abstractions/interfaces/events reduce ripple effects from change.
- S4) Dependency direction is stable — Core depends on stable contracts; leaf depends on core; minimal “core depends on leaf”.
- S5) Dependency management hygiene — Minimal circular deps; consistent import conventions; sensible package structure.
- S6) Consistent API contracts — Versioning/compat discipline; schema validation; backward-compatible changes.
- S7) Robust error handling — Consistent error taxonomy; errors surfaced appropriately; avoids swallowing exceptions.
- S8) Observability present — Structured logs, metrics, traces, correlation IDs, health checks.
- S9) Configuration discipline — Centralized config; clear precedence; safe defaults; separation by environment.
- S10) Security built-in — AuthN/Z patterns, secret management, least privilege, explicit trust boundaries.
- S11) Testability & coverage — Tests around critical paths; good seams; determinism; contract tests where appropriate.
- S12) Resilience patterns — Timeouts, retries with idempotency, circuit breakers/backpressure, async integration where appropriate.
- S13) Domain modeling strength — Business logic lives with domain entities/value objects; invariants enforced close to data.
- S14) Simple, pragmatic abstractions — Abstraction level matches current complexity; avoids over/under engineering.
If a strength category is “not applicable” due to repo nature (e.g., no networked services), mark it Not applicable and briefly explain.
- Do NOT propose fixes (no refactors, no “should do X”). Only describe strengths and weaknesses with evidence.
- Prefer high-signal findings over exhaustive listing, but ensure coverage across the required sets.
- Every finding MUST include: (1) short label, (2) 1–2 sentence explanation, (3) concrete evidence (file paths + symbol names + a small excerpt or line range reference).
- Validate candidates by opening files to reduce false positives.
A) Repo identification
- Detect if this is a git repo (`git rev-parse`).
- If yes, determine repo name: prefer the basename of the top-level directory, or derive it from `git remote get-url origin` (strip `.git`, take the last path segment).
- Set the output filename accordingly.
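The repo-name derivation above is pure string handling. As a sketch, the SSH URL below is a hypothetical stand-in for the output of `git remote get-url origin`:

```shell
# Derive <git-repo-name> from a remote URL.
# The URL is a hypothetical example; in a real repo it would come from:
#   url=$(git remote get-url origin)
url="git@github.com:acme/widgets.git"
name="${url##*/}"    # keep only the last path segment -> "widgets.git"
name="${name%.git}"  # drop the .git suffix          -> "widgets"
echo "$name"
```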
B) Repo overview
- Identify primary languages/frameworks and key directories (apps/, services/, packages/, src/, lib/, etc.).
- Estimate size: number of services/modules/packages if applicable.
C) Dependency & structure analysis (as available)
- Build a quick dependency picture:
  - top-level modules/packages
  - potential cycles via import graphs or simple heuristics
  - “core depends on leaf” patterns
- Also note positive structure signals (clean layering, bounded contexts, stable interfaces).
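One way to sketch the cycle heuristic, assuming Python-style `import` lines; the `core`/`util` modules here are an invented two-file fixture, not part of any real repo:

```shell
# Plant a deliberate 2-cycle (core imports util, util imports core),
# then flag mutual import pairs from "module imported-module" edges.
tmp=$(mktemp -d)
printf 'import util\n' > "$tmp/core.py"
printf 'import core\n' > "$tmp/util.py"
cycles=$(
  for f in "$tmp"/*.py; do
    mod=$(basename "$f" .py)
    # emit one "importer imported" edge per import line
    grep -h '^import ' "$f" | awk -v m="$mod" '{print m, $2}'
  done | awk '{e[$1" "$2]=1}
    END{for (k in e){split(k,a," ")
          if (e[a[2]" "a[1]] && a[1]<a[2]) print "cycle: "a[1]" <-> "a[2]}}'
)
echo "$cycles"
rm -rf "$tmp"
```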
D) Search strategy (use repo tools available to you)
- Use ripgrep/git grep and lightweight heuristics; optionally use AST tools if present.
- Specific searches (risks + strengths):
- Global state: module-level mutable variables, singletons, service locators, static mutables.
- God objects: very large classes/files, managers/services/controllers with huge responsibility surface; high fan-in/fan-out.
- Coupling/cycles: cross-layer imports, concrete instantiations, circular imports.
- Boundaries/cohesion: directory organization, naming, cross-layer leakage, “one reason to change”.
- Abstractions: leaky vs clean; over-abstraction vs pragmatic, consistent interfaces.
- Side effects: IO/DB/network/event publishing in “pure-looking” functions.
- Error handling/idempotency/resilience: catch-and-ignore, blanket exceptions, retries with idempotency keys, timeouts, circuit breakers/backpressure (if present).
- Observability: structured logging, tracing, correlation IDs; consistency across services.
- Security: authN/Z patterns, secret management; hard-coded secrets.
- Dead code/unused deps: unused packages, unused files/modules, unreachable code paths, deprecated directories, stale feature flags.
- Tests/coverage: identify critical paths (entrypoints, handlers, core domain services) and check for corresponding tests; highlight strengths and gaps.
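A sketch of one such search, the hard-coded-secret heuristic, run against a planted fixture. The pattern is illustrative rather than an exhaustive scanner, and `grep -rnE` is shown for portability; ripgrep accepts an equivalent pattern.

```shell
# Seed a temp tree with one planted finding, then run the heuristic.
# The settings.py file and its key are invented for illustration.
tmp=$(mktemp -d)
printf 'API_KEY = "sk-test-123"\n' > "$tmp/settings.py"
hits=$(grep -rniE '(api_key|secret|password|token)[[:space:]]*=[[:space:]]*"' "$tmp" | wc -l)
echo "hardcoded-secret candidates: $hits"
rm -rf "$tmp"
```

Real findings from searches like this still need the validation step above: open each file to confirm the match is a live secret, not a test fixture or placeholder.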
E) Report format
- Title: “Architecture Report — <repo-name or current folder>”
- Date: <today’s date in ISO format>
- Repo overview (languages, key directories)
- Strengths (ranked High/Medium/Low impact), 5–15 items, each formatted exactly:
  - [Impact: High|Medium|Low] <Strength label> — <1–2 sentence explanation>. Evidence: <path>:<line range> (<symbol/function/class>), excerpt: "<short excerpt>"
- Flaws/Risks (ranked High/Medium/Low impact), 10–25 items, each formatted exactly:
  - [Impact: High|Medium|Low] <Problem label> — <1–2 sentence explanation>. Evidence: <path>:<line range> (<symbol/function/class>), excerpt: "<short excerpt>"
- Coverage checklist
  - Flaw/Risk types 1–34: Observed / Not observed / Not assessed, with one short line and an optional pointer to a finding ID.
  - Strength categories S1–S14: Observed / Not observed / Not assessed / Not applicable, with one short line and an optional pointer to a finding ID.
- Hotspots: top 3 files/directories to review (brief why; can include both risk hotspots and “strong core” hotspots)
- Next questions (max 5) to guide humans (questions only; no suggested solutions)
- Perform the investigation now using available tools (`rg`, `git grep`, `ls`, `tree`, language-specific linters if already configured).
- Then write the Markdown file to the repository root with the required filename.
- Finally, print the path to the generated file and a brief summary (3–6 bullet points) of the highest-impact strengths and risks.