You want to build a Next.js dashboard using multiple Codex CLI agents run in parallel via codex exec, with clear roles, minimal overlap, and a concrete workflow from spec → implementation → tests.
- A new or existing Next.js app (App Router + TypeScript) lives at <PROJECT_ROOT> (adjust as needed).
- Node/npm/yarn/pnpm already installed; npx create-next-app@latest is available if you start from scratch.
- Codex CLI is installed and authenticated; codex exec "<prompt>" runs headless tasks in your shell. (developers.openai.com (https://developers.openai.com/codex/noninteractive/?utm_source=openai))
- You are comfortable giving codex exec edit permissions via --full-auto when you want agents to modify files. (archive.ph (https://archive.ph/2025.12.09-182757/https%3A/developers.openai.com/codex/sdk?utm_source=openai))
- Single human operator: you run codex exec, create branches, and do final merges.
- Phase 0 (sequential): Architecture & project scaffolding (Agent A1).
- Phase 1 (parallel after Phase 0):
- Agent A2: UI & layout for dashboard pages/components.
- Agent A3: Data, API integration & state layer.
- Phase 2 (parallel after first UI/data pass exists):
- Agent A4: Tests, linting, and docs hardening.
- Shared spec and type contracts act as interfaces to keep A2/A3 parallel work from conflicting.
- Agent A1 (Architect & Scaffolder). Purpose: Decide structure, scaffold the Next.js app, and write the shared spec/contracts.
- Responsibilities / tasks
- If needed, scaffold Next app in <PROJECT_ROOT> (TypeScript + App Router).
- Create docs/dashboard-spec.md with pages, navigation, widgets, and user flows.
- Create docs/api-contracts.md plus types/domain.ts with the data models and API shapes used by the dashboard (a type sketch follows at the end of this section).
- Create a high-level IMPLEMENTATION_PLAN.md describing what A2, A3, A4 should do and which dirs they own.
- Inputs: Your high-level requirements (data sources, target users, example widgets), existing repo (if any).
- Outputs / done criteria
- Project scaffolded and builds.
- docs/dashboard-spec.md, docs/api-contracts.md, types/domain.ts, IMPLEMENTATION_PLAN.md committed and stable for others.
- Allowed areas:
- Root config (package.json, tsconfig.json, next.config.*, .eslintrc.*).
- docs/, types/, and basic skeleton in app/ (no detailed styling).
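As a sketch of how small the A1 contract can start, here is what types/domain.ts might look like, assuming the DashboardSummary / getDashboardSummary() names used as examples elsewhere in this plan; the individual fields are placeholders to replace with your real domain:

```ts
// types/domain.ts — shared contract: A2 consumes these types, A3 implements them.
// Field names are illustrative placeholders, not a prescribed schema.
export interface DashboardSummary {
  totalUsers: number;
  activeUsers: number;
  revenue: number;
  updatedAt: string; // ISO timestamp
}

export interface SummaryFilters {
  from?: string; // ISO date
  to?: string;   // ISO date
}

// Signature A3 must implement in lib/data/ and A2 may call from pages/components.
export type GetDashboardSummary = (filters?: SummaryFilters) => Promise<DashboardSummary>;
```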
- Agent A2 (UI & Layout). Purpose: Build dashboard pages, layouts, and visual components per spec, using stubbed data interfaces.
- Responsibilities / tasks
- Implement global layout and navigation (app/layout.tsx, top-level header/sidebar).
- Create page routes and route groups under app/(dashboard)/... as defined in docs/dashboard-spec.md.
- Implement reusable components under components/ui/ (cards, tables, filters, charts placeholders).
- Wire components to the data interfaces defined in types/domain.ts and lib/data/* without implementing the data-fetching logic itself (see the component sketch after this section).
- Inputs: docs/dashboard-spec.md, IMPLEMENTATION_PLAN.md, types/domain.ts, public functions defined/owned by A3.
- Outputs / done criteria
- All key pages render with realistic mock/sample data calls (e.g., getDashboardSummary() from A3).
- UI is responsive and navigable; no direct API calls or DB logic inside components.
- Allowed areas
- app/layout.tsx, app/page.tsx and app/(dashboard)/** (but no app/api/**).
- components/, styles/ or app/globals.css, Tailwind config if used.
- May import from lib/data/* but must not change A3-owned files without updating the spec.
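A minimal sketch of the "UI calls the contract, not the transport" rule, assuming the hypothetical getDashboardSummary() contract above and the default @/ import alias from create-next-app:

```tsx
// app/(dashboard)/page.tsx — sketch only; markup and copy are placeholders.
import { getDashboardSummary } from "@/lib/data/summary"; // A3-owned module, assumed path

export default async function DashboardPage() {
  // A2 only calls the exported contract function; whether it is backed by
  // mock data, an API route, or a real backend is A3's concern.
  const summary = await getDashboardSummary();

  return (
    <section>
      <h1>Overview</h1>
      <p>Active users: {summary.activeUsers}</p>
      <p>Revenue: {summary.revenue}</p>
    </section>
  );
}
```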
- Agent A3 (Data, API & State). Purpose: Implement the data layer (API routes, server utilities, state helpers) behind stable interfaces.
- Responsibilities / tasks
- Implement data-fetching utilities under lib/data/*.ts matching the function signatures/types in types/domain.ts (see the data-layer sketch after this section).
- Implement Next.js API routes or server actions under app/api/** as needed, using mock data or real backend.
- Provide simple hooks/helpers (e.g., useDashboardSummary()) that UI can consume.
- Inputs: docs/api-contracts.md, types/domain.ts, IMPLEMENTATION_PLAN.md, any backend/API docs.
- Outputs / done criteria
- All functions promised in types/domain.ts and docs/api-contracts.md exist and return the correct shapes.
- UI build passes without type errors when using these functions.
- Data logic isolated to lib/ + app/api/** (no UI markup).
- Allowed areas
- lib/, app/api/**, types/, and data-only changes in server components when needed (no layout/styling changes).
- Must not change docs/dashboard-spec.md (A1-owned), but may propose updates in comments or a note in docs/api-contracts.md.
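The matching A3 side might start as a mock implementation behind the same contract; this is a sketch assuming the hypothetical types above, plus an optional Route Handler if you also want the data exposed over HTTP:

```ts
// lib/data/summary.ts — sketch; satisfies the contract from types/domain.ts with mock data.
import type { DashboardSummary, SummaryFilters } from "@/types/domain";

export async function getDashboardSummary(
  _filters?: SummaryFilters
): Promise<DashboardSummary> {
  // Replace this mock payload with a real backend or database call later;
  // callers in the UI should not need to change.
  return {
    totalUsers: 1280,
    activeUsers: 342,
    revenue: 91500,
    updatedAt: new Date().toISOString(),
  };
}
```

```ts
// app/api/summary/route.ts — optional Route Handler wrapping the same function.
import { getDashboardSummary } from "@/lib/data/summary";

export async function GET() {
  return Response.json(await getDashboardSummary());
}
```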
- Agent A4 (Tests, Linting & Docs). Purpose: Add tests, linting, and docs to keep the dashboard stable as features land.
- Responsibilities / tasks
- Configure testing framework (e.g., Jest/React Testing Library) and basic test scripts in package.json.
- Add tests for critical UI flows (navigation, main widgets rendering) and data utilities (see the test sketch after this section).
- Document how to run dev server, tests, and any data mocks in README.md or docs/testing-notes.md.
- Inputs: Existing app from A1–A3, IMPLEMENTATION_PLAN.md, your preferences for tools.
- Outputs / done criteria
- npm test (or equivalent) runs and passes for the core flows.
- Lint script configured and passing.
- Clear instructions in docs for running/validating the dashboard.
- Allowed areas
- tests/ (or __tests__/), config files for the test runner, README.md, docs/testing-notes.md.
- Should not alter feature behavior without coordination; focus on coverage and quality gates.
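As an illustration, assuming Jest plus React Testing Library and the mock data layer sketched above, an A4 smoke test for the data utility could look like this (UI tests would follow the same pattern against rendered components):

```ts
// __tests__/summary.test.ts — sketch; assumes Jest is configured (including a
// moduleNameMapper entry so the "@/" alias resolves) and the mock lib/data/summary.ts above.
import { getDashboardSummary } from "@/lib/data/summary";

describe("getDashboardSummary", () => {
  it("returns a payload matching the DashboardSummary contract", async () => {
    const summary = await getDashboardSummary();
    expect(typeof summary.activeUsers).toBe("number");
    expect(typeof summary.revenue).toBe("number");
    expect(new Date(summary.updatedAt).toString()).not.toBe("Invalid Date");
  });
});
```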
- docs/dashboard-spec.md (owned by A1)
- Canonical description of pages, navigation, and widgets.
- A2/A3 never change this directly; propose changes via comments or a separate docs/dashboard-spec-notes.md.
- docs/api-contracts.md + types/domain.ts (owned by A1/A3)
- Define TypeScript types and function signatures for data access (DashboardSummary, getDashboardSummary(), etc.).
- A2 reads & consumes; A3 implements. Any breaking changes require an explicit new version or note.
- lib/data/*.ts (owned by A3)
- Implementation of data interfaces; UI only calls exported functions/types, not internal details.
- IMPLEMENTATION_PLAN.md (owned by A1)
- High-level checklist assigning which directories/files each agent may modify.
- Source of truth for ownership; update here before changing responsibilities.
- Testing config and scripts (owned by A4)
- Shared commands such as npm test and npm run lint, described in the docs; all agents run them before finishing their own work.
Simple conflict rules:
- A2 owns UI layout/styling; A3 owns data fetching; A4 owns tests/docs; A1 owns specs/config.
- No agent edits a file outside their allowed areas without updating IMPLEMENTATION_PLAN.md and pausing other affected agents.
Use codex exec "<prompt>" in non-interactive mode; add --full-auto when you expect Codex to edit files. (developers.openai.com (https://developers.openai.com/codex/noninteractive/?utm_source=openai))
First, define a shared env var in each shell:
export PROJECT_ROOT="<ABSOLUTE_PATH_TO_DASHBOARD_REPO>"
A1 prompt file suggestion: ~/.codex/prompts/next-dashboard-architect.prompt.md (you create this file).
Suggested command:
codex exec --cd "$PROJECT_ROOT" --full-auto "$(cat ~/.codex/prompts/next-dashboard-architect.prompt.md)"
Prompt file should include (outline):
- Role: “You are the Dashboard Architect & Scaffolder for a Next.js (App Router, TypeScript) project.”
- Goal: “Scaffold or inspect the app in $PROJECT_ROOT and produce docs/dashboard-spec.md, docs/api-contracts.md, types/domain.ts, and IMPLEMENTATION_PLAN.md for a <SHORT_DESCRIPTION_OF_DASHBOARD> dashboard.”
- Constraints: “Do not implement detailed UI; only skeleton pages/layout and docs. Respect ownership conventions you define.”
- Tasks: bullet list matching A1 responsibilities.
A2 prompt file: ~/.codex/prompts/next-dashboard-ui.prompt.md.
Command:
codex exec --cd "$PROJECT_ROOT" --full-auto "$(cat ~/.codex/prompts/next-dashboard-ui.prompt.md)"
Prompt outline:
- Role: “You are the UI & Layout Implementer for this Next.js dashboard.”
- Inputs: “Read docs/dashboard-spec.md, IMPLEMENTATION_PLAN.md, types/domain.ts and any existing lib/data/* functions.”
- Tasks: implement pages/layouts/components in app/(dashboard)/** and components/ui/** using only the data interfaces defined by A3.
- Constraints: “Do not edit docs/, lib/, or app/api/**; treat those as external contracts. Keep UI responsive and accessible.”
A3 prompt file: ~/.codex/prompts/next-dashboard-data.prompt.md.
Command:
codex exec --cd "$PROJECT_ROOT" --full-auto "$(cat ~/.codex/prompts/next-dashboard-data.prompt.md)"
Prompt outline:
- Role: “You own the data, API, and state layer for this dashboard.”
- Inputs: docs/api-contracts.md, types/domain.ts, IMPLEMENTATION_PLAN.md, any backend notes.
- Tasks: implement lib/data/*.ts and app/api/** functions needed by the dashboard, plus any simple hooks (no UI).
- Constraints: “Do not change docs/dashboard-spec.md or UI layout; if you must change contracts, update docs/api-contracts.md clearly and keep backward compatibility where possible.”
A4 prompt file: ~/.codex/prompts/next-dashboard-testing.prompt.md.
Command:
codex exec --cd "$PROJECT_ROOT" --full-auto "$(cat ~/.codex/prompts/next-dashboard-testing.prompt.md)"
Prompt outline:
- Role: “You are responsible for tests, linting, and dev docs for this dashboard.”
- Inputs: existing app, IMPLEMENTATION_PLAN.md, docs/dashboard-spec.md.
- Tasks: configure test runner, add tests for main flows, set up lint scripts, and update README.md/docs/testing-notes.md with how to run everything.
- Constraints: “Avoid changing feature behavior; if changes are required for testability, keep them minimal and document them.”
- Step 1 – Prepare repo & prompts
- Create or select <PROJECT_ROOT>; initialize git if not already.
- Create the four prompt files under ~/.codex/prompts/ using the outlines above, customized with your dashboard requirements.
- Step 2 – Run A1 (sequential)
- In one terminal: run the A1 command.
- Review diffs, adjust prompt or rerun if the spec/contracts aren’t what you want.
- Step 3 – Start parallel agents A2 & A3
- Once docs/dashboard-spec.md, docs/api-contracts.md, types/domain.ts, and IMPLEMENTATION_PLAN.md are stable, open two more terminals:
- Terminal 1: run A2 command (UI).
- Terminal 2: run A3 command (data).
- Optionally log output:
mkdir -p "$PROJECT_ROOT/logs"
codex exec --cd "$PROJECT_ROOT" --full-auto "$(cat ~/.codex/prompts/next-dashboard-ui.prompt.md)" \
  2>&1 | tee "$PROJECT_ROOT/logs/ui-$(date +%Y%m%d-%H%M%S).log"
- Step 4 – Start A4
- After at least one pass from A2/A3 (basic dashboard working), start A4 in another terminal using its command.
- Step 5 – Iterate
- If any agent uncovers spec gaps (e.g., A3 needs new fields), pause other affected agents, update docs/api-contracts.md and/or IMPLEMENTATION_PLAN.md with A1 (or yourself), then re-run A2/A3 with updated context.
- Branching / merging
- Optionally create branches per agent run, e.g. feature/dashboard-architect, feature/dashboard-ui, etc., and run each codex exec with that branch checked out.
- After each agent completes, review git diff, resolve any small overlaps manually, and merge into a main feature branch (e.g., feature/dashboard-main).
- Conflict resolution
- If A2 & A3 touched the same file (e.g., a server component), prefer:
- A2: layout/markup decisions.
- A3: data-loading logic.
- Reconcile by hand and, if repeated, refine IMPLEMENTATION_PLAN.md and prompts to tighten boundaries.
- Validation steps
- From <PROJECT_ROOT> run:
- npm install (once).
- npm run dev to smoke-test the dashboard in the browser.
- npm test and npm run lint (or equivalents) as configured by A4.
- Only after all pass should you merge to main.
- Risk: Conflicting edits in shared files (e.g., app/layout.tsx)
- Mitigation: Make ownership explicit in IMPLEMENTATION_PLAN.md; if a file must be shared, define which parts each agent can change and prefer A1/A2 as layout owners.
- Risk: Spec drift between A1 and implementation
- Mitigation: Treat docs/dashboard-spec.md as immutable during A2/A3/A4 runs; schedule explicit “spec update” passes with A1 before another iteration.
- Risk: API/UI contract mismatch (types vs. implementation)
- Mitigation: Keep all shared contracts in types/domain.ts and docs/api-contracts.md; require A2/A3 to align with those and update them (via A1/A3) before changing call sites.
- Risk: Over-parallelization slowing you down (too much coordination overhead)
- Mitigation: Start with A1 sequential, then run only A2 & A3 in parallel; introduce A4 once flows are stable. Reduce to fewer agents if coordination becomes painful.
- Risk: codex exec making broad or unexpected edits
- Mitigation: Use targeted prompts with clear directory scopes, keep runs short and incremental, and review diffs after each run before starting the next.
If you share a brief description of the dashboard’s purpose and key widgets (e.g., “SaaS analytics for MRR/churn, with filters by plan and region”), I can help you draft concrete contents for the four prompt files tailored to your exact use case.