Date: 2026-02-18
Scope: backend/ directory, deployment configs, database schema
Purpose: Evaluate backend readiness against the CollabBoard MVP requirements (G4 Week 1)
The backend is a Node.js 20 (ESM) Express + Socket.IO server backed by Supabase (PostgreSQL + Auth). AI commands use Anthropic Claude with tool calling. Caching is in-memory (no external Redis dependency).
```
backend/
├── src/
│   ├── config/
│   │   ├── supabase.js        ← Supabase client + all DB operations
│   │   └── redis.js           ← In-memory cache (Redis removed)
│   ├── middleware/
│   │   ├── auth.js            ← JWT auth, board access, RBAC
│   │   └── errorHandler.js    ← AppError class, asyncHandler
│   ├── routes/
│   │   ├── auth.js            ← Register, verify, get current user
│   │   └── boards.js          ← CRUD boards, collaborators, objects
│   ├── services/
│   │   └── aiAgent.js         ← Anthropic Claude tool-calling agent
│   ├── utils/
│   │   └── logger.js          ← Winston logger
│   ├── websocket/
│   │   └── socketHandler.js   ← Socket.IO event handlers
│   └── server.js              ← Entry point, Express + Socket.IO setup
├── Dockerfile                 ← Multi-stage production build
├── .dockerignore
├── .env.example
└── package.json
```
| MVP Requirement | Backend Support | Status |
|---|---|---|
| Infinite board with pan/zoom | Frontend concern; backend stores/syncs objects | N/A |
| Sticky notes with editable text | Object CRUD via WebSocket + REST | PARTIAL (see Issue 1) |
| At least one shape type | Rectangle, circle, arrow, text supported | PASS |
| Create, move, edit objects | `create_object`, `update_object`, `delete_object` events | PASS |
| Real-time sync between 2+ users | Socket.IO rooms per board, broadcasts on mutation | PASS |
| Multiplayer cursors with name labels | `cursor_move` / `cursor_moved` events with userId + email | PASS |
| Presence awareness | `user_joined`, `user_left`, `active_users` events | PASS |
| User authentication | Supabase Auth with JWT verification | PASS |
| Deployed and publicly accessible | Dockerfile + render.yaml present | READY (see Issues) |
Files: src/middleware/auth.js, src/config/supabase.js (lines 273-285)
The auth middleware chain is well-structured with four layers:
- `authenticate` -- requires a valid JWT
- `optionalAuth` -- attaches user if token present, continues either way
- `checkBoardAccess` -- verifies ownership, collaboration, or public board
- `requireRole` -- enforces a minimum role level (viewer < editor < admin < owner)
Token verification uses supabase.auth.getUser(token) which is the correct server-side approach. The frontend auth context matches (Supabase Auth with email/password and Google OAuth).
File: src/websocket/socketHandler.js
All core real-time events are implemented:
- Board join/leave with room management
- Object CRUD with broadcast to all room members
- Cursor position broadcasting
- Active user tracking with counts
- AI command handling with response broadcast
- Disconnect cleanup (removes user from tracking, notifies room)
Socket authentication extracts the JWT from handshake auth or headers and verifies it. Unauthenticated sockets can still connect but are blocked from AI features.
File: src/services/aiAgent.js
- Uses Anthropic tool calling with 7 defined tools
- AI-created objects are persisted to Supabase and broadcast via Socket.IO
- Locking prevents concurrent AI operations on the same board
- AI commands are logged to the `ai_commands` table with execution time
- Meets the 6+ command types requirement from the spec
File: supabase-schema.sql
Comprehensive and well-designed:
- 5 tables: `boards`, `board_objects`, `board_collaborators`, `ai_commands`, `users`
- Proper foreign keys with `ON DELETE CASCADE`
- Performance indexes on all lookup columns
- RLS policies for every table with appropriate access rules
- `updated_at` triggers
- Auto user-profile creation trigger on signup
File: backend/Dockerfile
Production-quality:
- Multi-stage build (builder + production)
- `node:20-alpine` base
- Production-only dependencies (`npm ci --only=production`)
- Non-root user (`nodejs:1001`)
- Health check configured
- Exposes port 8080
File: src/middleware/errorHandler.js
- Custom `AppError` class with status codes and operational flag
- `asyncHandler` wrapper catches rejected promises and forwards them to the error middleware
- Structured error logging with different severity for 4xx vs 5xx
- Stack trace only exposed in development mode
File: src/routes/boards.js, lines 36-73
Severity: CRASH -- this route will throw a runtime error
The "list all boards for current user" route uses a database API that does not exist on the Supabase client:
```javascript
const ownedBoards = await supabaseConfig.db
  .collection('boards')
  .where('ownerId', '==', userId)
  .orderBy('updatedAt', 'desc')
  .get();
```

`supabaseConfig` has no `.db` property, no `.collection()` method, and no `.where()` chaining. This syntax belongs to a different database SDK (it is Firestore-style).
What to do: Rewrite this route to use the Supabase query builder pattern that is already used everywhere else in the codebase. Look at how getBoard(), getObjects(), and other methods in src/config/supabase.js work -- they all use this.adminClient.from('table').select('*').eq('column', value). You likely need to:
- Add a `getBoardsByOwner(userId)` method to `SupabaseConfig` in `src/config/supabase.js`
- Add a `getBoardsByCollaborator(userId)` method to the same file
- Replace the Firestore-style code in the route with calls to these new methods
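A minimal sketch of the first method, following the `from().select().eq()` query-builder pattern the review points to in `src/config/supabase.js`. The `client` parameter stands in for `this.adminClient`, and the column names `owner_id` / `updated_at` are assumptions based on typical Supabase snake_case schemas, not confirmed from the actual migration:

```javascript
// Sketch: list all boards owned by a user, newest first.
// `client` is a supabase-js client (e.g. this.adminClient).
async function getBoardsByOwner(client, userId) {
  const { data, error } = await client
    .from('boards')
    .select('*')
    .eq('owner_id', userId)                     // assumed column name
    .order('updated_at', { ascending: false }); // assumed column name
  if (error) throw error;
  return data;
}
```

`getBoardsByCollaborator` would look the same against `board_collaborators`, likely with a join back to `boards` via Supabase's embedded-resource select.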
File: docker-compose.yml (root)
Severity: BROKEN -- local Docker development will not work
The compose file sets environment variables that the backend code does not read. The backend expects SUPABASE_URL, SUPABASE_ANON_KEY, and SUPABASE_SERVICE_ROLE_KEY (see src/config/supabase.js lines 13-15 and .env.example).
What to do: Update the environment block in docker-compose.yml for both services to match what .env.example defines. The backend needs Supabase vars; the frontend needs VITE_SUPABASE_URL and VITE_SUPABASE_ANON_KEY.
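A sketch of what the corrected environment blocks might look like. Service names and the passthrough-from-host style are assumptions based on a typical Vite + Node compose file, not the actual docker-compose.yml:

```yaml
services:
  backend:
    environment:
      - SUPABASE_URL=${SUPABASE_URL}
      - SUPABASE_ANON_KEY=${SUPABASE_ANON_KEY}
      - SUPABASE_SERVICE_ROLE_KEY=${SUPABASE_SERVICE_ROLE_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
  frontend:
    environment:
      - VITE_SUPABASE_URL=${VITE_SUPABASE_URL}
      - VITE_SUPABASE_ANON_KEY=${VITE_SUPABASE_ANON_KEY}
```

With this style, values come from the host shell or a root `.env` file, so secrets never need to be committed to the compose file itself.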
File: infrastructure/render/render.yaml
Severity: DEPLOYMENT TRAP -- deploying with the wrong config file
There are two render.yaml files:
render.yaml(root) -- correct, uses Supabase env varsinfrastructure/render/render.yaml-- stale, references wrong env vars and provisions a Redis service the code no longer uses
What to do: Delete infrastructure/render/render.yaml or update it to match the root render.yaml. Render auto-detects render.yaml at the repo root, so the root file is the one that matters.
File: src/services/aiAgent.js
Impact: MVP spec requires "Sticky notes with editable text" and the tool schema lists createStickyNote(text, x, y, color) as a minimum tool.
The AI agent defines create_rectangle, create_circle, create_text, and create_arrow -- but no create_sticky_note. Sticky notes are a core MVP deliverable.
What to do: Add a create_sticky_note tool to BOARD_TOOLS with parameters text, x, y, color, and width/height. In executeToolCall, map it to a createShape('sticky_note', ...) call. The frontend will need a corresponding renderer for the sticky_note object type.
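A sketch of what the new `BOARD_TOOLS` entry could look like, using the standard Anthropic tool-definition shape (`name`, `description`, `input_schema`). The defaults and the color hint are illustrative assumptions, not values from the codebase:

```javascript
// Sketch: tool definition for BOARD_TOOLS. The executeToolCall switch
// would map 'create_sticky_note' to createShape('sticky_note', input).
const createStickyNoteTool = {
  name: 'create_sticky_note',
  description: 'Create a sticky note with editable text on the board',
  input_schema: {
    type: 'object',
    properties: {
      text:   { type: 'string', description: 'Note text content' },
      x:      { type: 'number', description: 'X position on the board' },
      y:      { type: 'number', description: 'Y position on the board' },
      color:  { type: 'string', description: 'Note color, e.g. "#FFEB3B" (assumed default)' },
      width:  { type: 'number', description: 'Width in pixels' },
      height: { type: 'number', description: 'Height in pixels' },
    },
    required: ['text', 'x', 'y'],
  },
};
```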
File: src/services/aiAgent.js, processToolCalls function (line 176)
Impact: Complex commands like "Create a SWOT analysis" will only partially execute.
When the Anthropic API returns a response with stop_reason: "tool_use", the caller is expected to send tool results back in a follow-up message so the model can continue. The current code processes the first batch of tool calls but never sends results back. This means:
- Single-step commands (create one rectangle) work fine
- Multi-step commands (create a 4-quadrant layout) will only create the first batch of objects
What to do: After processing tool calls, check response.stop_reason. If it is "tool_use", build a new messages array that includes the assistant response and tool results, then call anthropic.messages.create(...) again. Repeat until stop_reason is "end_turn". Be careful to set a maximum iteration count (e.g., 10) to prevent infinite loops.
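The loop described above could be sketched like this. `anthropic` is an Anthropic SDK client (anything exposing `messages.create`), `executeToolCall` is the existing per-tool executor; the function signature and the result-serialization choice are assumptions about how it would slot into `aiAgent.js`:

```javascript
const MAX_ITERATIONS = 10; // guard against runaway tool loops

async function runToolLoop(anthropic, baseRequest, executeToolCall) {
  const messages = [...baseRequest.messages];
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const response = await anthropic.messages.create({ ...baseRequest, messages });
    if (response.stop_reason !== 'tool_use') return response; // e.g. 'end_turn'

    // Execute every tool_use block and collect tool_result blocks
    const toolUses = response.content.filter((b) => b.type === 'tool_use');
    const results = [];
    for (const block of toolUses) {
      const result = await executeToolCall(block.name, block.input);
      results.push({
        type: 'tool_result',
        tool_use_id: block.id,
        content: JSON.stringify(result),
      });
    }

    // Feed the assistant turn plus tool results back for the next step
    messages.push({ role: 'assistant', content: response.content });
    messages.push({ role: 'user', content: results });
  }
  throw new Error('Tool loop exceeded maximum iterations');
}
```

Because the client is injected, the loop is also easy to unit-test with a stubbed `messages.create`.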
File: src/utils/logger.js
Impact: On Railway and Render, log files are lost on every deploy and may cause write errors.
The logger only switches to stdout-only when K_SERVICE is set (Google Cloud Run). On Railway or Render, it will try to write to logs/error.log, logs/combined.log, etc.
What to do: Expand the production check at line 57 to include Railway and Render:
```javascript
if (process.env.NODE_ENV === 'production') {
  // Use stdout-only on any cloud platform
  logger.clear();
  logger.add(new winston.transports.Console({ ... }));
}
```

File: package.json (dependency), all route files
Impact: No input validation on request bodies beyond basic truthy checks.
joi is listed as a production dependency but is never imported in any file. Route handlers like POST /api/auth/register only check if (!email || !password) without validating email format, password strength, or field types.
What to do: Either add Joi validation schemas to the routes (recommended) or remove joi from package.json to reduce bundle size. For MVP, at minimum validate email format and password length on registration.
File: src/config/redis.js
Impact: Acceptable for single-instance MVP deployment; will not work for horizontal scaling.
The InMemoryCache class replaces Redis. This means:
- Cache is lost on restart/redeploy
- AI command locking only works within a single process
- If you ever scale to multiple instances, concurrent AI commands on the same board could conflict
What to do: For MVP on a single Render/Railway instance, this is fine. Document the limitation. If you scale later, add Redis (Render offers a free Redis plan, Railway has Redis as a plugin).
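To make the limitation concrete, here is a minimal sketch of the single-process locking pattern an in-memory cache gives you. The class and method names are assumptions for illustration, not necessarily what `src/config/redis.js` exports:

```javascript
// Sketch: TTL-based board lock held in process memory.
class InMemoryLocks {
  constructor() {
    this.locks = new Map(); // boardId -> expiry timestamp (ms)
  }

  // Returns true if the lock was acquired, false if another holder has it.
  acquireLock(boardId, ttlMs = 30_000) {
    const now = Date.now();
    const expiry = this.locks.get(boardId);
    if (expiry && expiry > now) return false;
    this.locks.set(boardId, now + ttlMs);
    return true;
  }

  releaseLock(boardId) {
    this.locks.delete(boardId);
  }
}
```

Because the `Map` lives inside one process, a second instance would have its own empty `Map` and both could "acquire" the same board lock simultaneously; that is exactly the horizontal-scaling conflict described above.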
Impact: jest and supertest are in devDependencies and a test script exists, but there are zero test files.
What to do: At minimum, add integration tests for the health endpoint, auth routes, and board CRUD. The supertest dependency is already available. Even a small test suite demonstrates code quality.
File: src/services/aiAgent.js
Impact: The spec's "Tool Schema (Minimum)" section lists createFrame(title, x, y, width, height) and createConnector(fromId, toId, style). These are missing.
What to do: Add both tools following the same pattern as existing tools. Frames and connectors are explicitly in the minimum tool schema in the spec.
The root render.yaml is correctly configured for Docker deployment with Supabase. To deploy:
- Fix BUG 1 (boards route) -- the app will crash on this route otherwise
- Fix ISSUE 3 (logger) -- or the app may error on missing log directory
- Set these env vars in the Render dashboard:
  - `SUPABASE_URL`
  - `SUPABASE_ANON_KEY`
  - `SUPABASE_SERVICE_ROLE_KEY`
  - `ANTHROPIC_API_KEY`
  - `ALLOWED_ORIGINS` (set to your deployed frontend URL)
- Connect your GitHub repo and deploy
WebSocket note: Render free tier supports WebSockets. The Socket.IO config in server.js (25s ping interval) will keep connections alive. Render has a 15-minute idle timeout, but active WebSocket pings prevent that.
The infrastructure/railway/railway.toml needs the dockerfilePath adjusted to backend/Dockerfile (relative to repo root), or configure Railway to use the backend/ directory as the root. Env vars must be set in the Railway dashboard.
Currently broken due to BUG 2. After fixing the environment variables, Docker Compose provides a good local dev experience with hot reload via volume mounts.
- BUG 1: Rewrite `GET /api/boards` to use Supabase queries
- ISSUE 3: Fix logger for non-GCP cloud platforms
- BUG 3: Remove stale render config to avoid deployment confusion

- ISSUE 1: Add `createStickyNote` AI tool (core MVP requirement)
- ISSUE 2: Add multi-turn AI tool use for complex commands
- ISSUE 7: Add `createFrame` and `createConnector` AI tools

- BUG 2: Fix `docker-compose.yml` for local dev
- ISSUE 4: Add Joi validation or remove unused dependency
- ISSUE 6: Add basic test coverage
- ISSUE 5: Document in-memory cache limitation
Strengths:
- Clean modular structure (config, middleware, routes, services, utils, websocket)
- Consistent error handling with `asyncHandler` and `AppError`
- Role-based access control is well-implemented
- Supabase RLS policies add a second layer of security
- Dockerfile follows production best practices
- Winston logger is well-configured (aside from the platform detection issue)
Areas for improvement:
- Missing JSDoc/documentation on exported functions
- No input validation middleware despite having `joi` installed
- Single-line comments used in several files instead of structured documentation
- No test coverage