A comprehensive, interview-ready reference for SDE2 frontend system design interviews.
- The RADIO Framework
- How to Structure Your Interview Answer
- Design: Google Docs
- Design: Netflix
- Design: YouTube
- Design: Instagram Feed
- Design: Figma
- Design: CodeSandbox
- Design: Autocomplete
- Interview Cheat Sheet
RADIO is a structured approach for frontend system design interviews. Use it as your skeleton — then fill in the depth.
| Step | Objective | Duration |
|---|---|---|
| Requirements | Understand scope, clarify ambiguities | ~15% |
| Architecture | Key components and how they relate | ~20% |
| Data Model | Core entities, fields, ownership | ~10% |
| Interface (API) | Contracts between components | ~15% |
| Optimizations | Performance, UX, scalability, tradeoffs | ~40% |
⭐ SDE2 Tip: Interviewers mostly evaluate you on Performance, Scalability, and Tradeoffs. Spend at least 40% of your time on Optimizations.
Goal: Understand the problem and narrow scope before designing anything.
Functional requirements:
- What are the core use cases? (What must the product do?)
- What features are out of scope or nice-to-have?
- Who are the primary users?
Non-functional requirements:
- What are the performance expectations? (e.g., page load < 2s, real-time < 100ms)
- What scale should we design for? (users, items, concurrent connections)
- Is offline support needed?
- What devices/platforms must be supported? (desktop, mobile, tablet)
- Are there accessibility requirements?
- Are there internationalization (i18n) / localization (l10n) requirements?
| Type | Definition | Examples |
|---|---|---|
| Functional | Core behaviors the product must have | User can create/edit a document, watch a video |
| Non-functional | Quality attributes that improve the product | <100ms latency, offline support, 60fps scrolling |
💡 Interview tip: List your assumptions out loud, write them down, and get the interviewer to confirm. This avoids wasted effort and shows clear thinking.
Goal: Draw the key client-side components and explain how they interact.
| Component | Responsibility |
|---|---|
| Server | Treat as a black box; exposes HTTP/WebSocket APIs |
| View / UI | What the user sees; contains client-only state |
| Controller | Handles user interactions; bridges store and view |
| Client Store | App-wide state; holds server data + local state |
| Network Layer | Manages HTTP requests, WebSocket connections |
| Worker | Off-main-thread computation (bundling, indexing) |
| Cache Layer | In-memory or Service Worker cache |
- Use rectangles for components, arrows for data flow
- Show direction of data (e.g., user input → controller → store → view)
- Nest subcomponents inside parent components
- Label every arrow with what is being passed
- Mention where state lives (client vs server vs shared)
💡 Interview tip: Draw the diagram first, then walk through it verbally. Say what each component does and why it exists.
Goal: Define entities, their fields, and which component owns them.
| Type | Persisted? | Examples |
|---|---|---|
| Server-originated | Yes (in DB) | User profile, posts, videos |
| Client input | Eventually (sent to server) | Form fields, new post draft |
| Ephemeral state | No | Active tab, expanded sections, loading flags |
💡 Interview tip: For each entity, call out: source (server vs client), owner (which component), and lifetime (session vs persistent).
Goal: Define the contract between components — what data flows in and out.
Method: GET | POST | PUT | DELETE
Path: /resource/:id
Description: What this endpoint does
Parameters: { field: type, ... }
Response: { field: type, ... }
Function: functionName(params): ReturnType
Description: What this function does
Parameters: { field: type }
Returns: { field: type }
Event: event-name
Direction: client → server | server → client | bidirectional
Payload: { field: type }
💡 Interview tip: For UI component design questions, treat "Interface" as the component's props API — inputs, outputs, callbacks, and variants.
Goal: Demonstrate senior-level thinking. Pick 3–5 areas most relevant to the product.
- Code splitting: Only load JS needed for the current view (`React.lazy`, dynamic imports)
- Virtualized rendering: Render only visible items (react-window, react-virtual)
- Image optimization: WebP/AVIF, responsive `srcset`, lazy loading via `loading="lazy"`
- Prefetching / preloading: Anticipate the next user action; preload assets early
- Memoization: `React.memo`, `useMemo`, `useCallback` to prevent unnecessary re-renders
- Debouncing / throttling: Limit expensive operations (search input, scroll handlers)
- Web Workers: Move CPU-intensive work off the main thread
- Bundle optimization: Tree shaking, minification, compression (Brotli/gzip)
- Pagination / infinite scroll: Cursor-based preferred over offset for consistency
- Caching: HTTP cache headers, Service Worker cache, in-memory cache
- Optimistic updates: Update UI immediately, rollback on error
- Request deduplication: Don't fire duplicate in-flight requests
- AbortController: Cancel stale requests when input changes
- Batching: Combine multiple mutations into one request
- CDN: Serve static assets and media from edge locations
- WebSockets: Full-duplex; best for collaborative editing, chat, live feeds
- Server-Sent Events (SSE): Server → client only; simpler than WebSockets; good for notifications/feeds
- Long Polling: Fallback for environments without WebSocket support
- Reconnection logic: Exponential backoff, heartbeat pings
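The exponential-backoff delay itself is a one-liner worth having ready. A sketch assuming a 500ms base and a 30s cap (both numbers illustrative; production code usually adds random jitter to avoid thundering herds):

```typescript
const BASE_DELAY_MS = 500;
const MAX_DELAY_MS = 30_000;

// attempt 0 → 500ms, 1 → 1s, 2 → 2s, ... capped at 30s
function backoffDelay(attempt: number): number {
  return Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}
```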
| Approach | Used By | Notes |
|---|---|---|
| Operational Transform (OT) | Google Docs | Requires central server to transform ops |
| CRDT | Figma, Notion | Decentralized; better for offline; eventually consistent |
- Service Worker: Cache assets and API responses; intercept fetch events
- IndexedDB: Store structured data locally (documents, drafts, queue)
- Background sync: Queue mutations offline; flush when reconnected
- Conflict handling on resync: Last-write-wins vs. merge strategy
| Strategy | When to Use |
|---|---|
| CSR | Highly interactive apps, dashboards |
| SSR | SEO-critical pages, fast first paint |
| SSG | Mostly static content (blogs, landing pages) |
| Streaming SSR | Large pages where you want progressive rendering |
- Semantic HTML (`<button>`, `<nav>`, `<main>`)
- ARIA roles and labels for dynamic content
- Keyboard navigation (tab order, focus management)
- Screen reader announcements for live regions (`aria-live`)
- Sufficient color contrast (WCAG AA: 4.5:1)
- XSS: Sanitize user input; avoid `dangerouslySetInnerHTML`
- CSRF: Use SameSite cookies and CSRF tokens
- Content Security Policy (CSP): Restrict script/style origins
- iframe sandboxing: `sandbox` attribute to limit capabilities
- Skeleton screens over spinners for content-heavy pages
- Optimistic UI for interactions (like, comment, follow)
- Error boundaries + graceful fallbacks
- Empty states, loading states, error states for every data-dependent view
Speak in this order:
"First, I'll clarify requirements..."
→ Define functional + non-functional, confirm with interviewer
"Then I'll walk through the high-level architecture..."
→ Draw diagram, explain each component
"Here are the data models..."
→ Entities, fields, ownership
"Let me define the APIs..."
→ Server-client and client-client interfaces
"Finally, I'll dive into optimizations — that's where the real complexity lives..."
→ Pick 3–5 areas relevant to the product
Core challenge: Real-time collaborative text editing with conflict resolution.
Functional:
- Create, open, and edit documents
- Real-time multi-user collaboration
- Auto-save
- Basic formatting (bold, italic, headings, lists)
- User presence indicators (cursors)
Out of scope (mention briefly): Comments, version history, offline editing
Non-functional:
- Edit latency < 100ms
- Support 100s of concurrent editors per doc
- No data loss on disconnect
Platforms: Desktop web (primary), mobile web
Browser
│
├── Editor UI
│ ├── Toolbar (formatting controls)
│ ├── Document Canvas (contenteditable / custom renderer)
│ └── Presence Layer (remote cursors + avatars)
│
├── Client Store
│ ├── Document state (current content + version)
│ ├── Pending operations queue
│ └── Remote user cursors
│
├── Collaboration Engine
│ ├── OT / CRDT engine
│ ├── Operation transformer
│ └── Patch generator
│
├── Network Layer
│ ├── WebSocket client (real-time ops)
│ └── HTTP client (load/save doc)
│
└── Server (black box)
// Server-originated
Document {
id: string
title: string
content: ContentNode[] // rich-text tree
version: number
updatedAt: timestamp
}
ContentNode {
type: "paragraph" | "heading" | "list-item"
text: string
marks: Mark[] // bold, italic, etc.
}
// Collaboration
Operation {
type: "insert" | "delete" | "retain" | "format"
position: number
text?: string
length?: number
userId: string
version: number // vector clock / doc version
}
// Ephemeral (client-only)
UserPresence {
userId: string
displayName: string
color: string
cursorPosition: number
selectionRange: [number, number]
}
LocalState {
pendingOps: Operation[]
isSyncing: boolean
lastAckedVersion: number
}

Load document
GET /docs/:id
Response: { id, title, content, version }
Save document (autosave)
POST /docs/:id/snapshot
Body: { content, version }
WebSocket: Send operation
{
"type": "operation",
"op": { "type": "insert", "position": 10, "text": "hello" },
"version": 100
}

WebSocket: Receive remote operation
{
"type": "remote-op",
"op": { "type": "insert", "position": 10, "text": "world" },
"userId": "u2",
"version": 101
}

WebSocket: Presence update
{
"type": "presence",
"userId": "u2",
"cursorPosition": 50
}

1. Real-time transport Use WebSockets (not polling) for bidirectional, low-latency operation exchange. Implement reconnection with exponential backoff and op replayability.
2. Conflict resolution: OT vs CRDT
| OT | CRDT | |
|---|---|---|
| Used by | Google Docs | Figma, Notion |
| Requires central server | Yes | No |
| Offline support | Harder | Natural |
| Complexity | High | High (different kind) |
For an interview, describe OT: "When two users insert at the same position, we transform one operation against the other so both edits survive."
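That transform rule can be shown concretely. A toy sketch covering only the insert-vs-insert case (real OT also handles deletes, formatting, and same-position tie-breaking; all names here are illustrative):

```typescript
interface Insert {
  position: number;
  text: string;
}

// Transform a local insert against a concurrent remote insert so both
// edits survive: if the remote insert landed at or before our position,
// our target index shifts right by the remote text's length.
function transformInsert(local: Insert, remote: Insert): Insert {
  if (remote.position <= local.position) {
    return { ...local, position: local.position + remote.text.length };
  }
  return local;
}
```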
3. Operation queue + acknowledgment Buffer operations locally, apply optimistically, and only advance version when server ACKs. On conflict: rollback and re-apply transformed ops.
4. Virtualized rendering Don't render a 10,000-line document as 10,000 DOM nodes. Render only the visible viewport; recycle nodes as user scrolls.
5. Batched operations Batch keystrokes every 50–100ms before sending over WebSocket to reduce network chatter.
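A minimal batching sketch, assuming a 50ms window and a flush callback standing in for the WebSocket send (all names illustrative):

```typescript
type Op = { type: string; position: number; text?: string };

// Buffers ops and flushes them as one message after a short window.
class OpBatcher {
  private buffer: Op[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(private send: (ops: Op[]) => void, private windowMs = 50) {}

  push(op: Op): void {
    this.buffer.push(op);
    if (!this.timer) {
      this.timer = setTimeout(() => this.flush(), this.windowMs);
    }
  }

  flush(): void {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (this.buffer.length === 0) return;
    this.send(this.buffer);
    this.buffer = [];
  }
}
```

In practice you would also flush immediately on blur/`beforeunload` so no buffered keystrokes are lost.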
6. Offline support Store pending ops in IndexedDB. On reconnect, replay the local queue. Use version vectors to detect gaps and request missing ops from server.
7. Presence & awareness Throttle cursor position broadcasts (every 100ms max). Show colored cursors and name labels for each active user.
8. Autosave strategy
Use a combination of: (a) periodic snapshots every 30s, (b) save on idle (no edits for 2s), and (c) save on `beforeunload`.
Core challenge: High-performance video streaming + global scale content delivery.
Functional:
- Browse home feed (rows of content)
- Search
- Watch video with playback controls
- Resume playback where user left off
- Subtitle / audio track selection
- Recommendations
Non-functional:
- Fast initial page load (< 2s TTI)
- Smooth video playback, minimal buffering
- Adaptive quality based on network
- Global audience (CDN critical)
Platforms: Desktop web, Smart TV, mobile web
Browser
│
├── Home Page
│ ├── Hero Banner
│ ├── Content Rows (lazy loaded)
│ └── Category Nav
│
├── Video Player
│ ├── Video element (MSE)
│ ├── ABR Controller (quality switching)
│ ├── Subtitle Renderer
│ ├── Playback Controls
│ └── Buffer Monitor
│
├── Client Store
│ ├── User profile + preferences
│ ├── Watch history + resume positions
│ └── Cached metadata
│
├── API Layer (HTTP)
│
└── CDN (video segments + images)
Movie {
id: string
title: string
description: string
posterUrl: string
backdropUrl: string
duration: number // seconds
genres: string[]
rating: string // PG, R, etc.
}
PlaybackState {
userId: string
contentId: string
progress: number // seconds watched
updatedAt: timestamp
}
ContentRow {
id: string
title: string // "Trending Now", "Continue Watching"
contents: Movie[]
cursor: string // pagination
}
StreamManifest {
contentId: string
hlsUrl: string // HLS manifest URL
dashUrl: string // DASH manifest URL
subtitleTracks: SubtitleTrack[]
audioTracks: AudioTrack[]
}

Home feed
GET /home
Response: { rows: ContentRow[], userProfile: User }
Fetch row (paginated)
GET /rows/:rowId?cursor=abc
Response: { contents: Movie[], nextCursor: string }
Video stream manifest
GET /stream/:contentId
Response: { hlsUrl, dashUrl, subtitleTracks, audioTracks }
Save/resume playback progress
POST /playback/:contentId/progress
Body: { progress: 1200 }
Search
GET /search?q=stranger+things
Response: { results: Movie[] }
1. Adaptive Bitrate Streaming (ABR) Netflix uses HLS / MPEG-DASH. The player downloads a manifest listing segment URLs at multiple bitrates. The ABR algorithm monitors buffer health and bandwidth to switch quality automatically:
240p → 480p → 720p → 1080p → 4K
Use the Media Source Extensions (MSE) API to feed segments to the `<video>` element programmatically.
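The quality-switching decision can be sketched as a pure function: pick the highest rendition that fits within a safety fraction of the measured bandwidth. Real ABR algorithms also weigh buffer health and switching cost; the names and numbers here are illustrative:

```typescript
interface Rendition {
  label: string;
  bitrateKbps: number;
}

// Choose the highest bitrate that fits within `safety` * bandwidth,
// falling back to the lowest rendition when nothing fits.
// Assumes the ladder is non-empty.
function pickRendition(
  renditions: Rendition[],
  bandwidthKbps: number,
  safety = 0.8
): Rendition {
  const sorted = [...renditions].sort((a, b) => a.bitrateKbps - b.bitrateKbps);
  let choice = sorted[0];
  for (const r of sorted) {
    if (r.bitrateKbps <= bandwidthKbps * safety) choice = r;
  }
  return choice;
}
```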
2. CDN architecture Video segments are served from the nearest edge server. Netflix uses its own CDN (Open Connect Appliances). Design implication: manifest URLs point to CDN endpoints, not origin.
3. Code splitting The video player is heavy. Don't bundle it with the home page:
const Player = React.lazy(() => import('./Player'));

4. Lazy loading rows Each content row is a separate API call triggered by IntersectionObserver when it enters the viewport.
5. Image optimization
- Serve posters as WebP/AVIF
- Use responsive `srcset` for different screen sizes
- Lazy load images below the fold with `loading="lazy"`
6. Hover prefetch On desktop, when user hovers a tile for 300ms, prefetch the title's metadata and begin buffering the first few video segments.
7. Resume playback
Store progress both locally (localStorage/IndexedDB) and server-side. On load, merge: use server value if > local (user switched devices).
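The merge rule above is a small pure function: the server value wins when it is ahead (the user watched further on another device). A sketch with an illustrative name:

```typescript
// Merge local and server resume positions; missing values default sensibly.
function resumePosition(
  localSeconds: number | null,
  serverSeconds: number | null
): number {
  if (localSeconds == null) return serverSeconds ?? 0;
  if (serverSeconds == null) return localSeconds;
  // Server wins only when it is ahead of the local position.
  return serverSeconds > localSeconds ? serverSeconds : localSeconds;
}
```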
8. Virtualized content rows A row can have 50+ titles. Render only visible tiles + a small overscan buffer. Use a horizontal windowed list.
9. SSR for home page Server-render the above-the-fold hero content for fast perceived load. Hydrate interactivity client-side.
Core challenge: Video streaming + recommendations + comment scalability.
Functional: Browse feed, search, watch video, like/comment/subscribe, recommendations
Non-functional: Fast page load, smooth playback, adaptive quality, scalable comment rendering
Platforms: Desktop, mobile web, native apps (web scope only)
Browser
│
├── Home Feed
│ ├── Video Card Grid
│ └── Sidebar (subscriptions)
│
├── Watch Page
│ ├── Video Player
│ ├── Video Metadata (title, likes, channel)
│ ├── Comments Section
│ └── Related Videos (sidebar)
│
├── Client Store
│ ├── User session + subscriptions
│ ├── Watch history
│ └── Video metadata cache
│
└── Network Layer + CDN
Video {
id, title, description, thumbnailUrl,
duration, viewCount, likeCount,
channelId, channelName, channelAvatarUrl,
uploadedAt, tags: string[]
}
Comment {
id, videoId, authorId, authorName,
text, likeCount, replyCount,
createdAt, parentId? // null if top-level
}
PlaybackState {
videoId, progress, quality, playbackRate
}

GET /feed?cursor=abc → { videos: Video[], nextCursor }
GET /search?q=react+hooks → { videos: Video[] }
GET /stream/:videoId → { hlsUrl, dashUrl }
GET /videos/:id/comments?cursor=abc → { comments: Comment[], nextCursor }
POST /videos/:id/like → { likeCount }
- ABR Streaming — same as Netflix (HLS/DASH with MSE)
- Thumbnail lazy load — IntersectionObserver, blur-up placeholder
- Chapter prefetch — preload next video segment when current is 80% watched
- Comment virtualization — top-level comments only initially; load replies on demand
- Stale-while-revalidate — serve cached feed, refresh in background
- View count optimistic update — increment locally; reconcile with server
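Stale-while-revalidate is easy to sketch: serve the cached value immediately and refresh it in the background. A minimal in-memory version (illustrative; the Service Worker Cache API version follows the same shape):

```typescript
const cache = new Map<string, unknown>();

// Return cached data instantly when present; always kick off a refresh.
async function swr<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const revalidate = fetcher().then((fresh) => {
    cache.set(key, fresh);
    return fresh;
  });
  if (cache.has(key)) {
    return cache.get(key) as T; // stale now; revalidate continues in background
  }
  return revalidate; // cache miss: wait for the network
}
```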
Core challenge: Infinite scroll feed + image performance + real-time interactions.
Functional: Scroll feed, like/comment, view stories, create post, follow users
Non-functional: 60fps scroll, fast image load, real-time like counts, optimistic interactions
Browser
│
├── Stories Bar (horizontal scroll)
│
├── Feed
│ ├── Post List (virtualized)
│ └── Post Component
│ ├── Image / Carousel
│ ├── Action Bar (like, comment, share, save)
│ └── Caption + Comments Preview
│
├── Client Store
│ ├── Feed cache (cursor-paginated)
│ ├── User interactions (likes, saves)
│ └── Stories state
│
└── API Layer
Post {
id, authorId, authorName, authorAvatarUrl,
mediaUrls: string[], // images or video
mediaType: "image" | "video" | "carousel",
caption, likeCount, commentCount,
isLikedByUser: boolean,
isSavedByUser: boolean,
createdAt
}
Story {
id, authorId, mediaUrl, expiresAt, seen: boolean
}
FeedPage {
posts: Post[]
nextCursor: string
}

GET /feed?cursor=abc → FeedPage
POST /posts/:id/like → { likeCount }
DELETE /posts/:id/like → { likeCount }
GET /posts/:id/comments?cursor=abc → { comments, nextCursor }
POST /posts → Post (multipart/form-data)
- Cursor-based pagination — stable ordering unlike offset (avoid duplicate/missing posts)
- Virtualized list — render ~5 posts around viewport; destroy off-screen DOM nodes
- Prefetch next batch — trigger next page fetch when user reaches 70% of current batch
- Optimistic like — toggle heart immediately; rollback on error
- Image placeholders — blur hash or low-quality placeholder while full image loads
- Carousel preload — preload next image in carousel on swipe
- WebP/AVIF — 30–50% smaller than JPEG at same quality
- Stories preload — preload first frame of each unseen story when stories bar is visible
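The optimistic-like flow reduces to two pure state transitions: apply immediately, roll back on failure. A sketch with illustrative names (the `send` callback stands in for the POST/DELETE call):

```typescript
interface PostLike {
  isLiked: boolean;
  likeCount: number;
}

// Pure toggle applied to the UI immediately.
function toggleLike(p: PostLike): PostLike {
  return {
    isLiked: !p.isLiked,
    likeCount: p.likeCount + (p.isLiked ? -1 : 1),
  };
}

// Commit the optimistic state on success; restore the original on error.
async function likeWithRollback(
  p: PostLike,
  send: () => Promise<void>
): Promise<PostLike> {
  const optimistic = toggleLike(p);
  try {
    await send();
    return optimistic;
  } catch {
    return p; // rollback
  }
}
```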
Core challenge: Real-time collaborative canvas with thousands of objects.
Functional: Create/edit shapes and text, multi-user real-time editing, cursor presence, layers panel, component library
Non-functional: Low-latency updates (<50ms), support hundreds of concurrent editors, smooth 60fps canvas interaction
Browser
│
├── Canvas (WebGL renderer)
│ └── Scene Graph (quadtree for hit testing)
│
├── Tool Layer
│ ├── Select / Move
│ ├── Shape tools
│ └── Text tool
│
├── UI Chrome
│ ├── Toolbar
│ ├── Layers Panel
│ └── Properties Panel
│
├── Collaboration Engine
│ ├── CRDT / OT engine
│ └── Presence manager
│
├── Client Store
│ ├── Scene state (shapes, styles)
│ └── Selection state
│
└── WebSocket Layer
Shape {
id: string
type: "rect" | "ellipse" | "text" | "frame" | "component"
x, y, width, height: number
rotation: number
fill: Paint
stroke: Paint
opacity: number
children?: string[] // child shape IDs (for frames/groups)
parentId?: string
version: number
}
TextShape extends Shape {
content: string
fontSize, fontFamily, fontWeight: ...
}
UserPresence {
userId, displayName, color: string
cursorX, cursorY: number
selectedShapeIds: string[]
}
Operation {
type: "move" | "resize" | "create" | "delete" | "style"
shapeId: string
patch: Partial<Shape>
userId: string
timestamp: number
}

GET /files/:id → { shapes: Shape[], version }
WebSocket connect /files/:id/collab
WS send: { type: "op", op: Operation }
WS recv: { type: "op", op: Operation, userId }
WS recv: { type: "presence", userId, cursor, selection }
WS recv: { type: "ack", version }
1. WebGL rendering (not DOM) DOM cannot handle thousands of shapes at 60fps. Use WebGL (or Canvas 2D) with a scene graph. Only re-render dirty regions on change.
2. Quadtree for hit testing
Efficiently find which shape is under the mouse cursor without iterating all shapes: O(log n) vs O(n).
3. Viewport culling Only process and render shapes visible in the current viewport. Skip off-screen shapes entirely.
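Culling is a plain bounding-box intersection test per shape (in practice you would query the quadtree instead of filtering a flat list, but the predicate is the same). A sketch with illustrative names:

```typescript
interface Rect {
  x: number;
  y: number;
  width: number;
  height: number;
}

// Standard axis-aligned bounding-box overlap test.
function intersects(a: Rect, b: Rect): boolean {
  return (
    a.x < b.x + b.width &&
    a.x + a.width > b.x &&
    a.y < b.y + b.height &&
    a.y + a.height > b.y
  );
}

// Keep only shapes whose bounds overlap the current viewport.
function cullShapes(shapes: Rect[], viewport: Rect): Rect[] {
  return shapes.filter((s) => intersects(s, viewport));
}
```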
4. CRDT for conflict resolution Each shape property is a CRDT register (last-write-wins with logical clock). Concurrent edits to different properties merge automatically.
5. Delta operations only Never send full shape state over WebSocket. Send only the changed properties:
{ "type": "move", "shapeId": "s1", "patch": { "x": 100, "y": 200 } }

6. Multiplayer cursor throttle Broadcast cursor positions at 30fps max. Interpolate remote cursors client-side for smooth appearance.
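Remote-cursor interpolation is a linear blend between the last two received positions, advanced each render frame. A sketch (illustrative names):

```typescript
interface Cursor {
  x: number;
  y: number;
}

function lerp(a: number, b: number, t: number): number {
  return a + (b - a) * t;
}

// t in [0, 1] is how far we are between the previous and latest updates;
// values outside that range are clamped.
function interpolateCursor(prev: Cursor, next: Cursor, t: number): Cursor {
  const clamped = Math.min(1, Math.max(0, t));
  return { x: lerp(prev.x, next.x, clamped), y: lerp(prev.y, next.y, clamped) };
}
```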
7. Optimistic local apply Apply operations to local state immediately. If the server rejects (conflict), reconcile and re-apply.
8. Component instances Shape can reference a "master component". Only the master's data is stored; instances store overrides only (saves memory + bandwidth).
Core challenge: In-browser code editor + bundler + secure preview runtime.
Functional: Edit files, preview running app, live reload, install npm packages, share projects
Non-functional: Fast bundling, no server round-trip for preview, secure sandboxed execution
Browser
│
├── Editor Pane
│ ├── Monaco Editor (VSCode engine)
│ └── File Explorer
│
├── Preview Pane
│ └── Sandboxed iframe
│
├── Build Worker (Web Worker)
│ ├── Bundler (esbuild/rollup)
│ └── Module resolver
│
├── File System (in-memory + IndexedDB)
│
├── Package Manager
│ └── npm package fetcher (via CDN like esm.sh)
│
└── API Layer
├── Project CRUD
└── Package resolution
File {
path: string // "/src/App.tsx"
content: string
language: string
isDirty: boolean
}
Project {
id, name: string
files: File[]
dependencies: Record<string, string> // { "react": "^18.0.0" }
template: string // "react", "vue", "vanilla"
}
BuildResult {
bundleJs: string
bundleCss?: string
errors: BuildError[]
warnings: BuildWarning[]
}

GET /projects/:id → Project
POST /projects → Project (create)
PUT /projects/:id/files → { files: File[] }
// Packages resolved via CDN — no custom API needed
// e.g. https://esm.sh/react@18 returns ESM bundle
1. Web Worker for bundling Move all bundling off the main thread. Post file system changes to worker; receive bundle output. Keeps editor at 60fps while bundling.
2. Incremental bundling Don't re-bundle the entire project on every keystroke. Detect changed modules and re-bundle only the affected subgraph.
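Finding the affected subgraph is a reverse-dependency walk. A sketch assuming the bundler can expose a module-to-imports map (illustrative; not esbuild's actual API):

```typescript
// Given importer → imports edges, return every module that transitively
// depends on `changed` (plus `changed` itself) — the set to re-bundle.
function affectedModules(
  deps: Record<string, string[]>,
  changed: string
): Set<string> {
  // Invert the graph: imported module → the modules that import it.
  const importers = new Map<string, string[]>();
  for (const [mod, imports] of Object.entries(deps)) {
    for (const imp of imports) {
      if (!importers.has(imp)) importers.set(imp, []);
      importers.get(imp)!.push(mod);
    }
  }
  // Walk upward through transitive importers.
  const affected = new Set<string>([changed]);
  const queue = [changed];
  while (queue.length > 0) {
    const current = queue.pop()!;
    for (const parent of importers.get(current) ?? []) {
      if (!affected.has(parent)) {
        affected.add(parent);
        queue.push(parent);
      }
    }
  }
  return affected;
}
```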
3. Package CDN (esm.sh / Skypack)
Avoid running npm install in the browser. Fetch pre-built ESM bundles from a CDN. Cache aggressively (packages rarely change).
4. iframe sandboxing Run user code inside a sandboxed iframe with restricted CSP:
<iframe sandbox="allow-scripts" src="..."></iframe>

This prevents user code from accessing parent frame DOM.
5. Live reload via postMessage
When bundle is ready, post new bundle to iframe via postMessage. Iframe hot-swaps modules without full reload.
6. File system in IndexedDB Persist project files locally. Reload restores last state. Sync to server in background.
7. Error boundaries in preview Catch runtime errors in the iframe and display them in an overlay without crashing the editor.
Core challenge: Low-latency suggestions with minimal network overhead and great UX.
Functional: Show suggestions as user types, keyboard navigation, select suggestion fills input, highlight matching substring
Non-functional: < 100ms perceived latency, graceful degradation on network issues, accessible
Browser
│
├── Input Component
│
├── Debounce Layer
│
├── Cache Layer (in-memory LRU)
│
├── API Client
│ └── AbortController (cancel stale requests)
│
└── Suggestions Dropdown
├── Suggestion Items (virtualized if > 50)
└── Loading / Empty / Error states
Suggestion {
id: string
text: string
category?: string // "movie", "person", "place"
score?: number // relevance score from server
}
AutocompleteState {
query: string
suggestions: Suggestion[]
selectedIndex: number // keyboard nav
isLoading: boolean
error: string | null
}
// Cache entry
CacheEntry {
query: string
results: Suggestion[]
cachedAt: timestamp
}

GET /autocomplete?q=rea&limit=10&category=movies
Response: {
suggestions: Suggestion[],
query: string
}
Client-side function interfaces:
function fetchSuggestions(query: string, signal: AbortSignal): Promise<Suggestion[]>
function selectSuggestion(suggestion: Suggestion): void
function highlightMatch(text: string, query: string): ReactNode

1. Debouncing Wait for user to pause typing before firing request (300ms typical):
const debouncedSearch = debounce(fetchSuggestions, 300);

2. In-memory LRU cache Cache last N query results. Re-entering the same prefix returns instantly without a network call.
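An LRU can be built on a JavaScript `Map`, whose iteration follows insertion order, so re-inserting a key marks it most recently used. A minimal sketch (illustrative; not a library API):

```typescript
class LRUCache<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to refresh recency
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least-recently-used entry: the first key in iteration order.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```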
3. AbortController Cancel the previous in-flight request when a new keystroke fires:
controllerRef.current?.abort();
controllerRef.current = new AbortController();
fetch(url, { signal: controllerRef.current.signal });

4. Prefix-based cache hits If we have results for "reac" and the user types "react", show the cached "reac" results immediately while fetching "react" in the background.
5. Optimistic UI Show stale cache results immediately while fresh results load. Replace when new results arrive.
6. Keyboard accessibility
- `ArrowUp` / `ArrowDown`: navigate suggestions
- `Enter`: select highlighted suggestion
- `Escape`: close dropdown, return focus to input
- `role="combobox"`, `aria-expanded`, `aria-activedescendant` for screen readers
7. Highlight matching substring
Split suggestion text on the matching query fragment and wrap match in <mark>:
"react" → "re" + <mark>act</mark>
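A framework-agnostic sketch of the split step: it returns plain segments the UI can then wrap in `<mark>` (`splitMatch` is an illustrative name):

```typescript
// Split `text` around the first case-insensitive occurrence of `query`,
// returning [before, match, after]; the no-match case keeps the text whole.
function splitMatch(text: string, query: string): [string, string, string] {
  const i = text.toLowerCase().indexOf(query.toLowerCase());
  if (i === -1 || query === "") return [text, "", ""];
  return [
    text.slice(0, i),
    text.slice(i, i + query.length),
    text.slice(i + query.length),
  ];
}
```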
8. Server-side: Trie or inverted index for fast prefix lookups (a trie lookup is O(k) in the query length). Use Redis sorted sets or Elasticsearch for production scale.
| Step | Time |
|---|---|
| Requirements | 5–7 min |
| Architecture | 8–10 min |
| Data Model | 4–5 min |
| APIs | 5–7 min |
| Optimizations | 15–18 min |
| Dimension | Signals |
|---|---|
| Structured thinking | RADIO flow, doesn't ramble |
| Frontend depth | Knows rendering, browser APIs, performance |
| Tradeoff awareness | OT vs CRDT, WebSocket vs SSE, CSR vs SSR |
| Scale intuition | Virtualization, CDN, caching, pagination |
| UX sensibility | Loading/error/empty states, optimistic UI |
| Communication | Narrates decisions, confirms with interviewer |
| Problem | Solution |
|---|---|
| Large lists | Virtualization (react-window) |
| Too many API calls | Debounce + cache |
| Real-time sync | WebSockets + OT/CRDT |
| Slow page load | Code split + lazy load + SSR |
| Race conditions | AbortController + version stamps |
| Offline | Service Worker + IndexedDB |
| Concurrent edits | OT (centralized) or CRDT (decentralized) |
| Heavy computation | Web Workers |
| Video streaming | HLS/DASH + MSE + ABR |
- Jumping into design without clarifying requirements
- Designing only the server (this is a frontend interview)
- Never mentioning loading states, error states, or empty states
- Ignoring accessibility entirely
- Treating all optimizations as equally important (prioritize for the product)
- Not discussing tradeoffs — saying "I'd use X" without saying why
Framework based on the RADIO method by Yangshun Tay (Ex-Meta Staff Engineer). Expanded with SDE2 interview depth by Claude.