A quick-start guide for clarifying and vibe coding the MVP for your idea. Information collected and compiled by Hieu Nguyen Minh (MinhHieu-Nguyen-dn).

Toolkit: Build in 2025 - From Idea to MVP Guide

Who this is for: Anyone who wants to turn an idea into a working MVP quickly and responsibly.

Realistic expectation: An MVP here means a demo-able, clickable slice of your product covering 1–2 core jobs-to-be-done. It's good enough to collect feedback, run user tests, or capture early sign-ups. It is not production-hardened. "Real" engineers still need to review, refactor, and prepare it for scale, security, and operations.

Critical disclaimer: Everything described here produces validation artifacts, not shipping code.

(Image omitted. Source: Substack)


0) Operating Model: Speed with Guardrails

The promise of AI-assisted development is compressing the journey from idea to working prototype. The reality is more nuanced.

(Image omitted. Source: Facebook)

What AI Actually Accelerates

  • Drafting UI variations in minutes instead of hours
  • Generating boilerplate code for standard patterns (forms, CRUD operations, API clients)
  • Writing initial tests that you refine
  • Bootstrapping project structure with reasonable defaults
  • Producing first-draft documentation from code or vice versa

What AI Doesn't Replace

  • Architectural decisions about data models, API boundaries, state management
  • Security hardening (secrets management, input validation, authentication flows)
  • Performance optimization under real load
  • Production operations (monitoring, logging, incident response)
  • Deep debugging when things break in unexpected ways

The Principles That Keep You Sane

(Image omitted. Source: Substack)

Throughout this guide, use these three principles as your default decision tests:

  • YAGNI (You Aren't Gonna Need It): Only build what's needed to test your hypothesis. If a feature doesn't move your MVP metric, defer it.
  • KISS (Keep It Simple, Stupid): Prefer the simplest component and API surface that works. Complex solutions are harder to debug and extend. When AI suggests a complicated solution, ask: "What's the simplest version that proves the concept?"
  • DRY (Don't Repeat Yourself): If logic appears in two places, refactor to one.
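For instance, a small TypeScript sketch of DRY in practice (names are illustrative):

```typescript
// Before: the same date-formatting logic was copy-pasted into two components.
// After: it lives in one shared helper that both components import.
export function formatJoinDate(iso: string): string {
  return new Date(iso).toLocaleDateString('en-US', {
    year: 'numeric',
    month: 'short',
    day: 'numeric',
  });
}
// <UserCard /> and <ProfileHeader /> both call formatJoinDate() instead of
// each keeping their own copy of the formatting options.
```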

Make these visible in your workspace: write them in your AGENTS.md file, reference them in prompts, and treat them as your quality bar.


1) Plan: Review Before Coding

Goal: Improve clarity about what you're building and for whom, without over-deciding how.

Many builders skip this step and jump straight to "build me an app that does X." Result: five hours later, you have something that runs but doesn't address the actual user's needs.

1.1 Quick Review Questions (10–15 minutes)

Answer these in a simple document:

  • Who exactly is the user? E.g.: Not "developers" but "junior frontend engineers joining their first production codebase."
  • What job do they need done today? Be specific: "Understand where to add a new API endpoint without reading 50 files."
  • What alternative do they use now - and what's painful? "They ask senior engineers in Slack, who take 30 minutes to answer each time."
  • What single metric will you try to move in the next 2–4 weeks? "Reduce time-to-first-contribution from 2 weeks to 3 days."
  • What's the smallest workflow that, if it works, proves value? "New engineer pastes a function name, gets back the file path, function signature, and three usage examples."

1.2 Fast Market Scan (30–45 minutes)

Use any AI with search capability (Gemini, ChatGPT with browsing, Claude with search, or Grok):

Prompt template:

I'm building [your one-sentence description].
Help me with competitive research:
1. Find 3 closest existing solutions - include URLs, pricing, and their main value proposition
2. Suggest 5 keywords users would search to find a solution like mine
3. Analyze those 3 competitors and tell me 3 ways I could make a stronger first impression in 30 seconds
Present this as a markdown table and short analysis.

Paste the results into a one-pager. Don't agonize over strategy yet - you just want to know the landscape.

1.3 Translate Idea → Mini-Spec

What is a spec? A short note that tells a developer (or AI agent) what to build and how to verify it worked.

Without a spec, you're in an endless loop of "not quite what I meant." With one, you have a shared definition of "done."

Use EARS (Easy Approach to Requirements Syntax)

EARS is a lightweight syntax developed at Rolls-Royce for writing clear, testable requirements. It structures requirements as triggers and responses: a defined event leads to a defined system behavior.

Basic pattern:

WHEN [triggering condition or event]
THE SYSTEM SHALL [expected behavior]

Examples:

Bad requirement: "The app should be fast."

Good EARS requirement: "WHEN a user submits a search query, THE SYSTEM SHALL return results within 2 seconds for queries with fewer than 100 matches."

Another example: "WHEN a user uploads a file larger than 10MB, THE SYSTEM SHALL display a warning message before starting the upload."

Add User Stories with Acceptance Criteria

Pick 2–3 user stories for your MVP, each in this format:

## User Story: [Short title]
**As a** [type of user]
**I want** [goal]
**So that** [reason/value]
### Acceptance Criteria (EARS format)
1. WHEN [condition], THE SYSTEM SHALL [behavior]
2. WHEN [condition], THE SYSTEM SHALL [behavior]
3. ...
### Definition of Done
- [ ] Behavior works in [browser/device]
- [ ] One happy path test passes
- [ ] Can demo to a user

Example:

## User Story: Quick code search
**As a** new engineer joining the codebase
**I want** to search for where a specific function is defined
**So that** I can understand how to use it without asking teammates
### Acceptance Criteria
1. WHEN the user types a function name in the search box, THE SYSTEM SHALL display matching results within 2 seconds
2. WHEN the user clicks a search result, THE SYSTEM SHALL display the file path, function signature, and line number
3. WHEN no matches are found, THE SYSTEM SHALL display "No results found. Try searching for a class or variable name instead."
### Definition of Done
- [ ] Search works for at least 50 known function names in the example repo
- [ ] Results display correctly on mobile (iOS Safari) and desktop (Chrome)
- [ ] One test confirms search returns expected results for "getUserById"
- [ ] Can demo: search → click result → see code location

Write 2–3 of these. This is your MVP scope. Everything else goes into a "later" list.


2) UI: Vibe-Coding the Interface

Intent: Get a good-enough first screen and a clickable hero path fast. Iterate visually a few times, then stop and move to implementation.

Why this matters: A working UI mockup is a contract. It shows your developer (or AI agent) exactly what to build. It exposes gaps in your thinking (Where does this button go? What happens when this field is empty?). And it's far cheaper to change pixels than code.

2.1 Sources of Inspiration

Before you prompt for "a beautiful dashboard," study what beauty means in your domain.

2.1.1. Galleries to browse:

  • Dribbble, Mobbin, Behance (sorted by "most liked") - see what resonates with your target users
  • Component libraries: shadcn/ui, Tailwind UI, Flowbite - understand available building blocks
  • App showcases: Lapa Ninja, Land-book, SaaS landing pages

2.1.2. Patterns to study (layout):

Different products need different layouts:

  • Header + Sidebar: Admin panels, dashboards, content management (e.g., Notion, Linear)
  • Top-nav + Content: Marketing sites, blogs, documentation
  • Split pane: Code editors, email clients, comparison tools
  • Card grid: Marketplaces, galleries, social feeds
  • Data table + Detail: Analytics tools, CRMs, inventory systems
  • Wizard/Stepper: Onboarding, checkout, multi-step forms
  • Mobile-first column vs two-pane: Chat apps, social media, mobile-web hybrids

Pick one pattern that fits your core workflow. Don't combine three layouts into one screen.

2.1.3. Design style components (decide early):

A consistent design system has three layers:

a. Tokens (the raw variables):

  • Color palette: Primary, secondary, neutral, error, success (light and dark mode variants)
  • Typography scale: Font families, sizes (12px, 14px, 16px, 20px, 24px), weights
  • Spacing scale: Multiples of 4 or 8 (4px, 8px, 16px, 24px, 32px)
  • Radius: Border radius values (0, 4px, 8px, 16px, full)
  • Shadows: None, subtle, medium, large

b. Interaction (how things behave):

  • States: Default, hover, focus, active, disabled
  • Motion/easing: Duration (100ms for micro, 300ms for standard), easing functions (ease-out for natural feel, spring for playful)
  • Density: Compact (tight spacing, small touch targets) vs roomy (generous padding, large buttons)

c. Accessibility (non-negotiable):

  • Contrast: Text meets WCAG AA (4.5:1 for normal text)
  • Touch targets: Minimum 44×44px for interactive elements
  • Focus indicators: Visible outline on keyboard navigation
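
A minimal token sketch in TypeScript, pulling the three layers into one file (the specific values are illustrative, not prescribed):

```typescript
// tokens.ts: one place to define the raw variables; components import from here
export const tokens = {
  color: {
    primary: '#3B82F6',
    neutral: '#6B7280',
    error: '#DC2626',
    success: '#16A34A',
  },
  font: {
    family: 'Inter, sans-serif',
    size: { small: '14px', body: '16px', heading: '24px' },
  },
  // Spacing on a multiple-of-8 scale, per the guideline above
  spacing: { xs: '4px', sm: '8px', md: '16px', lg: '24px', xl: '32px' },
  radius: { sm: '4px', md: '8px', lg: '16px', full: '9999px' },
  motion: { micro: '100ms', standard: '300ms' },
} as const;
```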

Ask AI to analyze examples:

Find 2–3 designs you like. Screenshot them. Ask:

Analyze these UI designs:
1. What design style is this? (e.g., Glassmorphism, Neobrutalism, Minimalist)
2. Extract the color palette (primary, secondary, neutral)
3. What typography choices do you see? (fonts, size scale)
4. Describe the layout pattern used
5. What makes this design feel cohesive?
Give me a structured breakdown I can use as a reference.

Save this breakdown. It becomes your design guideline.

2.2 Tools That Accelerate UI Drafts

  • Any multi-modal AI chat (with vision): To compare your screenshot to references and suggest token-level improvements (e.g., "increase contrast between button text and background," "reduce spacing between cards by 4px").
  • Vercel v0 / Lovable / Gemini: Prompt-to-screen builders. Great for first drafts of components or full pages, and for trying alternative layouts quickly. Export or rebuild key parts for your repo - don't expect production code.
  • Vision for agents (MCP servers): Give your AI assistant eyes by integrating:
    • Chrome DevTools MCP: Lets the agent capture screenshots, inspect DOM, measure spacing/colors directly from a rendered page.
    • Playwright MCP: Automates browser actions - click buttons, fill forms, take screenshots - so the agent can iterate on the UI autonomously.
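
Registering these servers varies by client (Claude Desktop, Cursor, and others each use their own config file). A typical entry looks like the sketch below; the exact package names and commands are assumptions, so check each server's documentation:

```json
{
  "mcpServers": {
    "playwright": { "command": "npx", "args": ["@playwright/mcp@latest"] },
    "chrome-devtools": { "command": "npx", "args": ["chrome-devtools-mcp@latest"] }
  }
}
```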

2.3 The Iteration Loop (Repeat 2–3 Times; ~90 Minutes Each)

Here's a workflow that balances speed and quality:

Round 1: Generate the first screen

  1. Pick 2–3 reference screens and define your tokens (colors, typography, spacing).
  2. Prompt your tool (v0, Lovable, or Claude with artifacts):
Create a [screen name] for [product description].
Style: [e.g., Modern SaaS, Neobrutalism, Minimal]
Layout pattern: [e.g., sidebar + content area]
Tokens:
- Primary color: #3B82F6 (blue)
- Font: Inter, 16px body, 14px small, 24px heading
- Spacing: 8px base unit
- Radius: 8px
Include:
- [Component 1]
- [Component 2]
- [Component 3]
Use fake/realistic data (names, dates, numbers) to make it feel real.
  3. Review the output. Does it match your mental picture? Are the components in the right order?

Round 2: Capture and critique

  1. Screenshot the rendered UI (use Chrome DevTools MCP if available, or manual screenshot).
  2. Ask the agent to critique:
Compare this screenshot to the reference designs I provided earlier.
Analyze:
- Is the layout hierarchy clear? (Does the eye know where to look first?)
- Do the colors match the token palette?
- Is spacing consistent (multiples of 8px)?
- Are font sizes from the defined scale?
- Does contrast meet accessibility standards?
Provide a list of **specific diffs** to apply (e.g., "Change button padding from 10px to 12px" not "improve the button").
  3. Apply small diffs only. Don't let it rewrite the whole component - apply targeted fixes.

Round 3: Hook fake data

  1. Replace placeholder text with realistic fake data (use libraries like Faker, as sketched below, or ask the agent to generate sample JSON).
  2. Test edge cases visually:
    • What if a username is 50 characters long?
    • What if there are zero items in the list?
    • What if an error message appears?
  3. Log decisions in a ui-notes.md file:
## 2025-10-15: Dashboard layout
- Changed sidebar width from 240px to 280px (better for 14+ char menu labels)
- Increased card shadow from subtle to medium (improves hierarchy)
- Set h1 to 32px instead of 36px (felt too large on mobile)

This log helps you (and the agent) stay consistent in later iterations.
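
If you use Faker for step 1, a minimal fixture sketch (assuming the @faker-js/faker package) looks like this:

```typescript
// fixtures.ts: generate realistic fake data plus the edge cases listed above
import { faker } from '@faker-js/faker';

const users = Array.from({ length: 10 }, () => ({
  id: faker.string.uuid(),
  name: faker.person.fullName(),
  email: faker.internet.email(),
  joinedAt: faker.date.recent({ days: 30 }).toISOString(),
}));

// Edge cases: a 50-character username and an empty list
users.push({ ...users[0], id: 'edge-long-name', name: 'X'.repeat(50) });
const emptyList: typeof users = [];

console.log(JSON.stringify({ users, emptyList }, null, 2));
```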

Gate: "UI is Ready to Code Against"

Before moving to Build, make sure you have a design guideline file (design-guideline.md) describing tokens, layout, and component rules.

If that file isn't ready, spend another 60–90 minutes refining. A clean UI foundation saves hours in the Build phase.


3) Build: AI-Assisted Development

Goal: Produce working code that implements your spec and UI. Keep it simple, tested, and maintainable.

Reality check: This is where "vibe coding" gets real. AI will write code fast - but fast code is often brittle, over-engineered, or wrong. Your job is to steer, verify, and simplify.

3.1 Start with the Model's "Innate" Capability (Once per Project, at the Start)

Why this matters: Different models and different coding assistants (Cursor, Windsurf, Claude Code, Aider, Copilot) behave differently. Some are "smart students" who write clean code with modern libraries and fix errors quickly. Others are "mediocre students" who produce buggy code and can't self-correct.

Knowing your model's baseline capability helps you decide how much guidance to provide in your repository's instructions. With a strong model, you can write high-level prompts. With a weaker model, you need detailed examples and constraints.

Test protocol:

Give the agent a small, representative task that touches the core of your stack:

Example prompt:

Build a simple user registration form with:
- Email and password fields
- Client-side validation (valid email format, password min 8 chars)
- Submit button that posts to /api/register
- Display success or error message
- Write one test that verifies validation works
Use [your stack: React + TypeScript, Vue, Next.js, etc.].

Observe:

  • Error-fixing ability: Does it fix bugs on the first try, or spiral into broken attempts?
  • Library choices: Does it use current versions or deprecated APIs?
  • Code style: Is code readable, or a tangled mess?
  • Test quality: Does it write meaningful tests, or fake mocks to pass?
  • Pace: Does it complete in 5 minutes or stall for 30?

Decide your model policy:

Based on results, set a preferred model and fallback strategy:

  • Strong performance: Use as primary agent; provide high-level guidance
  • Medium performance: Use for implementation, but plan architecture yourself
  • Weak performance: Use only for boilerplate; write critical logic by hand

Document this in your AGENTS.md file so you don't forget your findings.

3.2 Context Engineering: Teach the Agent About Your Project

Problem: AI agents can't read minds. Without context, they hallucinate requirements, use inconsistent patterns, and produce code that doesn't integrate with the rest of your project.

(Image omitted. Source: Substack)

Solution: Structure your repository so the agent knows what to read and what rules to follow.

Create AGENTS.md (Project-Level Instructions)

This file lives at the root of your repo. It's the agent's onboarding document.

Template:

# AGENTS.md

This file provides guidance to AI coding assistants working in this repository.

## Project Overview

[2-3 sentence description: What does this project do? Who is it for?]

## How to Run Locally

```bash
npm install
npm run dev
# Navigate to http://localhost:3000
```

## Architecture

- **/src/components** - React components (one component per file, max 300 LoC)
- **/src/lib** - Utility functions and shared logic
- **/src/app** - Next.js app router pages
- **/tests** - Test files (named `*.test.ts`)

## Code Style

- **TypeScript strict mode** enabled
- **ESLint + Prettier** configured (run `npm run lint` before committing)
- **Prefer functional components** (no class components)
- **Keep functions under 50 lines** when possible

## Testing Strategy

- Write **tests first** for new features (TDD approach)
- Test file naming: `[filename].test.ts` next to source file
- Run `npm test` before pushing
- **Do not skip failing tests to make CI pass**

## Constraints

- **No localStorage or sessionStorage** (not supported in artifact environment; use React state)
- **Do not hardcode API keys or secrets** in code (use `.env` files)
- **Do not commit `.env` files** (already in `.gitignore`)

## Principles

Apply **YAGNI, KISS, DRY** in all implementations:

- **YAGNI:** Don't add features "in case we need them." Build only what's specified.
- **KISS:** Prefer simple solutions. Avoid complex patterns unless necessary.
- **DRY:** Refactor duplicated logic into shared functions.

## See Also

- `./ai-docs/guideline.md` - Detailed technical guidelines
- `./docs/design-guideline.md` - UI design system reference

Create ai-docs/guideline.md (Technical Deep-Dive)

This file contains:

  • Repo map: How files relate to each other
  • Module boundaries: What each folder is responsible for
  • Examples: Before/after snippets showing preferred patterns
  • Do/Don't rules: Common mistakes to avoid

Example section:

## API Client Pattern

**DO:**

```typescript
// lib/api.ts
export async function fetchUser(id: string) {
  const response = await fetch(`/api/users/${id}`);
  if (!response.ok) throw new Error('Failed to fetch user');
  return response.json();
}
```

**DON'T:**

```typescript
// Avoid inline fetch calls scattered across components
function UserProfile() {
  const [user, setUser] = useState(null);
  useEffect(() => {
    fetch('/api/users/123').then(r => r.json()).then(setUser);
  }, []);
  // ...
}
```

**Why:** Centralized API functions are easier to mock, test, and update when endpoints change.

Keep Files Small (≤300 Lines of Code)

Large files overwhelm AI context windows. If a file exceeds 300 LoC, split it:

  • Extract helper functions to /lib
  • Break components into smaller sub-components

Use Task Summaries (summary-[task].md)

For each task, create a markdown file tracking:

  • Intent: What were you trying to accomplish?
  • Steps done: What changes were made?
  • Issues found: What didn't work?
  • Next steps: What remains?

Why: When you switch tasks or open a new chat, you can attach this summary to restore context quickly.
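
A minimal sketch of such a summary, filled in with the "Quick code search" story from Section 1:

```markdown
# summary-code-search.md

## Intent
Implement function-name search (User Story: Quick code search).

## Steps Done
- Added /api/search endpoint and SearchBox component
- Happy-path test passes for "getUserById"

## Issues Found
- Queries with 100+ matches take longer than the 2-second target

## Next Steps
- Paginate results and re-verify acceptance criterion 1
```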

CLEAR/COMPRESS Ritual

AI agents lose context over long conversations. Follow this protocol:

  • New task = new chat/tab (in editors that support it)
  • Before switching tasks: Prompt the agent: "Summarize what you've done, issues encountered, and remaining work into summary-[task-name].md"
  • If the agent goes off-track: Stop. Use git checkout to revert changes. Clear context and start fresh with a more detailed prompt or smaller task scope.
  • If errors repeat after 3–4 fix attempts: Reset. The agent is lost; clear context and restart the task from scratch.

3.3 Spec-Driven + TDD (Test-Driven Development)

Why TDD matters with AI:

AI agents can lie. They'll tell you "tests pass" when tests are faked with mocks. They'll mark tasks "done" when features don't work. The only way to verify behavior is to run real tests.

Embed TDD in agent instructions:

Add to AGENTS.md:

## Test-Driven Development (Required)

For every new feature or bug fix:

1. **Write test(s) first** based on acceptance criteria (EARS format)
2. **Run tests** - they should fail initially
3. **Implement** the minimum code to make tests pass
4. **Refactor** while keeping tests green
5. **Commit only when tests pass**

**Acceptance criteria example:**

> WHEN a user submits the form with an invalid email, THE SYSTEM SHALL display "Please enter a valid email address"

**Test for this:**

\`\`\`typescript
it('displays error for invalid email', () => {
  render(<RegistrationForm />);
  fireEvent.change(screen.getByLabelText('Email'), { target: { value: 'notanemail' } });
  fireEvent.click(screen.getByText('Submit'));
  expect(screen.getByText('Please enter a valid email address')).toBeInTheDocument();
});
\`\`\`

Keep PRs small (≤200–300 changed lines):

Large PRs are hard to review and hard for AI to manage. Guidelines:

  • One feature or bug fix per PR
  • Run lint, type-check, and tests locally before pushing
  • CI runs the same checks
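
As a sketch, a GitHub Actions workflow mirroring the local checks (the npm script names assume the AGENTS.md conventions above):

```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit
      - run: npm test
```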

3.4 Let the Agent See and Test the UI (Playwright/Chrome MCP)

The problem: AI can't see the UI. It generates code that compiles but looks wrong or behaves incorrectly.

The solution: Integrate Playwright MCP or Chrome DevTools MCP so the agent can:

  1. Render the app in a browser
  2. Take screenshots
  3. Compare to design references or previous versions
  4. Identify visual bugs (misaligned elements, wrong colors, overlapping text)
  5. Propose fixes
  6. Re-run to verify

Example workflow:

  1. Agent implements a feature
  2. You prompt: "Run the app, navigate to /dashboard, screenshot the result"
  3. Agent executes browser automation, captures screenshot
  4. You prompt: "Compare this to designs/dashboard-reference.png. What differs?"
  5. Agent responds: "Button color is #2563EB instead of #3B82F6. Spacing between cards is 12px instead of 16px."
  6. You prompt: "Fix those two issues and screenshot again."
  7. Agent updates CSS, re-runs, confirms match.
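
If you want to approximate this loop by hand, a minimal Playwright sketch (assuming `npm i -D playwright` and a dev server on port 3000) captures the screenshot in step 3:

```typescript
// screenshot.ts: capture the dashboard for visual comparison
import { chromium } from 'playwright';

async function capture() {
  const browser = await chromium.launch();
  const page = await browser.newPage({ viewport: { width: 1280, height: 800 } });
  await page.goto('http://localhost:3000/dashboard'); // assumed local dev URL
  await page.screenshot({ path: 'screenshots/dashboard.png', fullPage: true });
  await browser.close();
}

capture();
```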

This loop is dramatically faster than manually describing what's wrong.


4) "Mark Yourself as Safe": Public MVP Mini-Bar

Goal: Before showing your MVP to users, ensure it won't leak secrets, crash embarrassingly, or expose data unsafely.

Principle: This is not a production security audit. It's a checklist to avoid obvious, preventable mistakes that harm your reputation or users.

4.1 Files and Things to Check

Secrets Management:

  • .env file exists and contains all secrets (API keys, database URLs)
  • .env is listed in .gitignore
  • Run `git log --all --full-history -- .env` to confirm `.env` was never committed
  • .env.example exists with placeholder values
  • No hardcoded secrets in code
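
A minimal `.env.example` sketch (variable names are illustrative):

```bash
# .env.example: committed placeholders only; real values live in .env
DATABASE_URL=postgres://user:password@localhost:5432/app
OPENAI_API_KEY=replace-me
```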

Ignore Lists:

.gitignore includes:

  • .env
  • OS files (.DS_Store, Thumbs.db)
  • Build artifacts (dist/, build/, node_modules/)
  • Test output (coverage/, *.log)
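
As a sketch, the corresponding entries:

```
.env
.DS_Store
Thumbs.db
dist/
build/
node_modules/
coverage/
*.log
```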

Tests:

  • Every MVP feature has 1 happy path test + 1–2 edge case tests
  • Tests pass locally

4.2 Simple Ways to Publish & Demo

Vercel (Recommended for Web Apps):

  • Deploys from Git (GitHub, GitLab, Bitbucket)
  • Supports Next.js, React, Vue, Svelte, static sites
  • Serverless/edge functions for backend logic
  • Environment variables managed in UI (safe for secrets)
  • Free tier sufficient for MVPs

ngrok (For Local Demos):

  • Exposes your local dev server via a public HTTPS URL
  • Ideal for short user tests or demos without deploying
  • Free tier: temporary URLs (expire after session)
  • Paid tier: persistent URLs
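
Usage is one command, assuming ngrok is installed and authenticated:

```bash
# Expose the local dev server on port 3000 via a public HTTPS URL
ngrok http 3000
```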

Static Hosting (If Frontend-Only):

  • GitHub Pages: Free, easy for static sites (HTML/CSS/JS)
  • Netlify: Similar to Vercel, great for static + serverless functions
  • Cloudflare Pages: Fast CDN, generous free tier

Do Not:

  • Ship secrets in environment variables that are bundled into client-side code (in Next.js, anything prefixed `NEXT_PUBLIC_` reaches the browser)
  • Use a personal computer as a production server (unreliable, insecure)
  • Skip HTTPS (browsers block features like camera, geolocation on HTTP)

Final Reminder: Hand Off to Senior Engineers

Your MVP is a validation artifact. It proves:

  • Users want this feature
  • The workflow makes sense
  • The UI is usable

It is not ready for:

  • Thousands of concurrent users
  • Sensitive data at scale
  • Compliance requirements (GDPR, HIPAA, etc.)

Next step: Show the MVP to senior engineers. They will:

  • Review architecture for scalability bottlenecks
  • Harden security (input validation, SQL injection prevention, rate limiting)
  • Add monitoring and logging for production incidents
  • Refactor code for maintainability
  • Write comprehensive tests

Do not skip this step. AI-generated code optimizes for speed, not safety or scale.


References

  1. Spec-driven development with AI: Get started with a new open source toolkit - The GitHub Blog
  2. Concepts - Docs - Kiro
  3. Test-Driven Development (TDD) for AI Agents
  4. [VB-06] Vibe coding giao diện sao cho đẹp? (How to vibe-code a good-looking UI)
  5. [VB-07] Claude Code: Common Mistakes & "Production-ready" Project
  6. [VB-03] Viết prompt như thế nào khi "Vibe Coding" (How to write prompts when vibe coding)
  7. Facebook: Viet Tran
  8. Turn Claude Code into Your Own INCREDIBLE UI Designer (using Playwright MCP Subagents)
  9. Chrome DevTools (MCP) for your AI agent | Blog
  10. Lovable
  11. V0
  12. Alistair Mavin EARS