
prompt-to-app

Guides the complete journey from an idea to a working application using AI code generation tools. Covers writing effective app specifications, choosing the right tool for the job (Claude Code, Cursor, Bolt, v0, Lovable, Replit Agent), the spec-first approach, iterating on generated code without losing coherence, and managing scope creep during AI-assisted development. Use when someone wants to build an app from scratch using vibe coding.


Prompt to App

Going from an idea to a working application using AI code generation — the spec-first approach, tool selection, and iteration strategy.


The Spec-First Approach

The single most important practice in vibe coding: write a spec before you write a prompt.

A spec is not a 20-page requirements document. It is a concise description of what you are building, written for the AI (and for yourself). Without it, you will wander, the AI will guess, and your project will drift.

What a Good Spec Contains

```
# App: [Name]

## What it does
[2-3 sentences. What does the user do? What problem does it solve?]

## Core features (MVP)
- Feature 1: [one sentence]
- Feature 2: [one sentence]
- Feature 3: [one sentence]

## Tech stack
- Frontend: [framework]
- Backend: [framework or "none"]
- Database: [type or "none"]
- Auth: [approach or "none"]

## Out of scope (for now)
- [Thing you are tempted to build but should not yet]
- [Another thing]
```
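
To make the template concrete, here is a hypothetical filled-in version for a small task app (every name and detail below is illustrative, not a recommendation):

```
# App: TaskBox

## What it does
A personal task manager. The user captures tasks, assigns a priority,
and moves them through a simple workflow so nothing gets lost.

## Core features (MVP)
- Task list: view all tasks, filterable by status
- Task creation: add a task with title, description, and priority
- Status updates: move a task between todo, in-progress, and done

## Tech stack
- Frontend: Next.js 14 (App Router)
- Backend: Next.js server components and route handlers
- Database: SQLite with Drizzle ORM
- Auth: none

## Out of scope (for now)
- Multi-user accounts and sharing
- Due dates, reminders, recurring tasks
```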

Why Spec-First Works

  • Forces you to decide what you are building before the AI starts generating
  • Prevents scope creep ("while we're at it, let's add...")
  • Gives the AI clear constraints (tech stack, features)
  • Creates a reference document you can point back to when the project drifts
  • Makes it easy to split work into sequential prompts

Spec Anti-Patterns

Too vague: "Build me a social media app." The AI will make 100 decisions you did not intend.

Too detailed: A 5-page spec with pixel-level UI descriptions. You will spend more time writing the spec than building the app. Save detail for the prompts.

No constraints: Forgetting to specify tech stack, auth approach, or database. The AI will choose for you, and its choices may not be what you want.

Including "nice to haves" in MVP: Every feature you add multiplies complexity. Be ruthless about what makes the cut.


Choosing the Right Tool

Different AI coding tools excel at different things. Choosing the wrong one wastes hours.

Claude Code (CLI)

Best for: Backend services, CLI tools, complex logic, full-stack apps where you want full control, working in existing codebases.

Strengths: Understands project structure deeply, can read and modify many files, works with any language or framework, excellent at refactoring.

Weaknesses: No visual preview, requires terminal comfort, steeper learning curve.

Use when: You are a developer who wants AI assistance without leaving your workflow. You have an existing codebase. You need complex multi-file changes.

Cursor

Best for: Full-stack development with visual feedback, working in existing codebases, developers who want IDE integration.

Strengths: Inline editing, codebase-aware completions, chat + edit in one interface, supports .cursorrules for project context.

Weaknesses: Subscription cost, can be slow on large codebases, learning curve for effective .cursorrules configuration.

Use when: You want AI-assisted development inside an IDE. You are working on a medium-to-large project. You want granular control over what gets changed.

Bolt / Lovable

Best for: Frontend-heavy apps, rapid prototyping, non-developers building apps, visual-first development.

Strengths: Instant preview, deploy in one click, good for standard web app patterns, low barrier to entry.

Weaknesses: Limited backend capabilities, hard to customize deeply, vendor lock-in risk, struggles with complex state management.

Use when: You want a working app in under an hour. You are building a standard web app. You prioritize speed over control.

v0 (Vercel)

Best for: UI components, landing pages, React/Next.js interfaces, design-to-code.

Strengths: Beautiful default styling, shadcn/ui integration, one-click deploy to Vercel, excellent for component-level work.

Weaknesses: Narrow tech stack (React/Next.js focused), limited backend, not ideal for full applications.

Use when: You need a polished UI component or page. You are in the Next.js ecosystem. You want design-quality output quickly.

Replit Agent

Best for: Quick prototypes, learning projects, apps that need hosting, collaborative development.

Strengths: Integrated hosting, database, and deployment. End-to-end environment. Good for beginners.

Weaknesses: Limited to Replit's environment, performance constraints, less control over infrastructure.

Use when: You want everything in one place. You are building a prototype you want to share immediately. You do not want to manage infrastructure.

Decision Heuristic

Ask yourself:

  1. Do I need a backend? (If no: v0 or Bolt. If yes: continue.)
  2. Do I have an existing codebase? (If yes: Claude Code or Cursor. If no: continue.)
  3. Do I want full control? (If yes: Claude Code. If no: continue.)
  4. Do I want instant preview and deploy? (If yes: Bolt or Lovable. If no: Cursor.)
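
The four questions above can be sketched as a small function. This is illustrative only — real tool choice involves more nuance than four booleans, and the field names here are invented:

```typescript
// Illustrative encoding of the four-question decision heuristic.
// Field names are hypothetical; the branch order mirrors the list above.
interface ProjectNeeds {
  needsBackend: boolean;
  hasExistingCodebase: boolean;
  wantsFullControl: boolean;
  wantsInstantPreview: boolean;
}

function pickTool(p: ProjectNeeds): string {
  if (!p.needsBackend) return "v0 or Bolt";
  if (p.hasExistingCodebase) return "Claude Code or Cursor";
  if (p.wantsFullControl) return "Claude Code";
  return p.wantsInstantPreview ? "Bolt or Lovable" : "Cursor";
}
```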

The Build Sequence

Once you have a spec and a tool, follow this sequence:

Phase 1: Scaffold (1-3 prompts)

Generate the project structure, install dependencies, set up the basic layout.

Example prompt:

```
Create a new Next.js 14 app with TypeScript, Tailwind CSS, and shadcn/ui.
Set up the folder structure with:
- /app for pages
- /components for shared components
- /lib for utilities and database
Use SQLite with Drizzle ORM for the database.
Do not add any features yet, just the scaffold.
```

Key principle: Get a running app before adding features. Verify the scaffold works (starts without errors, shows a page) before moving on.

Phase 2: Data Model (1-2 prompts)

Define your database schema and basic data access.

```
Add a database schema for a task management app:
- tasks table: id, title, description, status (todo/in-progress/done),
  priority (low/medium/high), created_at, updated_at
- Create the Drizzle schema in /lib/db/schema.ts
- Add a db connection helper in /lib/db/index.ts
- Run the migration
```
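
A prompt like this should produce a schema equivalent to the shape below, sketched here as plain TypeScript types rather than Drizzle's actual schema API (the generated code will differ; this is only the data contract to check it against):

```typescript
// The task shape the schema prompt describes. Drizzle's schema syntax
// differs; this is the contract the generated code should satisfy.
type TaskStatus = "todo" | "in-progress" | "done";
type TaskPriority = "low" | "medium" | "high";

interface Task {
  id: number;
  title: string;
  description: string;
  status: TaskStatus;
  priority: TaskPriority;
  createdAt: Date;
  updatedAt: Date;
}

// Checking generated code against a contract like this catches schema
// drift early (e.g. the AI silently renaming a column).
const example: Task = {
  id: 1,
  title: "Write the spec",
  description: "Draft the one-page MVP spec",
  status: "todo",
  priority: "high",
  createdAt: new Date(),
  updatedAt: new Date(),
};
```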

Phase 3: Core Feature Loop (1 prompt per feature)

Build each feature from your spec, one at a time.

```
Add a task list page at /tasks that:
- Fetches all tasks from the database
- Displays them in a table with columns: title, status, priority, created date
- Add a status filter dropdown above the table
- Use server components for the data fetch
```

Do not combine features. "Add the task list and the task creation form and the edit page" will produce worse results than three separate prompts.

Phase 4: Polish (multiple small prompts)

Fix styling, add loading states, handle errors, improve UX.

```
Add error handling to the task list page:
- Show a friendly error message if the database query fails
- Add a loading skeleton while data is being fetched
- Handle the empty state (no tasks) with a message and a "Create your first task" button
```

Iterating Without Losing Coherence

The biggest risk in prompt-to-app development is coherence loss: after 20 prompts, the codebase feels like it was written by 20 different people (because effectively it was).

Maintain Coherence With

A project instruction file: Update it as you establish patterns. "All API routes return { data, error } format. All components use the cn() utility for class merging."
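
As a concrete instance of the first convention, a shared response envelope might look like this. It is a sketch: the { data, error } shape comes from the example instruction above, while the helper names are invented:

```typescript
// One agreed-on API response envelope, used by every route.
// Helper names (ok, fail) are hypothetical; the { data, error } shape
// is the convention recorded in the project instruction file.
type ApiResponse<T> =
  | { data: T; error: null }
  | { data: null; error: string };

function ok<T>(data: T): ApiResponse<T> {
  return { data, error: null };
}

function fail<T>(message: string): ApiResponse<T> {
  return { data: null, error: message };
}
```

Once a helper like this exists, "use the ok/fail envelope from lib/api" in a prompt keeps every new route consistent with the old ones.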

Consistent prompt structure: Use the same format for similar requests. If you always say "Add a [feature] page at [route] that [does X]," the AI will generate consistent output.

Periodic consolidation: Every 5-10 features, pause and prompt: "Review the codebase for inconsistent patterns. List any files that handle errors differently, use different naming conventions, or duplicate logic."

Explicit references: "Follow the same pattern as the task list page" is better than re-describing the pattern.

Common Coherence Failures

  • Two different API response formats in the same app
  • Some pages use server components, others use client components for no reason
  • Multiple utility functions that do the same thing in different files
  • Inconsistent naming (camelCase in some files, snake_case in others)
  • Different error handling approaches across routes

Managing Scope Creep

Scope creep is the primary way vibe coding projects fail. It is easy to add features when each one is "just one prompt."

The Rule of Three

Your MVP should have at most 3 core features. If you have more than 3, you are building too much.

The "Later" List

Keep a list of features you want but will not build yet. When you think "it would be nice if..." add it to the list and keep going.

Scope Creep Warning Signs

  • Your "quick prototype" is on its 40th prompt
  • You are adding features no one asked for
  • The app does 5 things adequately instead of 1 thing well
  • You keep saying "just one more thing"
  • You have spent more time on edge cases than core features

How to Say No (to Yourself)

Before each prompt, ask: "Is this in my spec?" If not, add it to the Later list. Revisit the Later list only after the MVP is complete and tested.


Debugging the Build Process

When things go wrong during prompt-to-app development:

The app will not start after a generation:

  • Check the terminal output for the actual error
  • Give the AI the exact error message: "I get this error when running npm run dev: [error]"
  • Do not add context or interpretation, just the error

The AI changed something you did not ask it to:

  • Use git diff to see exactly what changed
  • Revert unexpected changes: git checkout -- path/to/file
  • Re-prompt with more specificity: "Only modify [file]. Do not change any other files."

Generated code does not match your tech stack:

  • The AI may use React patterns in a Vue project or Express patterns in a Fastify project
  • Explicitly remind it: "This project uses Vue 3 with Composition API. Do not use Options API."

Features conflict with each other:

  • Usually caused by building too much at once
  • Revert to the last working commit
  • Add features one at a time, testing between each

Launch Checklist

Before showing your vibe-coded app to anyone:

  1. Test every feature manually. Click every button, fill every form, try every flow.
  2. Check mobile responsiveness. AI often generates desktop-only layouts.
  3. Try to break it. Empty inputs, special characters, rapid clicking, back button.
  4. Check the console. Look for JavaScript errors, failed network requests, warnings.
  5. Review environment variables. Make sure no secrets are hardcoded.
  6. Test with real-ish data. Not just 3 items — try 100. Not just "test" — try actual content.
  7. Check loading states. Slow network, large datasets, initial load.
  8. Verify auth flows. Login, logout, session expiry, unauthorized access.

Anti-Patterns Summary

| Anti-Pattern | Consequence | Fix |
| --- | --- | --- |
| No spec | Project drifts, features conflict | Write a spec first, even a short one |
| Wrong tool choice | Fighting the tool instead of building | Match tool to project type |
| Multi-feature prompts | Partial implementations, conflicts | One feature per prompt |
| No testing between prompts | Bug compounding | Test after every generation |
| Unlimited scope | Never ships, mediocre at everything | MVP with 3 core features max |
| No project instruction file | Inconsistent code patterns | Create and maintain one from prompt 1 |
| Ignoring git | Cannot undo bad generations | Commit after every working change |

Install this skill directly: skilldb add vibe-coding-workflow-skills


Related Skills

ai-pair-programming

Teaches effective AI pair programming techniques for tools like Claude Code, Cursor, and Copilot. Covers when to lead versus follow the AI, providing persistent context through CLAUDE.md and .cursorrules files, breaking complex tasks into AI-manageable pieces, using git strategically with frequent commits as checkpoints, and recognizing when the AI is stuck in a loop. Use when working alongside AI coding tools in a collaborative development workflow.


debugging-ai-code

Teaches how to debug code generated by AI tools, covering the unique failure modes of AI-generated code including hallucinated APIs, version mismatches, circular logic, and phantom dependencies. Explains how to read error messages back to the AI effectively, provide minimal reproductions, diagnose when the AI is giving bad fixes, and use systematic debugging approaches on codebases you did not write by hand. Use when AI-generated code is not working and you need to find and fix the issue.


maintaining-ai-codebases

Covers the unique challenges of maintaining codebases built primarily through AI code generation. Addresses inconsistent patterns across AI-generated files, refactoring AI sprawl, establishing coding conventions after the code already exists, documentation strategies for AI-built projects, and managing the specific forms of technical debt that AI tools create. Use when a vibe-coded project needs ongoing maintenance or has grown unwieldy.


reviewing-ai-code

Teaches how to review, audit, and evaluate AI-generated code effectively. Covers common AI code smells like over-engineering, dead code, wrong abstractions, and hallucinated APIs. Includes security review checklists, dependency auditing, performance review techniques, and strategies for catching the subtle bugs that AI confidently introduces. Use when reviewing code produced by any AI coding tool.


scaling-past-vibe

Guides the transition from a vibe-coded prototype to a production-grade application. Covers identifying when the project has outgrown pure vibe coding, refactoring AI-generated code for production reliability, adding tests retroactively to an untested codebase, introducing CI/CD pipelines, establishing code ownership and review processes, and building the engineering practices needed to sustain a growing application. Use when a vibe-coded project is succeeding and needs to become a real product.


vibe-coding-architecture

Covers architecture decisions optimized for AI-assisted development. Teaches how to choose frameworks and structures that AI tools work well with, why monolith-first is the right default for vibe coding, how to organize files so AI can navigate them, which abstraction patterns help versus hinder AI code generation, and how to keep complexity within the bounds of what AI can reason about. Use when making technology and architecture choices for a vibe-coded project.
