
ai-pair-programming

Teaches effective AI pair programming techniques for tools like Claude Code, Cursor, and Copilot. Covers when to lead versus follow the AI, providing persistent context through CLAUDE.md and .cursorrules files, breaking complex tasks into AI-manageable pieces, using git strategically with frequent commits as checkpoints, and recognizing when the AI is stuck in a loop. Use when working alongside AI coding tools in a collaborative development workflow.


AI Pair Programming

How to work effectively with AI as your programming partner — when to lead, when to follow, and how to stay productive.


The Pair Programming Model

Traditional pair programming has a driver (types) and a navigator (thinks). AI pair programming inverts this: the AI drives (generates code) and you navigate (direct and review).

But unlike a human pair, the AI has no persistent memory, no opinions about architecture, and no ability to push back when you are making a mistake. You must compensate for all three.

Your responsibilities as navigator:

  • Set direction (what to build, in what order)
  • Provide context (project structure, conventions, constraints)
  • Review output (correctness, style, security)
  • Manage scope (prevent feature creep, keep focus)
  • Maintain project knowledge (the AI will not remember tomorrow)

The AI's strengths as driver:

  • Fast code generation across languages and frameworks
  • Broad pattern knowledge (has seen millions of codebases)
  • Tireless iteration (will regenerate as many times as you ask)
  • No ego (will completely rewrite without complaint)

When to Lead vs Follow

Lead the AI When

You know the architecture. Tell the AI what to build and where to put it. "Create a service in /lib/services/auth.ts that handles login, logout, and session refresh" is better than "add authentication."

You have strong conventions. If your project has established patterns, lead the AI to follow them. "Use the same pattern as our other API routes in /api/" is better than letting the AI invent a new pattern.

The change is cross-cutting. When a change affects multiple files or systems, plan the sequence yourself. "First update the database schema, then the API route, then the frontend component" prevents the AI from making incompatible changes.

You are refactoring. Refactoring requires understanding the full context of how code is used. Lead with specific instructions: "Rename fetchData to getUserProfile in these 5 files, updating all call sites."

Follow the AI When

You are in unfamiliar territory. If you are using a framework or language you do not know well, let the AI draft the initial approach. Review it, learn from it, then direct refinements.

The task is well-defined. "Write a function that validates an email address and returns true/false" — the AI will handle this faster than you can type it.
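For instance, the AI's answer to that email-validation prompt might look like the sketch below. The regex is a deliberate simplification for illustration, not a definitive implementation:

```typescript
// Simple email validator: one non-space/non-@ run, an @, a domain with a dot.
// Intentionally loose; full RFC-compliant validation is rarely worth it.
function validateEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}
```

A one-shot task like this is exactly where following the AI is cheapest: review takes seconds and the cost of a wrong draft is near zero.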

You need boilerplate. Form components, API route handlers, database migrations, test scaffolding. Let the AI generate; you review and adjust.

The AI suggests a better approach. Sometimes the AI generates a solution that is better than what you had in mind. Be open to it. Review the approach on its merits, not on whether it matches your preconception.

The Handoff Pattern

Effective AI pair programming has frequent handoffs:

  1. You lead: "Build a user registration form with email, password, and name fields"
  2. AI drives: Generates the form component
  3. You review: "The validation is wrong — password needs 8+ characters and one number"
  4. AI drives: Fixes validation
  5. You lead: "Now connect this to the /api/register endpoint using the same fetch pattern as the login form"
  6. AI drives: Implements the connection
  7. You test: Try the form, find an edge case
  8. You lead: "When the email is already taken, the API returns 409 — handle that and show an error message"

Each cycle takes 1-5 minutes. A productive session has dozens of these cycles.
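Step 8's fix might come back looking like the sketch below. The endpoint behavior, message wording, and function names are hypothetical; the status-to-message mapping is kept as a pure function so it is easy to review and test:

```typescript
// Hypothetical handling for the registration API's error statuses,
// including the 409 "email already taken" case from step 8.
function messageForStatus(status: number): string | null {
  if (status === 409) return "That email is already registered.";
  if (status >= 400) return "Registration failed. Please try again.";
  return null; // 2xx: nothing to show
}

async function submitRegistration(
  form: { email: string; password: string; name: string },
  // fetch-like function injected so the handler is testable without a server
  post: (body: unknown) => Promise<{ status: number }>
): Promise<{ ok: boolean; error: string | null }> {
  const res = await post(form);
  const error = messageForStatus(res.status);
  return { ok: error === null, error };
}
```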


Providing Context

The AI starts every session knowing nothing about your project. Context is the fuel that makes it useful.

Project Instruction Files

CLAUDE.md (for Claude Code):

```markdown
# Project: TaskFlow

## Architecture
- Next.js 14 app router with TypeScript
- SQLite database via Drizzle ORM
- Auth via NextAuth with GitHub OAuth
- Deployed on Vercel

## Conventions
- All API routes use the handler wrapper from /lib/api/handler.ts
- Components use shadcn/ui primitives
- Database queries go in /lib/db/queries.ts
- Types are co-located with their feature, not in a global types file

## Current State
- Auth is complete and working
- Task CRUD is complete
- Currently building: team collaboration features
- Known issue: task filtering is slow on >1000 tasks

## Do Not
- Do not add new npm packages without asking first
- Do not use class components
- Do not put business logic in API route files — use services
```
.cursorrules (for Cursor): Similar content, formatted per Cursor's conventions. Include the same architecture, conventions, and current state information.

.github/copilot-instructions.md (for Copilot): Same pattern, adapted for Copilot's context system.

Updating Context Files

Your project instruction file is a living document. Update it when:

  • You establish a new convention
  • The architecture changes
  • You complete a major feature (update "Current State")
  • You discover a pattern that keeps going wrong (add to "Do Not")
  • You add a new dependency

Anti-pattern: Writing the file once and never updating it. A stale instruction file is worse than none because the AI follows outdated guidance.

In-Prompt Context

For specific tasks, add context directly in the prompt:

```
The task creation form is in /components/tasks/TaskForm.tsx.
It currently supports title and description fields.
Add a priority dropdown (low, medium, high) and a due date picker.
The Task type is defined in /lib/types/task.ts and already has
priority and dueDate fields.
Use the same DatePicker component we use in /components/tasks/TaskFilter.tsx.
```

This prompt has:

  • File locations (so the AI knows where to look)
  • Current state (what exists)
  • Specific request (what to add)
  • Type information (what the data looks like)
  • Pattern reference (reuse existing component)
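The Task type the prompt references might look like this hypothetical sketch of /lib/types/task.ts (the field names and the runtime guard are illustrative, not from the source):

```typescript
// Hypothetical /lib/types/task.ts: priority and dueDate already exist
// on the type, as the prompt states.
type Priority = "low" | "medium" | "high";

interface Task {
  id: string;
  title: string;
  description: string;
  priority: Priority;
  dueDate: string | null; // ISO date string, null when unset
}

// Runtime guard the new dropdown could use to narrow untyped form values.
function isPriority(value: string): value is Priority {
  return value === "low" || value === "medium" || value === "high";
}
```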

Context Anti-Patterns

  • Assuming the AI remembers: "Like we discussed earlier" — it may not have that context. Be explicit.
  • Too much context: Pasting 5 files into the prompt when only 1 is relevant. The AI gets distracted.
  • No context: "Add a dropdown." Where? What options? What component library? The AI will guess.
  • Contradictory context: Project instruction file says one thing, prompt says another. The AI will pick one unpredictably.

Breaking Complex Tasks

AI works best on focused, well-defined tasks. Complex features need to be decomposed.

The Decomposition Framework

For any feature, break it into:

  1. Data layer: Schema changes, migrations, query functions
  2. API layer: Routes, handlers, validation
  3. UI layer: Components, pages, forms
  4. Integration: Connecting the layers, testing end-to-end

Prompt each layer separately, in order. Test each layer before moving to the next.

Example: Adding Comments to Tasks

Bad: "Add a comments feature to tasks — users should be able to add, edit, and delete comments on any task, with real-time updates."

Good sequence:

  1. "Add a comments table to the database schema with: id, task_id, user_id, content, created_at, updated_at. Create the migration."
  2. "Add query functions in /lib/db/queries.ts for: getCommentsByTaskId, createComment, updateComment, deleteComment."
  3. "Add API routes for comments: GET /api/tasks/[taskId]/comments, POST /api/tasks/[taskId]/comments, PATCH /api/comments/[commentId], DELETE /api/comments/[commentId]. Use the handler wrapper."
  4. "Add a CommentList component that displays comments for a task. Fetch from the API. Show author name, content, and timestamp. Add a 'delete' button for the comment author."
  5. "Add a CommentForm component with a textarea and submit button. POST to the API. Clear the form on success and refresh the comment list."
  6. "Add edit functionality: clicking a comment's edit button turns it into a textarea. Save updates via PATCH."

Six focused prompts, each building on the last, each testable independently.
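Step 2's query layer might look like the sketch below. An in-memory Map stands in for the real Drizzle/SQLite store so the shape of the functions is visible without database setup; all names are illustrative:

```typescript
// Hypothetical comment query functions (step 2 of the sequence).
// A Map replaces the real database layer for illustration only.
type Comment = {
  id: number;
  taskId: number;
  userId: number;
  content: string;
  createdAt: Date;
  updatedAt: Date;
};

const comments = new Map<number, Comment>();
let nextId = 1;

function createComment(taskId: number, userId: number, content: string): Comment {
  const now = new Date();
  const c: Comment = { id: nextId++, taskId, userId, content, createdAt: now, updatedAt: now };
  comments.set(c.id, c);
  return c;
}

function getCommentsByTaskId(taskId: number): Comment[] {
  return Array.from(comments.values()).filter((c) => c.taskId === taskId);
}

function updateComment(id: number, content: string): Comment | undefined {
  const c = comments.get(id);
  if (!c) return undefined;
  c.content = content;
  c.updatedAt = new Date();
  return c;
}

function deleteComment(id: number): boolean {
  return comments.delete(id);
}
```

Because this layer is isolated, you can test it in step 2 before any API route or component exists, which is the point of decomposing by layer.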

Signs You Need to Decompose Further

  • The AI generates more than 200 lines in one go
  • The generation modifies more than 3 files
  • You cannot test the output without building something else first
  • The AI starts making assumptions you did not state

Using Git Strategically

Git is not just version control in vibe coding — it is your undo button, your safety net, and your checkpoint system.

The Checkpoint Commit Pattern

Commit after every successful generation:

```bash
# AI generates task list component — it works
git add src/components/TaskList.tsx
git commit -m "add task list component with filtering"

# AI generates task creation form — it works
git add src/components/TaskForm.tsx src/app/api/tasks/route.ts
git commit -m "add task creation form and API route"

# AI tries to add editing — it breaks the list
git checkout -- src/components/TaskList.tsx  # Revert just the broken file
# Re-prompt with more specific instructions
```

Commit Messages for AI Codebases

Keep them descriptive. You will need to find specific points to revert to.

Good: "add task filtering by status and priority with URL params"
Bad: "update task list"

Branch Strategy

For features: Create a branch before starting a new feature. If the feature goes sideways, you can abandon the branch cleanly.

For experiments: git stash the experimental changes to set them aside. git stash pop restores them if the experiment is worth keeping; git stash drop discards them if it is not.

For refactoring: Always branch. Refactoring can touch many files, and reverting selective changes is painful.

The Nuclear Option

When things go truly wrong:

```bash
# See what changed
git diff

# If it is all bad, revert everything
git checkout -- .

# If only some files are bad, revert selectively
git checkout -- src/components/BrokenComponent.tsx

# If you need to go back further
git log --oneline -10  # Find the good commit
git revert <bad-commit-hash>  # Revert one commit safely
```

Knowing When the AI Is Stuck

The AI will never tell you it is stuck. It will keep generating variations that do not work. You need to recognize the signs.

Signs the AI Is Stuck

Oscillating fixes: The AI fixes problem A but reintroduces problem B. Then fixes B and reintroduces A. Back and forth.

Increasing complexity: Each iteration adds more code instead of fixing the issue. Workarounds on top of workarounds.

Same error, different code: The AI rewrites the function but the same error occurs because the root cause is elsewhere.

Contradictory changes: The AI adds a null check in one place and removes it in another, or toggles a setting back and forth.

Hallucinating solutions: The AI suggests using a method or API that does not exist, or provides a fix that has nothing to do with the error.

What to Do When the AI Is Stuck

  1. Stop generating. More iterations will not help.
  2. Read the error message yourself. Understand what is actually failing.
  3. Check the actual API/library documentation. The AI may be using an outdated or incorrect API.
  4. Simplify the problem. Remove code until you have a minimal reproduction.
  5. Provide the minimal reproduction to the AI. "Here is a 10-line reproduction of the bug. The error is [X]. The expected behavior is [Y]."
  6. If the AI still cannot solve it, solve it yourself. Some problems require human debugging — stepping through code, reading source code of dependencies, checking environment configuration.
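A minimal reproduction can be as small as two functions. This hypothetical example isolates an off-by-one-day date bug of the kind step 4 is meant to surface; the names and the bug scenario are illustrative:

```typescript
// Hypothetical minimal reproduction: a "due date shows the previous day"
// bug, stripped of all app code. Date-only ISO strings parse as UTC
// midnight, so rendering them in a western local timezone shifts them
// back a day.
function formatDueDateLocal(iso: string): string {
  return new Date(iso).toLocaleDateString("en-US"); // buggy: uses local tz
}

function formatDueDateUTC(iso: string): string {
  return new Date(iso).toLocaleDateString("en-US", { timeZone: "UTC" }); // fix
}

// formatDueDateUTC("2024-03-01") is "3/1/2024" in any environment;
// formatDueDateLocal can return "2/29/2024" west of UTC.
```

Handing the AI a file like this, plus the observed and expected outputs, removes every distraction except the root cause.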

The Three-Strike Rule

If the AI cannot solve a problem in 3 attempts with good prompts:

  • The problem is likely outside the AI's capability
  • Or your prompts are missing critical context
  • Or there is a fundamental misunderstanding

Stop, diagnose manually, then either fix it yourself or write a much more specific prompt with the exact root cause identified.


Session Management

Starting a Session

  1. Open your project instruction file — is it current?
  2. Run the app — does it still work from last session?
  3. Check git status — any uncommitted changes?
  4. Decide what you are building this session (one feature or task)
  5. Review relevant existing code before prompting

Ending a Session

  1. Commit all working changes
  2. Revert any broken experiments
  3. Update the project instruction file if conventions changed
  4. Note where you left off (in the instruction file or a TODO comment)
  5. Run the app one final time to confirm it works

Session Length

Productive AI pair programming sessions typically last 1-3 hours. After that:

  • Context windows fill up with stale information
  • Your review quality drops
  • Accumulated changes become hard to track

Start a fresh session instead of pushing through.

Anti-Patterns Summary

| Anti-Pattern | Consequence | Fix |
| --- | --- | --- |
| No project instruction file | AI reinvents conventions every session | Create and maintain one |
| Mega-prompts | Poor results, hard to debug | Decompose into focused steps |
| Never committing | No undo when things break | Commit after every working change |
| Fighting the AI past 3 tries | Wasted time, frustration | Diagnose manually, re-prompt specifically |
| Following the AI blindly | Architecture drift, bad patterns | Lead on architecture, follow on implementation |
| Stale context files | AI follows outdated guidance | Update after every significant change |
| Marathon sessions | Diminishing returns, review fatigue | Cap at 3 hours, start fresh |

