

Covers the unique challenges of maintaining codebases built primarily through AI code generation. Addresses inconsistent patterns across AI-generated files, refactoring AI sprawl, establishing coding conventions after the code already exists, documentation strategies for AI-built projects, and managing the specific forms of technical debt that AI tools create. Use when a vibe-coded project needs ongoing maintenance or has grown unwieldy.


# Maintaining AI Codebases

How to maintain, evolve, and keep healthy a codebase that was built primarily through AI prompting.


## The Maintenance Problem

AI-generated codebases have a unique maintenance challenge: they were written by an entity with no memory between sessions and no consistent personal style. Each generation may use different patterns, naming conventions, and approaches — even within the same project.

After 50-100 prompts, you often end up with a codebase that works but feels like it was written by a rotating team of contractors who never talked to each other. This is the AI sprawl problem, and addressing it is the core of AI codebase maintenance.


## Diagnosing AI Sprawl

### Signs Your Codebase Has Sprawl

**Inconsistent patterns across files:**

- Some API routes return `{ data }`, others return `{ result }`, others return the data directly
- Some components use hooks, others use render props, others use HOCs
- Error handling varies from try/catch to `.catch()` to uncaught across the codebase

**Duplicate utilities:**

- Three different date formatting functions in three different files
- Two separate API client wrappers that do the same thing
- Multiple validation schemas for the same data type

**Abandoned scaffolding:**

- Files generated early that were never cleaned up
- Empty component shells that were replaced by different implementations
- Config files for features that were never built

**Inconsistent naming:**

- `getUserById` in one file, `fetchUser` in another, `loadUserData` in a third
- `isLoading` in some components, `loading` in others, `isLoaded` (inverted) in others
- Database columns mixing camelCase and snake_case

**Orphaned dependencies:**

- Packages in `package.json` that nothing imports
- Packages that were replaced by alternatives but never removed

### Measuring Sprawl

Run these checks periodically:

1. **Dead code analysis:** Use your language's tools (TypeScript `noUnusedLocals`, ESLint `no-unused-vars`, `depcheck` for unused npm packages)
2. **Duplicate detection:** Tools like `jscpd` find copy-pasted code blocks
3. **Import analysis:** Check which files import what — orphaned files become visible
4. **Pattern grep:** Search for variations of the same concept (e.g., all error handling patterns, all fetch calls)
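The pattern grep in step 4 can be as simple as counting regex hits per file. A sketch, with an assumed example pattern list; `countPatterns` and the specific regexes are illustrative, not from any particular tool:

```typescript
// A sketch of a pattern grep: count how many times each variant of a
// concept appears in a file's source. The pattern list is an assumed
// example; extend it with whatever variants your audit turns up.
const patterns: Record<string, RegExp> = {
  "try/catch": /\btry\s*\{/g,
  ".catch()": /\.catch\s*\(/g,
  "raw fetch": /\bfetch\s*\(/g,
};

function countPatterns(source: string): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const [name, re] of Object.entries(patterns)) {
    counts[name] = (source.match(re) ?? []).length;
  }
  return counts;
}
```

Run it over every file under `/src` (for example with `fs.readdirSync` and recursion) and sum the counts per pattern; whichever variant dominates is usually the cheapest convention to standardize on.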

## Establishing Conventions Post-Generation

The ideal time to establish conventions is before writing code. The realistic time, in a vibe-coded project, is after you have a working app and want to keep it working.

### Step 1: Audit What You Have

Before deciding on conventions, document what the AI generated:

- **What patterns exist?** (List all the ways errors are handled, data is fetched, components are structured)
- **Which pattern is most common?** (This is often the best candidate for the convention)
- **Which pattern is best?** (Sometimes the minority pattern is actually better)

### Step 2: Choose Conventions

Pick one approach for each category:

**File structure:** Where do new files go? How are they named?

```
/src
  /components    # Shared UI components (PascalCase.tsx)
  /app           # Routes/pages
  /lib           # Utilities, database, shared logic
  /types         # TypeScript type definitions
```

**Data fetching:** One pattern for all data access.

```typescript
// Convention: All data fetching goes through /lib/api.ts
// All functions return Promise<{ data: T } | { error: string }>
```
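That convention can be encoded once as a shared result type and a single wrapper in `/lib/api.ts`. A minimal sketch; the `apiGet` name and the plain-fetch implementation are illustrative:

```typescript
// Shared result shape for the convention above: every fetch returns
// either { data } or { error } and never throws to the caller.
type ApiResult<T> = { data: T } | { error: string };

// One wrapper that all data fetching goes through.
async function apiGet<T>(path: string): Promise<ApiResult<T>> {
  try {
    const res = await fetch(path);
    if (!res.ok) return { error: `Request failed with status ${res.status}` };
    return { data: (await res.json()) as T };
  } catch {
    return { error: "Network error" };
  }
}
```

Callers narrow the union with the `in` operator: `if ("error" in result) { /* show message */ } else { /* use result.data */ }`.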

**Error handling:** One pattern for all error paths.

```typescript
// Convention: All API routes use this error wrapper
// All errors are logged server-side, generic message client-side
```
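One way to make that wrapper concrete is a higher-order function around each route handler. A sketch, with the Web-standard `Request`/`Response` types standing in for your framework's handler signature:

```typescript
// Minimal sketch of the error-wrapper convention: handler logic runs
// inside one shared try/catch, errors are logged server-side, and the
// client only ever sees a generic message.
type Handler = (req: Request) => Promise<Response>;

function withErrorHandling(handler: Handler): Handler {
  return async (req) => {
    try {
      return await handler(req);
    } catch (err) {
      console.error(err); // full detail stays server-side
      return new Response(JSON.stringify({ error: "Something went wrong" }), {
        status: 500,
        headers: { "Content-Type": "application/json" },
      });
    }
  };
}
```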

**Naming:** One style guide.

```typescript
// Convention: camelCase for variables and functions
// PascalCase for components and types
// SCREAMING_SNAKE for constants
// Descriptive names: getUserById, not getUser or fetchUserDataFromDatabase
```

### Step 3: Document in Your Project Instruction File

Put these conventions in `CLAUDE.md` (or `.cursorrules`, etc.) so the AI follows them going forward:

```markdown
# Conventions

## Error Handling
- All API routes wrap handler logic in try/catch
- Errors are logged with console.error server-side
- Client receives { error: "Human-readable message" } with appropriate status code
- Never expose stack traces or internal details to client

## Data Fetching
- All database queries go through functions in /lib/db/queries.ts
- All functions are async and return typed results
- Use Drizzle ORM query builder, not raw SQL

## Components
- All components are functional components with TypeScript
- Props are defined as a type above the component
- Use cn() for conditional class names
- Shared components go in /components, page-specific components stay with the page
```
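Applied to a component, the conventions in that file look roughly like this. `TaskBadge` and the inline `cn` are illustrative; in a real project `cn` lives in `/lib`, and the component returns JSX (omitted here so the sketch stays framework-free):

```typescript
// Props defined as a type above the component, per the convention file.
type TaskBadgeProps = {
  label: string;
  done: boolean;
};

// cn() joins conditional class names; a stand-in for the usual /lib helper.
function cn(...classes: Array<string | false | undefined>): string {
  return classes.filter(Boolean).join(" ");
}

// A functional component in props-to-output form; a real one returns JSX.
function TaskBadge({ label, done }: TaskBadgeProps) {
  return { className: cn("badge", done && "badge-done"), text: label };
}
```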

### Step 4: Migrate Gradually

Do not rewrite everything at once. Migrate to conventions as you touch files:

- When fixing a bug in a file, also update it to follow conventions
- When adding a feature, make the new code follow conventions
- Dedicate one session per week to convention alignment if the project is active

## Refactoring AI Sprawl

### The Consolidation Prompt

Use AI to help find and fix its own inconsistencies:

```text
Review the codebase for inconsistent patterns. Specifically:
1. List all different approaches to error handling across API routes
2. List all different approaches to data fetching in components
3. List any duplicate utility functions
4. List any files that appear to be unused

Do not make changes yet. Just report what you find.
```

Then, for each inconsistency:

```text
Refactor all API routes to use the error handling pattern from /api/tasks/route.ts.
Update these specific files: [list the files].
Do not change any business logic, only the error handling wrapper.
```

### Safe Refactoring Practices

**One pattern at a time.** Do not refactor error handling and data fetching and naming in the same session.

**Test between each change.** Refactoring should not change behavior. If tests break, the refactor was wrong.

**Commit before and after.** Every refactoring session starts with a clean commit and ends with a clean commit.

**Use find-and-replace for naming.** If you are renaming `fetchUser` to `getUserById` across the codebase, use IDE-wide rename, not AI. This is safer and faster.

### What to Refactor First

Priority order for maximum impact:

1. **Duplicate utility functions** — Merge into one, update all imports
2. **Error handling** — Inconsistent error handling causes bugs
3. **Data access patterns** — Inconsistent data fetching causes confusion
4. **Dead code removal** — Reduces cognitive load
5. **Naming conventions** — Improves readability but lower urgency
6. **File structure** — Moving files is high-risk, low-reward unless structure is truly chaotic
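Item 1 in practice: three near-identical formatters collapse into one exported utility, and every old import is pointed at it. The signature and the `Intl` options here are illustrative choices, not the one true API:

```typescript
// /lib/format.ts — the single surviving date formatter after merging
// three duplicates. Callers that used the old copies import this instead.
export function formatDate(date: Date, style: "short" | "long" = "short"): string {
  return new Intl.DateTimeFormat("en-US", {
    dateStyle: style === "short" ? "medium" : "full",
  }).format(date);
}
```

After the merge, an IDE-wide "find usages" on each deleted duplicate confirms nothing still imports it.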

## Documentation Strategies

AI-generated code is often under-documented because the AI does not know what will be confusing to future readers (including future AI sessions).

### What to Document

**Architecture decisions:** Why you chose this database, this framework, this approach. The AI chose the implementation; you need to record the reasoning.

**Non-obvious business logic:** If a function has logic that is not self-evident from its name and parameters, add a comment explaining why.

**AI-generated workarounds:** If the AI generated a workaround for a bug or limitation, document what it works around and link to the issue. Otherwise you will forget why the weird code exists.

**Integration points:** How your app connects to external services, what environment variables are needed, what the expected data format is.
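A workaround note can be this small. The library and bug below are hypothetical; the point is that the comment says what is worked around and when it can be deleted:

```typescript
// WORKAROUND: our (hypothetical) markdown renderer drops the trailing
// newline inside fenced code blocks (link the upstream issue in your
// tracker). Re-append it here; delete this function once the dependency
// is bumped past the fix.
function restoreTrailingNewline(rendered: string): string {
  return rendered.endsWith("\n") ? rendered : rendered + "\n";
}
```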

### What Not to Document

**What the code does line by line.** If the code is so unclear it needs line-by-line comments, refactor it instead.

**Obvious function signatures.** `getUserById(id: string): Promise<User>` does not need a JSDoc comment explaining that it gets a user by their ID.

**AI session history.** Do not keep logs of what prompts you used. They are not useful for maintenance.

### The Architecture Decision Record (ADR)

For significant decisions, keep a simple log:

```markdown
# Architecture Decisions

## 2025-03-15: SQLite over PostgreSQL
- Chose SQLite for the MVP because: single-file deployment, no separate server,
  fast enough for expected load (< 100 concurrent users)
- Will migrate to PostgreSQL if: we need concurrent writes, full-text search,
  or the database exceeds 1GB

## 2025-03-20: Server Components by Default
- All pages use React Server Components unless they need interactivity
- Client components are marked with "use client" and kept small
- Reason: better performance, simpler data fetching
```

## Managing Technical Debt from AI

AI technical debt is different from human technical debt. Humans create debt through shortcuts they understand. AI creates debt through patterns the developer may not fully understand.

### Types of AI Technical Debt

**Cargo-cult patterns:** Code that follows a pattern without understanding why. It works, but if conditions change, no one knows why it breaks.

**Implicit dependencies:** AI code may depend on specific execution order, global state, or side effects that are not documented or obvious.

**Version-locked code:** AI generates code for the library versions in its training data. When you update dependencies, AI patterns may break.

**Over-engineered foundations:** AI builds "flexible" systems early that constrain later development. Removing them is harder than building on them.

**Inconsistency debt:** The cumulative cost of having 4 patterns for the same thing. Every new developer (or AI session) has to figure out which pattern to follow.

### Paying Down AI Debt

**Weekly consistency passes:** Spend 30 minutes per week aligning code to conventions. Small, regular effort prevents debt from compounding.

**Pre-feature audits:** Before adding a major feature, audit the area of the codebase it will touch. Fix inconsistencies before building on them.

**Dependency updates:** Monthly, check for outdated dependencies. Update one at a time. Test after each.

**Remove what you do not use:** AI generates optimistically. It creates handlers for cases that never occur, components for features that were never built, utilities that nothing calls. Delete them.


## Long-Term Health Practices

### The Monthly Health Check

Once a month, run through this checklist:

1. Run the linter and fix all warnings — not just errors
2. Run dead code detection — remove anything unused
3. Check dependency health — outdated packages, security advisories
4. Review the project instruction file — is it still accurate?
5. Check for TODO/FIXME comments — address or delete them
6. Run the full test suite — fix any flaky tests
7. Check bundle size — has it grown unexpectedly?

### When to Rewrite vs Refactor

**Refactor when:**

- The code works but is messy
- The architecture is sound but implementation is inconsistent
- You can improve incrementally without breaking things

**Rewrite when:**

- The architecture is fundamentally wrong for your needs
- The codebase has more dead code than live code
- Every new feature requires working around existing code
- You have spent more time maintaining than building for 3 consecutive months

A vibe-coded rewrite is fine. If the original was built with AI, building v2 with AI (using the lessons learned) is often faster and better than trying to salvage v1.


## Anti-Patterns Summary

| Anti-Pattern | Consequence | Fix |
| --- | --- | --- |
| No conventions after MVP | Sprawl compounds with every feature | Establish conventions once the MVP works |
| Big-bang refactoring | Breaks everything, hard to debug | Refactor one pattern at a time |
| No project instruction file | AI re-introduces old patterns | Maintain and update the file |
| Ignoring dead code | Cognitive load, confusion | Delete aggressively |
| Never updating dependencies | Security vulnerabilities, version lock | Monthly dependency check |
| No architecture docs | Decisions are forgotten, repeated | Keep simple ADRs |
| Rewriting too early | Wastes time on code that works | Refactor first, rewrite only if needed |

