
# MongoDB

Build with MongoDB as a document database. Use this skill when the project needs flexible, document-oriented storage.


## MongoDB Integration

You are a database specialist who integrates MongoDB into projects. MongoDB is a document database that stores data as flexible JSON-like documents (BSON), with powerful querying, aggregation pipelines, and horizontal scaling via sharding.

### Core Philosophy

#### Documents, not rows

MongoDB stores data as documents — nested JSON objects with flexible schemas. You don't need migrations to add a field. Documents in the same collection can have different shapes. Design your documents around how your application reads data.
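For example, two documents in the same `posts` collection can have different shapes (illustrative data, not from the original):

```typescript
// The second post gained `tags` and `reviewedBy` without any migration.
type Post = {
  title: string;
  content: string;
  createdAt: Date;
  tags?: string[];      // newer documents only
  reviewedBy?: string;  // newer documents only
};

const posts: Post[] = [
  { title: 'Hello', content: '...', createdAt: new Date('2024-01-01') },
  {
    title: 'Deep dive', content: '...', createdAt: new Date('2024-02-01'),
    tags: ['mongodb'], reviewedBy: 'alice',
  },
];

// Readers simply treat the newer fields as optional:
for (const post of posts) {
  console.log(post.title, post.tags ?? []);
}
```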

#### Embed vs reference

The key design decision in MongoDB is whether to embed related data inside a document or reference it by ID. Embed for data that's read together. Reference for data that's shared across documents or grows unboundedly.
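As a sketch (illustrative shapes, not from the original): comments that are always read with their post are embedded, while the author, shared across many posts, is referenced by ID:

```typescript
// Embedded: comments are read together with the post, so they live inside it.
const post = {
  _id: 'post1',
  title: 'Hello',
  comments: [{ author: 'bob', text: 'Nice!' }],
  // Referenced: the author document is shared across many posts,
  // so the post stores only its id.
  authorId: 'user42',
};

// The referenced author lives in its own collection:
const users = [{ _id: 'user42', name: 'Alice' }];
const author = users.find((u) => u._id === post.authorId);
console.log(author?.name); // Alice
```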

#### Aggregation pipelines

MongoDB's aggregation framework is a pipeline of stages — `$match`, `$group`, `$lookup`, `$project`, `$sort`. Think of it as Unix pipes for your data. Each stage transforms the documents and passes them to the next.
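The pipe analogy can be modeled in plain TypeScript (a conceptual sketch, not driver code): each stage is a function from documents to documents, and the pipeline composes them left to right:

```typescript
type Doc = { status?: string; views?: number };
type Stage = (docs: Doc[]) => Doc[];

// Simplified stand-ins for $match and $limit:
const match = (pred: (d: Doc) => boolean): Stage => (docs) => docs.filter(pred);
const limit = (n: number): Stage => (docs) => docs.slice(0, n);

// Compose stages left to right, like Unix pipes:
const pipeline = (...stages: Stage[]) => (docs: Doc[]) =>
  stages.reduce((acc, stage) => stage(acc), docs);

const run = pipeline(match((d) => d.status === 'published'), limit(2));
const out = run([
  { status: 'draft' },
  { status: 'published', views: 10 },
  { status: 'published', views: 20 },
  { status: 'published', views: 30 },
]);
console.log(out.length); // 2
```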

### Setup

#### Install

```bash
# Native driver
npm install mongodb

# Or Mongoose ODM
npm install mongoose
```

#### Connect (native driver)

```typescript
import { MongoClient } from 'mongodb';

const client = new MongoClient(process.env.MONGODB_URI!);
const db = client.db('myapp');
```

#### Connect (Mongoose)

```typescript
import mongoose from 'mongoose';

await mongoose.connect(process.env.MONGODB_URI!);
```

### Key Techniques

#### CRUD with native driver

```typescript
import { ObjectId } from 'mongodb';

const posts = db.collection('posts');

// Insert
const result = await posts.insertOne({
  title: 'Hello',
  content: '...',
  authorId: new ObjectId(userId),
  status: 'draft',
  tags: ['intro'],
  createdAt: new Date(),
});

// Find
const post = await posts.findOne({ _id: new ObjectId(postId) });

const published = await posts
  .find({ status: 'published' })
  .sort({ createdAt: -1 })
  .limit(20)
  .toArray();

// Update
await posts.updateOne(
  { _id: new ObjectId(postId) },
  { $set: { title: 'Updated', updatedAt: new Date() } }
);

// Upsert
await posts.updateOne(
  { slug: 'hello-world' },
  { $set: { title: 'Hello World', content: '...' } },
  { upsert: true }
);

// Delete
await posts.deleteOne({ _id: new ObjectId(postId) });

// Bulk write
await posts.bulkWrite([
  { updateOne: { filter: { _id: id1 }, update: { $set: { status: 'published' } } } },
  { deleteOne: { filter: { _id: id2 } } },
]);
```

#### Mongoose schemas and models

```typescript
import { Schema, model, type InferSchemaType } from 'mongoose';

const postSchema = new Schema({
  title: { type: String, required: true },
  content: { type: String, required: true },
  slug: { type: String, required: true, unique: true },  // unique: true creates its own index
  status: { type: String, enum: ['draft', 'published'], default: 'draft' },
  author: { type: Schema.Types.ObjectId, ref: 'User', required: true },
  tags: [String],
  viewCount: { type: Number, default: 0 },
}, { timestamps: true });

postSchema.index({ status: 1, createdAt: -1 });
postSchema.index({ author: 1 });

type Post = InferSchemaType<typeof postSchema>;
const Post = model('Post', postSchema);

// Create
const post = await Post.create({
  title: 'Hello',
  content: '...',
  slug: 'hello',
  author: userId,
});

// Find with populate (join)
const posts = await Post.find({ status: 'published' })
  .populate('author', 'name email')
  .sort({ createdAt: -1 })
  .limit(20)
  .lean();  // Returns plain objects, not Mongoose documents

// Update
await Post.findByIdAndUpdate(postId, { title: 'Updated' }, { new: true });

// Delete
await Post.findByIdAndDelete(postId);
```

#### Aggregation pipeline

```typescript
const stats = await posts.aggregate([
  { $match: { status: 'published' } },
  { $group: {
    _id: '$authorId',
    totalPosts: { $sum: 1 },
    avgViews: { $avg: '$viewCount' },
    tags: { $push: '$tags' },
  }},
  { $sort: { totalPosts: -1 } },
  { $limit: 10 },
  { $lookup: {
    from: 'users',
    localField: '_id',
    foreignField: '_id',
    as: 'author',
  }},
  { $unwind: '$author' },
  { $project: {
    authorName: '$author.name',
    totalPosts: 1,
    avgViews: { $round: ['$avgViews', 0] },
  }},
]).toArray();
```

#### Transactions

```typescript
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    await posts.updateOne(
      { _id: postId },
      { $set: { status: 'published' } },
      { session }
    );
    await notifications.insertOne(
      { userId: authorId, type: 'published', entityId: postId },
      { session }
    );
  });
} finally {
  await session.endSession();
}
```

#### Change streams (real-time)

```typescript
const changeStream = posts.watch([
  { $match: { 'fullDocument.status': 'published' } }
]);

changeStream.on('change', (change) => {
  if (change.operationType === 'insert') {
    console.log('New published post:', change.fullDocument);
  }
});
```

#### Atlas Search (full-text)

```typescript
const results = await posts.aggregate([
  { $search: {
    index: 'default',
    text: { query: searchTerm, path: ['title', 'content'] },
  }},
  { $limit: 10 },
  { $project: { title: 1, score: { $meta: 'searchScore' } } },
]).toArray();
```

### Best Practices

- Embed data that's read together — avoid joins for common read patterns
- Use `.lean()` in Mongoose when you don't need document methods — much faster
- Create indexes for every query pattern — use `explain()` to verify
- Use aggregation pipelines for complex data transformations
- Use transactions for multi-document operations that must be atomic
- Use change streams for real-time features instead of polling
- Close connections in scripts: `await client.close()` or `await mongoose.disconnect()`
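The `explain()` check can be sketched by comparing two fields of the `executionStats` output (the field names follow MongoDB's explain format; the helper and its threshold are illustrative assumptions):

```typescript
// A healthy indexed query examines roughly as many documents as it returns;
// a collection scan examines far more. `nReturned` and `totalDocsExamined`
// are fields of explain('executionStats'); `probablyIndexed` is a sketch.
interface Stats { nReturned: number; totalDocsExamined: number }

function probablyIndexed({ nReturned, totalDocsExamined }: Stats): boolean {
  return totalDocsExamined <= nReturned * 2 + 1; // crude heuristic
}

// e.g. const { executionStats } = await posts
//   .find({ status: 'published' }).explain('executionStats');
console.log(probablyIndexed({ nReturned: 20, totalDocsExamined: 20 }));   // true
console.log(probablyIndexed({ nReturned: 20, totalDocsExamined: 5400 })); // false
```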

### Anti-Patterns

- Normalizing data like a relational database — embed for read performance
- Not creating indexes — queries scan the entire collection
- Storing unbounded arrays in documents — document size limit is 16MB
- Using `find()` without `.limit()` — returns everything
- Creating a new `MongoClient` per request — reuse the connection
- Using `$lookup` (joins) extensively — restructure your data model instead
- Not using `.lean()` in Mongoose when you only need the data
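The "reuse the connection" point can be sketched as a lazy singleton. The `getClient` shape in the comment is an assumption, not from the original; the generic helper and counting demo are illustrative:

```typescript
// Lazy singleton: the factory runs once, and every later call reuses the
// same instance. This is the pattern behind one MongoClient per process.
function lazySingleton<T>(factory: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= factory());
}

// Intended use (assumption, not from the original):
//   const getClient = lazySingleton(() => new MongoClient(process.env.MONGODB_URI!));
//   const db = getClient().db('myapp');  // every request shares one client

// Demo with a counting factory to show it runs exactly once:
let created = 0;
const getConn = lazySingleton(() => ({ id: ++created }));
getConn();
getConn();
getConn();
console.log(created); // 1
```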
