
Firebase Firestore

Build with Firebase Firestore as a NoSQL document database. Use this skill when adding Firestore reads, writes, real-time listeners, or security rules to a project.

Quick Summary
You are a backend specialist who integrates Firebase Firestore into projects.
Firestore is a NoSQL document database with real-time sync, offline support, and
security rules that scale from prototype to production.

## Key Points

- Design data for your queries — denormalize, don't normalize
- Use `serverTimestamp()` for all timestamps — consistent across clients
- Always unsubscribe from listeners when components unmount
- Use batched writes for multiple related updates (atomic)
- Use transactions for read-then-write operations
- Create composite indexes for multi-field queries (Firestore prompts you)
- Use collection group queries sparingly — they need indexes
- Keep documents under 1MB, ideally under 10KB

## Anti-Patterns

- Modeling Firestore like a relational database with normalized tables
- Not unsubscribing from listeners — causes memory leaks
- Using client-side joins across collections — slow and expensive
- Storing arrays that grow unbounded — document size limit is 1MB

## Quick Example

```bash
npm install firebase
```

```typescript
import { initializeApp, cert } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

initializeApp({ credential: cert(serviceAccount) });
const adminDb = getFirestore();
```

# Firebase Firestore Integration

You are a backend specialist who integrates Firebase Firestore into projects. Firestore is a NoSQL document database with real-time sync, offline support, and security rules that scale from prototype to production.

## Core Philosophy

### Documents and collections, not tables

Firestore organizes data as documents (JSON-like objects) inside collections. Think in terms of document trees, not relational tables. Denormalize data for read performance — duplicating data across documents is normal and expected.
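A minimal sketch of that duplication at write time (the shapes and field names here are illustrative, not prescribed by Firestore):

```typescript
// Hypothetical shapes; adjust to your own schema.
interface Profile { id: string; name: string; avatarUrl: string; }
interface PostInput { title: string; content: string; }

// Build the denormalized document: the post carries a copy of the
// author fields a feed needs, so rendering never requires a second read.
function buildPostDoc(input: PostInput, author: Profile) {
  return {
    ...input,
    authorId: author.id,
    authorName: author.name,       // duplicated on purpose
    authorAvatar: author.avatarUrl, // duplicated on purpose
  };
}

const postDoc = buildPostDoc(
  { title: 'Hello', content: '...' },
  { id: 'u1', name: 'Alice', avatarUrl: 'https://example.com/a.png' }
);
```

The trade-off: when a profile changes, every copied field must be updated wherever it was duplicated (typically via a batched write or Cloud Function fan-out).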

### Security rules are your backend

Firestore security rules run on Google's servers and validate every read/write. They replace traditional backend authorization logic. If your rules are solid, you can safely let clients talk directly to Firestore.

### Offline by default

Firestore caches data locally and syncs when connectivity returns. Design your data model knowing that reads may return cached data and writes may be queued.
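As a toy illustration of that mental model (this is not how the SDK is implemented, just the queue-then-flush idea): writes accumulate while offline and replay in order on reconnect.

```typescript
type QueuedWrite = { path: string; data: Record<string, unknown> };

// Toy model of offline queuing. The real SDK also serves pending
// writes back to local reads (latency compensation).
class OfflineWriteQueue {
  private pending: QueuedWrite[] = [];

  enqueue(write: QueuedWrite): void {
    this.pending.push(write);
  }

  // Called on reconnect; applies queued writes in original order.
  flush(send: (w: QueuedWrite) => void): void {
    while (this.pending.length > 0) {
      send(this.pending.shift()!);
    }
  }
}

const queue = new OfflineWriteQueue();
queue.enqueue({ path: 'posts/p1', data: { title: 'Hello' } });
queue.enqueue({ path: 'posts/p1', data: { title: 'Hello, edited' } });

const sent: string[] = [];
queue.flush((w) => sent.push(`${w.path}:${String(w.data.title)}`));
// Order is preserved: the edit lands after the create.
```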

## Setup

### Install

```bash
npm install firebase
```

### Initialize

```typescript
import { initializeApp } from 'firebase/app';
import { getFirestore } from 'firebase/firestore';

const app = initializeApp({
  apiKey: process.env.NEXT_PUBLIC_FIREBASE_API_KEY,
  authDomain: process.env.NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN,
  projectId: process.env.NEXT_PUBLIC_FIREBASE_PROJECT_ID,
});

const db = getFirestore(app);
```

### Admin SDK (server-side)

```typescript
import { initializeApp, cert } from 'firebase-admin/app';
import { getFirestore } from 'firebase-admin/firestore';

// serviceAccount: your parsed service-account JSON, loaded elsewhere
initializeApp({ credential: cert(serviceAccount) });
const adminDb = getFirestore();
```

## Key Techniques

### CRUD operations

```typescript
import {
  collection, doc, getDoc, getDocs, addDoc, setDoc,
  updateDoc, deleteDoc, query, where, orderBy, limit,
  serverTimestamp
} from 'firebase/firestore';

// Create (auto-generated ID)
const docRef = await addDoc(collection(db, 'posts'), {
  title: 'Hello',
  content: '...',
  authorId: userId,
  status: 'draft',
  createdAt: serverTimestamp(),
});

// Create (custom ID)
await setDoc(doc(db, 'profiles', userId), {
  name: 'Alice',
  plan: 'free',
  createdAt: serverTimestamp(),
});

// Read single document
const snap = await getDoc(doc(db, 'posts', postId));
if (snap.exists()) {
  const post = { id: snap.id, ...snap.data() };
}

// Read collection with query
const q = query(
  collection(db, 'posts'),
  where('status', '==', 'published'),
  orderBy('createdAt', 'desc'),
  limit(20)
);
const querySnap = await getDocs(q);
const posts = querySnap.docs.map(d => ({ id: d.id, ...d.data() }));

// Update
await updateDoc(doc(db, 'posts', postId), {
  title: 'Updated',
  updatedAt: serverTimestamp(),
});

// Delete
await deleteDoc(doc(db, 'posts', postId));
```

### Real-time listeners

```typescript
import {
  onSnapshot, query, where, orderBy, limit, collection
} from 'firebase/firestore';

const q = query(
  collection(db, 'messages'),
  where('roomId', '==', roomId),
  orderBy('createdAt', 'desc'),
  limit(50)
);

const unsubscribe = onSnapshot(q, (snapshot) => {
  const messages = snapshot.docs.map(d => ({ id: d.id, ...d.data() }));
  setMessages(messages); // e.g. a React state setter

  // Track changes
  snapshot.docChanges().forEach((change) => {
    if (change.type === 'added') console.log('New:', change.doc.data());
    if (change.type === 'modified') console.log('Modified:', change.doc.data());
    if (change.type === 'removed') console.log('Removed:', change.doc.data());
  });
});

// Clean up when the view unmounts
unsubscribe();
```

### Batched writes

```typescript
import { writeBatch, doc } from 'firebase/firestore';

const batch = writeBatch(db);
batch.set(doc(db, 'posts', 'post1'), { title: 'Post 1' });
batch.update(doc(db, 'posts', 'post2'), { status: 'published' });
batch.delete(doc(db, 'posts', 'post3'));
await batch.commit(); // Atomic — all succeed or all fail
```

### Transactions

```typescript
import { runTransaction, doc } from 'firebase/firestore';

await runTransaction(db, async (transaction) => {
  const postRef = doc(db, 'posts', postId);
  const postSnap = await transaction.get(postRef);

  if (!postSnap.exists()) throw new Error('Post not found');

  const newLikes = (postSnap.data().likes || 0) + 1;
  transaction.update(postRef, { likes: newLikes });
});
```

### Subcollections

```typescript
import { collectionGroup } from 'firebase/firestore';

// Posts -> Comments subcollection
await addDoc(collection(db, 'posts', postId, 'comments'), {
  text: 'Great post!',
  authorId: userId,
  createdAt: serverTimestamp(),
});

// Query subcollection
const comments = await getDocs(
  query(
    collection(db, 'posts', postId, 'comments'),
    orderBy('createdAt', 'asc')
  )
);

// Collection group query (all comments across all posts)
const allComments = await getDocs(
  query(collectionGroup(db, 'comments'), where('authorId', '==', userId))
);
```

## Security Rules

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Profiles: public read, owner write
    match /profiles/{userId} {
      allow read: if true;
      allow write: if request.auth != null && request.auth.uid == userId;
    }

    // Posts: public read when published, owner read/write otherwise
    match /posts/{postId} {
      allow read: if resource.data.status == 'published'
        || (request.auth != null && request.auth.uid == resource.data.authorId);
      allow create: if request.auth != null && request.resource.data.authorId == request.auth.uid;
      allow update, delete: if request.auth != null && request.auth.uid == resource.data.authorId;

      // Comments subcollection
      match /comments/{commentId} {
        allow read: if true;
        allow create: if request.auth != null;
        allow delete: if request.auth != null && request.auth.uid == resource.data.authorId;
      }
    }
  }
}
```
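Rules can also validate the shape of an incoming write, not just who sent it. A sketch of an extended `create` rule for the posts collection above (the specific field constraints are illustrative, not required):

```
// Inside match /posts/{postId}:
allow create: if request.auth != null
  && request.resource.data.authorId == request.auth.uid
  && request.resource.data.title is string
  && request.resource.data.title.size() <= 200
  && request.resource.data.status in ['draft', 'published'];
```

`request.resource.data` is the document as it would exist after the write, so constraints expressed here are enforced server-side on every client write.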

## Data Modeling Patterns

### Denormalize for reads

```typescript
// Instead of joining users and posts, embed author data
await addDoc(collection(db, 'posts'), {
  title: 'Hello',
  authorId: userId,
  authorName: 'Alice',     // Denormalized
  authorAvatar: avatarUrl, // Denormalized
  createdAt: serverTimestamp(),
});
```

### Counters (distributed)

```typescript
// Use a Cloud Function to maintain counters,
// or use increment() for simple cases
import { increment } from 'firebase/firestore';

await updateDoc(doc(db, 'posts', postId), {
  viewCount: increment(1),
});
```
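The reason counters get distributed: a single document only sustains on the order of one write per second, so hot counters are typically split across N shard documents (incremented at random, summed on read). A toy model of the shard logic, with plain numbers standing in for shard documents:

```typescript
// Toy model: each array slot stands in for a shard document such as
// posts/{id}/shards/{i}. A real implementation would increment(1) one
// shard at random, spreading write load across N documents.
const NUM_SHARDS = 10;
const shards: number[] = new Array(NUM_SHARDS).fill(0);

function incrementCounter(by = 1): void {
  const i = Math.floor(Math.random() * NUM_SHARDS); // pick a random shard
  shards[i] += by;
}

function readCounter(): number {
  // On read, sum all shards (one subcollection query in practice)
  return shards.reduce((sum, n) => sum + n, 0);
}

for (let k = 0; k < 100; k++) incrementCounter();
// readCounter() returns 100 regardless of how increments were distributed
```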

## Best Practices

- Design data for your queries — denormalize, don't normalize
- Use `serverTimestamp()` for all timestamps — consistent across clients
- Always unsubscribe from listeners when components unmount
- Use batched writes for multiple related updates (atomic)
- Use transactions for read-then-write operations
- Create composite indexes for multi-field queries (Firestore prompts you)
- Use collection group queries sparingly — they need indexes
- Keep documents under 1MB, ideally under 10KB
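For the document-size guideline, a rough development-time guardrail can approximate a document's size from its JSON bytes (Firestore's real accounting also counts field names, the document path, and per-field overhead, so treat this strictly as an estimate):

```typescript
// Rough size estimate in bytes: UTF-8 length of the JSON encoding.
// A development guardrail, not Firestore's exact size calculation.
function approxDocSize(data: unknown): number {
  return new TextEncoder().encode(JSON.stringify(data)).length;
}

const MAX_DOC_BYTES = 1_048_576; // Firestore's 1 MiB document limit

function assertDocFits(data: unknown): void {
  const size = approxDocSize(data);
  if (size > MAX_DOC_BYTES) {
    throw new Error(`document ~${size} bytes exceeds the 1 MiB limit`);
  }
}

assertDocFits({ title: 'Hello', tags: ['a', 'b'] }); // small doc, passes
```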

## Anti-Patterns

- Modeling Firestore like a relational database with normalized tables
- Not unsubscribing from listeners — causes memory leaks
- Using client-side joins across collections — slow and expensive
- Storing arrays that grow unbounded — document size limit is 1MB
- Not writing security rules — production mode denies everything, test mode leaves data wide open
- Using `getDoc` in loops — use batch reads or restructure data
- Ignoring Firestore pricing — reads, writes, and deletes all cost money

Install this skill directly: skilldb add database-services-skills
