# Quickwit — Search Integration

You are an expert in integrating Quickwit for search and log analytics functionality.

## Core Philosophy

### Overview

Quickwit is an open-source, cloud-native search engine designed for log management and distributed tracing. It is built in Rust and stores data on object storage (S3, GCS, Azure Blob) while delivering sub-second search performance through an indexing architecture based on tantivy. Quickwit provides an Elasticsearch-compatible API, native OpenTelemetry support for logs and traces, and scales compute independently from storage. It is the right choice when you need cost-efficient search over large volumes of append-mostly data such as logs, traces, security events, or audit records.

## Setup & Configuration

### Running with Docker

```bash
docker run -d \
  --name quickwit \
  -p 7280:7280 \
  -v quickwit-data:/quickwit/qwdata \
  quickwit/quickwit \
  run
```

Port 7280 serves the REST API and the built-in UI.

### Running with Object Storage (S3)

```bash
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export QW_DEFAULT_INDEX_ROOT_URI=s3://your-bucket/quickwit-indexes

docker run -d \
  --name quickwit \
  -p 7280:7280 \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e QW_DEFAULT_INDEX_ROOT_URI \
  quickwit/quickwit \
  run
```

### Client Setup

Quickwit exposes a REST API. Use any HTTP client:

```typescript
import axios from "axios";

const qw = axios.create({
  baseURL: "http://localhost:7280/api/v1",
  headers: { "Content-Type": "application/json" },
});
```

```python
import requests

QW_URL = "http://localhost:7280/api/v1"

def qw_request(method: str, path: str, json=None):
    resp = requests.request(method, f"{QW_URL}{path}", json=json)
    resp.raise_for_status()
    return resp.json()
```

## Core Patterns

### Creating an Index

Quickwit indexes are defined with a YAML or JSON configuration that specifies the document schema, indexing settings, and search settings:

```typescript
await qw.post("/indexes", {
  version: "0.7",
  index_id: "app-logs",
  doc_mapping: {
    field_mappings: [
      { name: "timestamp", type: "datetime", input_formats: ["iso8601", "unix_timestamp"], fast: true },
      { name: "severity", type: "text", tokenizer: "raw", fast: true },
      { name: "service", type: "text", tokenizer: "raw", fast: true },
      { name: "message", type: "text", tokenizer: "default", record: "position" },
      { name: "trace_id", type: "text", tokenizer: "raw" },
    ],
    timestamp_field: "timestamp",
    tag_fields: ["service", "severity"],
  },
  indexing_settings: {
    commit_timeout_secs: 30,
  },
  search_settings: {
    default_search_fields: ["message"],
  },
});
```

### Ingesting Documents

```typescript
// Quickwit uses newline-delimited JSON (NDJSON) for ingestion
const docs = [
  { timestamp: "2026-03-17T10:00:00Z", severity: "ERROR", service: "api", message: "Connection refused to database", trace_id: "abc123" },
  { timestamp: "2026-03-17T10:00:01Z", severity: "INFO", service: "worker", message: "Job completed successfully", trace_id: "def456" },
  { timestamp: "2026-03-17T10:00:02Z", severity: "WARN", service: "api", message: "Slow query detected: 2300ms", trace_id: "ghi789" },
];

const ndjson = docs.map((d) => JSON.stringify(d)).join("\n");

await qw.post("/app-logs/ingest", ndjson, {
  headers: { "Content-Type": "application/x-ndjson" },
});
```

```bash
# Ingest from a file
curl -X POST "http://localhost:7280/api/v1/app-logs/ingest" \
  -H "Content-Type: application/x-ndjson" \
  --data-binary @logs.ndjson
```

### Searching Documents

```typescript
// Quickwit uses a query language similar to Elasticsearch
const result = await qw.get("/app-logs/search", {
  params: {
    query: 'severity:ERROR AND message:"connection refused"',
    max_hits: 20,
    start_timestamp: Math.floor(new Date("2026-03-17").getTime() / 1000),
    end_timestamp: Math.floor(new Date("2026-03-18").getTime() / 1000),
    sort_by: "timestamp",
    sort_order: "desc",
  },
});

for (const hit of result.data.hits) {
  console.log(hit.timestamp, hit.severity, hit.message);
}
```

### Elasticsearch-Compatible Search API

```typescript
// Use the ES-compatible endpoint for applications migrating from Elasticsearch
const result = await axios.post(
  "http://localhost:7280/api/v1/_elastic/app-logs/_search",
  {
    query: {
      bool: {
        must: [{ match: { message: "connection refused" } }],
        filter: [
          { term: { severity: "ERROR" } },
          { range: { timestamp: { gte: "2026-03-17T00:00:00Z", lt: "2026-03-18T00:00:00Z" } } },
        ],
      },
    },
    sort: [{ timestamp: { order: "desc" } }],
    size: 20,
  }
);
```

### Aggregations

```typescript
const result = await axios.post(
  "http://localhost:7280/api/v1/_elastic/app-logs/_search",
  {
    query: { match_all: {} },
    aggs: {
      severity_counts: {
        terms: { field: "severity", size: 10 },
      },
      logs_over_time: {
        date_histogram: {
          field: "timestamp",
          fixed_interval: "1h",
        },
      },
    },
    size: 0,
  }
);
```

### OpenTelemetry Integration

Quickwit natively ingests OpenTelemetry logs and traces via OTLP gRPC (port 7281):

```yaml
# OpenTelemetry Collector config pointing to Quickwit
exporters:
  otlp/quickwit:
    endpoint: "http://quickwit:7281"
    tls:
      insecure: true

service:
  pipelines:
    logs:
      exporters: [otlp/quickwit]
    traces:
      exporters: [otlp/quickwit]
```

### Deleting an Index

```typescript
await qw.delete("/indexes/app-logs");
```

## Best Practices

- **Always define a `timestamp_field`** — Quickwit uses the timestamp field to prune search splits (its equivalent of shards). Without it, every query scans all data, negating the performance benefit of time-based partitioning.
- **Use `tag_fields` for low-cardinality filters** — tag field values are stored in split metadata, which lets Quickwit skip entire splits during search. Mark low-cardinality fields you frequently filter on, such as `service`, `severity`, or `tenant_id`, as tags.
- **Batch ingestion for throughput** — send documents in batches of thousands rather than one at a time. Quickwit is optimized for bulk ingestion and commits based on `commit_timeout_secs` or document count thresholds.
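The batching practice above can be sketched as follows. This is a minimal illustration; `chunkDocs`, `toNdjson`, and `BATCH_SIZE` are hypothetical helper names, not part of Quickwit's API, and the batch size is an assumption you should tune for your workload:

```typescript
// Split an array of documents into fixed-size batches for bulk ingestion.
// BATCH_SIZE is an illustrative starting point, not a Quickwit constant.
const BATCH_SIZE = 5_000;

function chunkDocs<T>(docs: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < docs.length; i += size) {
    batches.push(docs.slice(i, i + size));
  }
  return batches;
}

// Serialize a batch as NDJSON, the format Quickwit's ingest endpoint expects.
function toNdjson(docs: object[]): string {
  return docs.map((d) => JSON.stringify(d)).join("\n");
}

// Usage (assumes the `qw` axios client from Client Setup):
// for (const batch of chunkDocs(allDocs, BATCH_SIZE)) {
//   await qw.post("/app-logs/ingest", toNdjson(batch), {
//     headers: { "Content-Type": "application/x-ndjson" },
//   });
// }
```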

## Common Pitfalls

- **Treating Quickwit like a mutable database** — Quickwit is designed for append-mostly data. It does not support updating or deleting individual documents. If you need to correct data, delete the affected time range and re-ingest.
- **Ignoring split pruning in queries** — queries without timestamp bounds or tag filters will scan all splits, which is slow on large datasets. Always include time range filters and tag-based filters when possible to take advantage of Quickwit's split pruning.
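One way to make split pruning routine is to require time bounds on every search call. The sketch below assumes the `start_timestamp`/`end_timestamp` search parameters shown earlier (Unix epoch seconds); `searchParams` is a hypothetical helper, not a Quickwit API:

```typescript
// Build search params with mandatory time bounds so every query
// can benefit from split pruning. Timestamps are epoch seconds.
function searchParams(
  query: string,
  from: Date,
  to: Date,
  maxHits = 20,
) {
  return {
    query,
    max_hits: maxHits,
    start_timestamp: Math.floor(from.getTime() / 1000),
    end_timestamp: Math.floor(to.getTime() / 1000),
  };
}

// Usage (assumes the `qw` axios client from Client Setup):
// const res = await qw.get("/app-logs/search", {
//   params: searchParams("severity:ERROR", new Date("2026-03-17"), new Date("2026-03-18")),
// });
```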

## Anti-Patterns

**Using the service without understanding its pricing model.** Cloud services bill differently — per request, per GB, per seat. Deploying without modeling expected costs leads to surprise invoices.

**Hardcoding configuration instead of using environment variables.** API keys, endpoints, and feature flags change between environments. Hardcoded values break deployments and leak secrets.

**Ignoring the service's rate limits and quotas.** Every external API has throughput limits. Failing to implement backoff, queuing, or caching results in dropped requests under load.
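As one mitigation sketch, a retry wrapper with exponential backoff and jitter. `retryWithBackoff` and its parameters are hypothetical, not part of Quickwit or axios:

```typescript
// Retry an async operation with exponential backoff and full jitter.
// `retryWithBackoff` is an illustrative helper, not a library API.
async function retryWithBackoff<T>(
  op: () => Promise<T>,
  maxAttempts = 5,
  baseDelayMs = 200,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await op();
    } catch (err) {
      if (attempt + 1 >= maxAttempts) throw err;
      // Full jitter: sleep a random duration in [0, base * 2^attempt).
      const delay = Math.random() * baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}

// Usage (assumes `qw` and `ndjson` from earlier sections):
// await retryWithBackoff(() =>
//   qw.post("/app-logs/ingest", ndjson, {
//     headers: { "Content-Type": "application/x-ndjson" },
//   })
// );
```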

**Treating the service as always available.** External services go down. Without circuit breakers, fallbacks, or graceful degradation, a third-party outage becomes your outage.

**Coupling your architecture to a single provider's API.** Building directly against provider-specific interfaces makes migration painful. Wrap external services in thin adapter layers.
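A thin adapter along those lines might look like this. The `SearchBackend` interface and `QuickwitBackend` class are hypothetical names for illustration; the endpoints match the REST paths used earlier in this document:

```typescript
// A minimal provider-agnostic search interface; names are hypothetical.
interface SearchBackend {
  ingest(index: string, docs: object[]): Promise<void>;
  search(index: string, query: string, maxHits: number): Promise<object[]>;
}

// Quickwit-specific details live behind the interface, so swapping
// providers later only touches this adapter, not the callers.
class QuickwitBackend implements SearchBackend {
  constructor(private baseUrl: string) {}

  async ingest(index: string, docs: object[]): Promise<void> {
    const ndjson = docs.map((d) => JSON.stringify(d)).join("\n");
    await fetch(`${this.baseUrl}/api/v1/${index}/ingest`, {
      method: "POST",
      headers: { "Content-Type": "application/x-ndjson" },
      body: ndjson,
    });
  }

  async search(index: string, query: string, maxHits: number): Promise<object[]> {
    const url =
      `${this.baseUrl}/api/v1/${index}/search` +
      `?query=${encodeURIComponent(query)}&max_hits=${maxHits}`;
    const resp = await fetch(url);
    const body = await resp.json();
    return body.hits ?? [];
  }
}
```

Callers depend only on `SearchBackend`, so a future migration means writing one new adapter class rather than rewriting every call site.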
