Worker Threads
Worker threads for offloading CPU-intensive tasks without blocking the Node.js event loop
You are an expert in Node.js Worker Threads for offloading CPU-intensive computation without blocking the main event loop.
Overview
The worker_threads module enables true parallel execution of JavaScript in Node.js. Unlike child processes, worker threads share memory space and can transfer data efficiently via SharedArrayBuffer and the structured clone algorithm. They are the correct solution for CPU-bound work — image processing, parsing, compression, cryptographic operations — that would otherwise block the event loop and degrade server responsiveness.
Core Concepts
Main Thread vs Worker
The main thread creates workers using new Worker(filename). Each worker runs in its own V8 isolate with its own event loop but shares the same process. Communication happens through message passing or shared memory.
Message Passing
Workers and the main thread exchange data via postMessage() and 'message' events. Data is copied using the structured clone algorithm by default, or transferred zero-copy for ArrayBuffer and MessagePort objects.
SharedArrayBuffer
For high-throughput scenarios, SharedArrayBuffer provides true shared memory. Access must be coordinated using Atomics to avoid data races.
Worker Pool
Spawning a worker per request is expensive. A worker pool pre-creates a fixed number of workers and dispatches tasks to them, amortizing the startup cost.
Implementation Patterns
Basic worker with message passing
// main.js
const { Worker } = require('node:worker_threads');

function runHash(data) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./hash-worker.js', {
      workerData: data,
    });
    worker.on('message', resolve);
    worker.on('error', reject);
    worker.on('exit', (code) => {
      if (code !== 0) reject(new Error(`Worker exited with code ${code}`));
    });
  });
}

// hash-worker.js
const { parentPort, workerData } = require('node:worker_threads');
const crypto = require('node:crypto');

const hash = crypto.createHash('sha256').update(workerData).digest('hex');
parentPort.postMessage(hash);
Reusable worker pool
const { Worker } = require('node:worker_threads');
const os = require('node:os');

class WorkerPool {
  #workers = [];
  #queue = [];
  #file;

  constructor(workerFile, size = os.availableParallelism()) {
    this.#file = workerFile;
    for (let i = 0; i < size; i++) {
      this.#addWorker();
    }
  }

  #addWorker() {
    const worker = new Worker(this.#file);
    worker.on('message', (result) => {
      worker._callback(null, result);
      worker._callback = null;
      this.#assign(worker);
    });
    worker.on('error', (err) => {
      // An errored worker thread is dead: fail its pending task
      // and replace the worker rather than reusing it.
      if (worker._callback) worker._callback(err);
      worker._callback = null;
      this.#workers.splice(this.#workers.indexOf(worker), 1);
      this.#addWorker();
    });
    this.#workers.push(worker);
    this.#assign(worker);
  }

  #assign(worker) {
    const task = this.#queue.shift();
    if (task) {
      worker._idle = false;
      worker._callback = task.callback;
      worker.postMessage(task.data);
    } else {
      worker._idle = true;
    }
  }

  run(data) {
    return new Promise((resolve, reject) => {
      const callback = (err, result) => (err ? reject(err) : resolve(result));
      const idle = this.#workers.find((w) => w._idle);
      if (idle) {
        idle._idle = false;
        idle._callback = callback;
        idle.postMessage(data);
      } else {
        this.#queue.push({ data, callback });
      }
    });
  }

  async destroy() {
    await Promise.all(this.#workers.map((w) => w.terminate()));
  }
}
Transferring ArrayBuffers for zero-copy
// Main thread
const buffer = new ArrayBuffer(1024 * 1024);
const view = new Uint8Array(buffer);
// fill view with data...
worker.postMessage(buffer, [buffer]); // transfer, not copy
// buffer.byteLength is now 0 in this thread

// Worker
parentPort.on('message', (buffer) => {
  const view = new Uint8Array(buffer);
  // process the data, then transfer back
  parentPort.postMessage(buffer, [buffer]);
});
SharedArrayBuffer with Atomics
const shared = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 2);
const view = new Int32Array(shared);
// Main thread increments
Atomics.add(view, 0, 1);
// Worker can read the latest value
const value = Atomics.load(view, 0);
// Worker waits for a signal
Atomics.wait(view, 1, 0); // blocks until view[1] !== 0
// Main thread signals
Atomics.store(view, 1, 1);
Atomics.notify(view, 1);
Best Practices
- Use a worker pool sized to os.availableParallelism() rather than spawning workers per request.
- Transfer ArrayBuffer objects instead of copying when the sender no longer needs the data.
- Keep worker files self-contained — avoid importing heavy dependencies that inflate startup time.
- Use worker.ref() and worker.unref() to control whether idle workers keep the process alive.
- Measure actual throughput; the overhead of serialization and message passing can outweigh the parallelism benefit for tasks under a few milliseconds.
Common Pitfalls
- Using workers for I/O-bound tasks — Node.js already handles I/O concurrently via the event loop; workers add overhead without benefit for network or disk calls.
- Forgetting to handle worker errors — an uncaught exception terminates the worker; the main thread only learns why it died if it listens for the 'error' event.
- Assuming shared global state — each worker has its own global scope; module-level variables are not shared between threads.
- Data races with SharedArrayBuffer — reading and writing shared memory without Atomics leads to subtle, hard-to-reproduce bugs.
- Leaking workers — not calling worker.terminate() on shutdown leaves threads running and prevents the process from exiting cleanly.
Anti-Patterns
Over-engineering for hypothetical scale. Building for millions of users when you have hundreds adds complexity without value. Solve today's problems first.
Ignoring the existing ecosystem. Reinventing functionality that mature libraries already provide well wastes time and introduces unnecessary risk.
Premature abstraction. Creating elaborate frameworks and utilities before you have enough concrete cases to know what the abstraction should look like produces the wrong abstraction.
Neglecting error handling at boundaries. Internal code can trust its inputs, but system boundaries (user input, APIs, file I/O) require defensive validation.
Skipping documentation for obvious code. What is obvious to you today will not be obvious to your colleague next month or to you next year.
Related Skills
Child Processes
Child process management patterns for spawning, communicating with, and controlling external processes
Clustering
Cluster module patterns for scaling Node.js applications across multiple CPU cores
Error Handling
Comprehensive error handling strategies for robust and debuggable Node.js applications
Event Emitter
EventEmitter patterns for building decoupled, event-driven architectures in Node.js
File System
Modern fs/promises patterns for safe, efficient file system operations in Node.js
Native Modules
N-API and native addon patterns for extending Node.js with high-performance C/C++ and Rust modules