By Stanley Ulili, Rachel Lee and Vinayak Baranwal

The author selected Open Sourcing Mental Illness to receive a donation as part of the Write for DOnations program.
Node.js runs JavaScript code in a single thread, which means that your code can only do one task at a time. However, Node.js itself is multithreaded and provides hidden threads through the libuv library, which handles I/O operations like reading files from a disk or network requests. Through the use of hidden threads, Node.js provides asynchronous methods that allow your code to make I/O requests without blocking the main thread.
Although Node.js has hidden threads, you cannot use them to offload CPU-intensive tasks, such as complex calculations, image resizing, or video compression. Since JavaScript is single-threaded, when a CPU-intensive task runs, it blocks the main thread and no other code executes until the task completes. Without using other threads, the only way to speed up a CPU-bound task is to increase the processor speed.
However, in recent years, CPUs haven’t been getting faster. Instead, computers are shipping with extra cores, and it’s now more common for computers to have 8 or more cores. Despite this trend, your code will not take advantage of the extra cores on your computer to speed up CPU-bound tasks or avoid blocking the main thread, because JavaScript is single-threaded.
To remedy this, Node.js introduced the worker_threads module, which allows you to create threads and execute multiple JavaScript tasks in parallel. Once a thread finishes a task, it sends a message to the main thread that contains the result of the operation so that it can be used with other parts of the code. The advantage of using worker_threads is that CPU-bound tasks don’t block the main thread and you can divide and distribute a task to multiple workers to optimize it.
In this tutorial, you’ll create a Node.js app with a CPU-intensive task that blocks the main thread. Next, you will use the worker_threads module to offload the CPU-intensive task to another thread to avoid blocking the main thread. Finally, you will divide the CPU-bound task and have four threads work on it in parallel to speed up the task. You’ll also learn about worker pools, resource limits, error handling, monitoring, and production deployment strategies.
To complete this tutorial, you will need:
A multi-core system with four or more cores. You can still follow the tutorial from Steps 1 through 6 on a dual-core system. However, Step 7 requires four cores to see the performance improvements.
A Node.js v20 or newer development environment. If you’re on Ubuntu, install the recent version of Node.js by following How To Install Node.js on Ubuntu. If you’re on another operating system, see How to Install Node.js and Create a Local Development Environment.
You can use an older version of Node.js for many multithreading features, but this tutorial uses the current LTS (Long Term Support) version for best compatibility and support.
A good understanding of the event loop, callbacks, and promises in JavaScript, which you can find in our tutorial, Understanding the Event Loop, Callbacks, Promises, and Async/Await in JavaScript.
Basic knowledge of how to use the Express web framework. Check out our guide, How To Get Started with Node.js and Express.
Node.js’s default single-threaded execution model is ideal for I/O-bound operations, but CPU-bound work, such as cryptographic hashing, media processing, or machine learning inference, blocks the event loop unless you offload it. The worker_threads module enables you to launch additional threads within the same process, sharing memory with configurable limits.
Core concepts:

- Main thread (event loop): dispatches work
- Worker: processes CPU tasks
- Pool: recycles workers across multiple jobs
- Messaging: postMessage/on('message') or MessageChannel

Before you start writing CPU-bound tasks and offloading them to separate threads, you first need to understand what processes and threads are and the differences between them. Most importantly, you’ll review how processes and threads execute on single-core and multi-core systems.
A process is a running program in the operating system. It has its own memory and cannot see or access the memory of other running programs. It also has an instruction pointer, which indicates the instruction currently being executed; a process can only execute one task at a time.
When you run a program using the node command, you create a process. The operating system allocates memory for the program, locates the program executable on your computer’s disk, and loads the program into memory. It then assigns it a process ID and begins executing the program. At that point, your program has now become a process.
When the process is running, its process ID is added to the process list of the operating system and can be seen with tools like htop, top, or ps. The tools provide more details about the processes, as well as options to stop or prioritize them.
On a single-core machine, processes execute concurrently. That is, the operating system switches between the processes at regular intervals. For example, process A executes for a limited time, then its state is saved and the OS schedules process B to execute for a limited time, and so on, until all the tasks have finished. From the output, it might look like each process ran to completion, but in reality the OS scheduler is constantly switching between them.
On a multi-core system, assuming you have four cores, the OS schedules each process on its own core, and all of them execute at the same time. This is known as parallelism. However, if you create four more processes (bringing the total to eight), each core will execute two processes concurrently until they finish.
Threads are like processes: they have their own instruction pointer and can execute one JavaScript task at a time. Unlike processes, threads do not have their own memory; instead, they live inside a process’s memory. A single process can contain multiple threads, created with the worker_threads module, that execute JavaScript code in parallel. Furthermore, threads can communicate with one another through message passing or by sharing data in the process’s memory. This makes them lightweight compared to processes, since spawning a thread does not request more memory from the operating system.
When it comes to execution, threads behave much like processes. If you have multiple threads running on a single-core system, the operating system switches between them at regular intervals, giving each thread a chance to execute on the single CPU. On a multi-core system, the OS schedules the threads across all cores and executes the JavaScript code at the same time. If you end up creating more threads than there are cores available, each core will execute multiple threads concurrently.
Node.js does provide extra threads, which is why it’s considered multithreaded. Node.js uses the libuv library, which provides a pool of four extra threads (by default) to a Node.js process. With these threads, I/O operations are handled separately, and when they finish, the event loop adds the callback associated with the I/O task to a callback queue that is processed in the event loop’s poll phase (I/O callbacks are not microtasks). When the call stack in the main thread is clear, the callback is pushed onto the call stack and executed. To be clear: the callback associated with a given I/O task does not execute in parallel; however, the task itself of reading a file or making a network request happens in parallel with the help of the threads. Once the I/O task finishes, the callback runs in the main thread.
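To see the hidden threads at work, consider a short sketch (separate from the app you’ll build in this tutorial): crypto.pbkdf2 is one of the operations Node.js dispatches to libuv’s thread pool, so the main thread keeps running while the key derivation happens elsewhere.

import { pbkdf2 } from 'node:crypto';

console.log('start');

// Dispatched to a libuv pool thread; this call returns immediately.
pbkdf2('secret', 'salt', 100_000, 64, 'sha512', (err, derivedKey) => {
  if (err) throw err;
  // The callback runs back on the main thread after the pool thread finishes.
  console.log('derived:', derivedKey.toString('hex').slice(0, 16));
});

console.log('end'); // logs before 'derived': the main thread was never blocked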
In addition to these four threads, the V8 engine also provides two threads for handling things like automatic garbage collection. This brings the total number of threads in a process to seven: one main thread, four Node.js threads, and two V8 threads.
As discussed previously, the four Node.js threads are used for I/O operations to make them non-blocking. They work well for that task, and creating threads yourself for I/O operations may even worsen your application performance. The same cannot be said about CPU-bound tasks. A CPU-bound task does not make use of any extra threads available in the process and blocks the main thread.
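As a minimal sketch of the problem (this loop stands in for real work such as image resizing or video compression), a synchronous CPU-bound function keeps the event loop stuck until it returns:

// Synchronous and CPU-bound: no timers, I/O callbacks, or requests are
// served on the main thread until this function returns.
function countToTenBillion() {
  let counter = 0;
  while (counter < 10_000_000_000) {
    counter++;
  }
  return counter;
}

console.log(countToTenBillion()); // everything else waits until this logs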
In this step, you’ll set up your project environment from scratch. You’ll begin by creating a new project directory and initializing it with npm. After that, you’ll enable ES modules for modern JavaScript support. Finally, you will install all the required dependencies, such as Express and worker pool libraries, to prepare for multithreading in Node.js.
To begin, create and move into the project directory:
- mkdir multi-threading_demo
- cd multi-threading_demo
The mkdir command creates a directory and the cd command changes the working directory to the newly created one.
Following this, initialize the project directory with npm using the npm init command:
- npm init -y
The -y option accepts all the default options.
Next, enable ES modules by adding "type": "module" to your package.json:
- npm pkg set type=module
Next, install the project dependencies. These libraries enable server creation, worker-thread pooling, queue management, and metrics collection, which is everything you need for multithreading and monitoring in your Node.js application:
- npm install express piscina poolifier p-queue prom-client
You will use Express to create a server application that has blocking and non-blocking endpoints. Piscina and Poolifier are worker pool libraries that help manage worker threads efficiently. The p-queue package provides queue management for handling concurrent requests, and prom-client enables metrics collection for monitoring.
Node.js includes the worker_threads module by default, so you don’t need to install it separately.
Your package.json should now look similar to this:
{
  "name": "multi-threading_demo",
  "version": "1.0.0",
  "type": "module",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.18.2",
    "piscina": "^4.0.0",
    "poolifier": "^2.0.0",
    "p-queue": "^8.0.0",
    "prom-client": "^15.0.0"
  }
}
Workers should start quickly, exchange messages cleanly, and terminate after delivering results. In this section, you’ll create a worker that performs CPU-intensive work and communicates with the main thread.
First, create a worker.js file that will contain the CPU-intensive task:
import { parentPort, workerData } from 'node:worker_threads';
import { createHash } from 'node:crypto';

function hashBuffer(payload) {
  const hash = createHash('sha256');
  hash.update(payload, 'utf8');
  return hash.digest('hex');
}

try {
  const result = hashBuffer(workerData.payload);
  parentPort.postMessage({ status: 'ok', result });
} catch (error) {
  parentPort.postMessage({ status: 'error', message: error.message });
}
This worker receives data through workerData, performs a SHA-256 hash operation, and sends the result back to the main thread using postMessage.
Now, create the main server file index.js:
import express from 'express';
import { Worker } from 'node:worker_threads';
import { fileURLToPath } from 'node:url';
import { join, dirname } from 'node:path';

const app = express();
app.use(express.json({ limit: '1mb' }));

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const workerPath = join(__dirname, 'worker.js');

app.get('/non-blocking/', (req, res) => {
  res.status(200).send('This page is non-blocking');
});

app.post('/api/hash', async (req, res, next) => {
  try {
    const hash = await runWorker({ payload: req.body.text });
    res.json({ hash });
  } catch (error) {
    next(error);
  }
});

function runWorker(workerData) {
  return new Promise((resolve, reject) => {
    let settled = false;

    const worker = new Worker(workerPath, {
      workerData,
      resourceLimits: {
        maxOldGenerationSizeMb: 64,
        maxYoungGenerationSizeMb: 16,
        stackSizeMb: 4
      }
    });

    const timeout = setTimeout(() => {
      if (settled) return;
      settled = true;
      worker.terminate();
      reject(new Error('Worker timeout'));
    }, 10_000);

    function safeResolve(value) {
      if (settled) return;
      settled = true;
      clearTimeout(timeout);
      worker.terminate();
      resolve(value);
    }

    function safeReject(error) {
      if (settled) return;
      settled = true;
      clearTimeout(timeout);
      worker.terminate();
      reject(error);
    }

    worker.once('message', (message) => {
      if (message.status === 'ok') {
        safeResolve(message.result);
      } else {
        safeReject(new Error(message.message));
      }
    });

    worker.once('error', (error) => {
      safeReject(error);
    });

    worker.once('exit', (code) => {
      if (code !== 0) {
        safeReject(new Error(`Worker exited with code ${code}`));
      }
    });
  });
}

app.use((error, req, res, _next) => {
  res.status(500).json({ error: error.message });
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`App listening on port ${port}`);
});
This implementation includes:

- resourceLimits that cap the worker’s heap and stack so a heavy job cannot exhaust process memory
- A 10-second timeout that terminates workers that never respond
- A settled flag so the promise resolves or rejects exactly once, even if multiple events fire
- Handlers for the worker’s message, error, and exit events
- An Express error-handling middleware that surfaces worker failures as HTTP 500 responses
Note: If you are following the tutorial on a remote server, you can use port forwarding to test the app in the browser.
While the Express server is still running, open another terminal on your local computer and enter the following command:
ssh -L 3000:localhost:3000 sammy@IP_ADDRESS
Replace sammy with your SSH username and IP_ADDRESS with the public IPv4 address of your Droplet. Upon connecting to the server, navigate to http://localhost:3000/non-blocking on your local machine’s web browser. Keep the second terminal open throughout the remainder of this tutorial.
Test the endpoint:
- curl -X POST http://localhost:3000/api/hash \
- -H 'Content-Type: application/json' \
- -d '{"text":"multithreading keeps Node fast"}'
You should receive a response with the hash value:

Output
{"hash":"36fe0f5d8df0..."}
Creating a new worker for each request is inefficient. Worker pools reuse threads across multiple jobs, reducing startup overhead and improving performance. This section covers two popular pool libraries: Piscina and Poolifier.
| Library | Primary Use Case | Notable Features |
|---|---|---|
| Piscina | High-throughput HTTP/queue workloads | Automatic pooling, resourceLimits, asyncResource tracking |
| Poolifier | Dynamic pool sizing & prioritization | Thread/Cluster pools, job priority queues, observability hooks |
| Native API | Fine-grained control, minimal dependency | Manual worker recycling, ideal for low-level or security-first apps |
On a 4‑vCPU machine, hashing 10,000 small messages typically drops from ~2.3 seconds in a single-threaded implementation to ~0.7 seconds with a four-thread pool. Actual numbers vary by workload, but this illustrates the parallelism gains Worker Threads enable.
Create a pool configuration file named pool.js:
import Piscina from 'piscina';
import { availableParallelism } from 'node:os';
import { resolve, dirname } from 'node:path';
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const piscina = new Piscina({
  // Piscina workers export a default function, so they need their own file
  // (pool-worker.js, shown below) rather than the worker.js used earlier.
  filename: resolve(__dirname, 'pool-worker.js'),
  minThreads: 2,
  // At least four threads, or one per logical CPU if more are available.
  maxThreads: Math.max(4, availableParallelism()),
  idleTimeout: 30_000,
  resourceLimits: {
    maxOldGenerationSizeMb: 80
  }
});

export async function hashWithPool(payload) {
  const message = await piscina.run({ payload });
  if (message?.status === 'ok') {
    return message.result;
  }
  throw new Error(message?.message || 'Worker failed');
}

export { piscina };
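Piscina’s worker contract differs from the raw worker_threads API: a Piscina worker exports a default function, and whatever that function returns is delivered back to piscina.run(). The worker.js from the previous step, which relies on parentPort and workerData, won’t work here, which is why the pool points at a separate file. Here is a minimal sketch of that worker (the pool-worker.js filename is this tutorial’s choice) that preserves the { status, result } envelope hashWithPool expects:

import { createHash } from 'node:crypto';

// Piscina passes the argument given to piscina.run() straight to this function
// and sends its return value back to the pool; no parentPort.postMessage needed.
export default function hashTask({ payload }) {
  try {
    const hash = createHash('sha256');
    hash.update(payload, 'utf8');
    return { status: 'ok', result: hash.digest('hex') };
  } catch (error) {
    return { status: 'error', message: error.message };
  }
}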
Update your index.js to use the pool:
import express from 'express';
import { hashWithPool } from './pool.js';

const app = express();
app.use(express.json());

app.get('/non-blocking/', (req, res) => {
  res.status(200).send('This page is non-blocking');
});

app.post('/api/pool/hash', async (req, res, next) => {
  try {
    const hash = await hashWithPool(req.body.text);
    res.json({ hash });
  } catch (error) {
    next(error);
  }
});

const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`App listening on port ${port}`);
});
Piscina automatically manages worker lifecycle, queues tasks when all workers are busy, and recycles workers to prevent memory leaks.
In production, a single misbehaving or unbounded worker can exhaust memory or CPU. Setting strict compute and memory limits ensures that one heavy job cannot degrade the entire service.
Avoid Worker Threads when your bottleneck is I/O rather than CPU. Database queries, network requests, filesystem reads, and cache lookups benefit more from async I/O than from parallel threads. Using Worker Threads for these operations adds overhead without improving latency. Threads should primarily be used for CPU-heavy tasks such as hashing, compression, image processing, or WebAssembly execution.
Always set resourceLimits when creating workers:
const worker = new Worker(workerPath, {
  workerData: { payload: data },
  resourceLimits: {
    maxOldGenerationSizeMb: 64,   // heap size limit
    maxYoungGenerationSizeMb: 16, // young-generation limit
    stackSizeMb: 4                // stack size limit
  }
});
Enforce per-task deadlines, especially when executing untrusted input:
function runWorkerWithTimeout(workerData, timeoutMs = 10_000) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerPath, { workerData });

    const timeout = setTimeout(() => {
      worker.terminate();
      reject(new Error('Worker timeout'));
    }, timeoutMs);

    worker.once('message', (message) => {
      clearTimeout(timeout);
      resolve(message);
      worker.terminate();
    });

    worker.once('error', (error) => {
      clearTimeout(timeout);
      worker.terminate();
      reject(error);
    });
  });
}
Reject large payloads early:
app.use(express.json({ limit: '1mb' }));

app.post('/api/hash', async (req, res, next) => {
  if (!req.body.text || req.body.text.length > 1_000_000) {
    return res.status(400).json({ error: 'Payload too large' });
  }
  // ... process request
});
Security Best Practices: Use secure defaults in production (environment variables for limits and timeouts) and audit your dependencies with npm audit. Avoid eval or dynamic import() on untrusted content in workers.
Keep request/response lifetimes short and move heavy lifting to background flows. A common production pattern is to return 202 Accepted and let clients poll (or receive a callback) for results.

Example with the queue pattern:
import PQueue from 'p-queue';
import { hashWithPool } from './pool.js';

const queue = new PQueue({
  concurrency: 8,
  intervalCap: 100,
  interval: 1000
});

export function enqueueHashJob(job) {
  return queue.add(() => hashWithPool(job.text));
}
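Below is a hedged sketch of the 202 Accepted flow on top of this queue; the in-memory jobs Map, the randomUUID job ids, and the ./queue.js filename for the module above are all this example’s assumptions:

import { randomUUID } from 'node:crypto';
import { enqueueHashJob } from './queue.js'; // assumes the module above is saved as queue.js

const jobs = new Map(); // jobId -> { state, result?, error? } (in-memory, demo only)

app.post('/api/jobs', (req, res) => {
  const id = randomUUID();
  jobs.set(id, { state: 'pending' });

  enqueueHashJob({ text: req.body.text })
    .then((result) => jobs.set(id, { state: 'done', result }))
    .catch((error) => jobs.set(id, { state: 'failed', error: error.message }));

  // Respond immediately; the client polls the status URL for the result.
  res.status(202).json({ id, statusUrl: `/api/jobs/${id}` });
});

app.get('/api/jobs/:id', (req, res) => {
  const job = jobs.get(req.params.id);
  if (!job) return res.status(404).json({ error: 'Unknown job' });
  res.json(job);
});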
Managing workers in a production environment demands robust orchestration, including carefully handling their entire lifecycle. This involves initializing workers, monitoring their health, recovering from process crashes or unhandled errors, recycling threads when necessary, and ensuring graceful shutdown or restart to maintain application stability.
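Lifecycle management also covers shutdown. Here’s a minimal sketch, assuming you keep a reference to the HTTP server (const server = app.listen(...)) and reuse the piscina pool from earlier:

import { piscina } from './pool.js';

process.on('SIGTERM', async () => {
  server.close();          // stop accepting new connections
  await piscina.destroy(); // terminate pool workers; consult Piscina’s docs if you need to drain in-flight tasks first
  process.exit(0);
});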
Retry idempotent jobs with exponential backoff; mark non-idempotent tasks as failed:
async function runWithRetry(workerData, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await runWorker(workerData);
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise(resolve =>
        setTimeout(resolve, Math.pow(2, attempt) * 1000)
      );
    }
  }
}
Use AbortController to cancel work when clients disconnect:
import { Worker } from 'node:worker_threads';

export function runCancelableTask(workerPath, workerData, signal) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(workerPath, { workerData });

    signal.addEventListener('abort', () => {
      worker.terminate();
      reject(new Error('Task aborted'));
    });

    worker.once('message', (message) => {
      resolve(message);
      worker.terminate();
    });
    worker.once('error', reject);
    worker.once('exit', (code) => {
      if (code !== 0) reject(new Error(`Exit code: ${code}`));
    });
  });
}
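To tie cancellation to client disconnects in Express, abort when the request’s connection closes. A small sketch, reusing workerPath from the earlier server file:

app.post('/api/cancelable-hash', async (req, res, next) => {
  const controller = new AbortController();
  // Abort the worker if the client goes away; aborting after the task
  // has settled is a no-op because the promise is already resolved.
  req.on('close', () => controller.abort());

  try {
    const message = await runCancelableTask(workerPath, { payload: req.body.text }, controller.signal);
    res.json(message);
  } catch (error) {
    next(error);
  }
});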
Restart workers after a fixed number of tasks or when memory usage exceeds thresholds:
const MAX_TASKS_PER_WORKER = 1000;
const workerTaskCounts = new WeakMap();

function createRecyclableWorker(workerPath, workerData) {
  const worker = new Worker(workerPath, { workerData });
  workerTaskCounts.set(worker, 0);
  return worker;
}

function recordWorkerTask(worker) {
  const current = workerTaskCounts.get(worker) ?? 0;
  const next = current + 1;
  workerTaskCounts.set(worker, next);

  if (next >= MAX_TASKS_PER_WORKER) {
    worker.terminate();
    workerTaskCounts.delete(worker);
  }
}
Important: Track recycling counters per worker. A single global taskCount does not track usage per worker and leads to misleading lifecycle behavior. The WeakMap in the example above maintains per-worker counters and avoids cross-worker interference; avoid examples that use a global counter, because they do not correctly model worker lifetimes.
Monitoring queue depth, CPU usage, event loop delay, and worker memory helps you detect saturation early and maintain reliable throughput under real workloads.
- Use perf_hooks.monitorEventLoopDelay() to detect saturation
- Track pool statistics such as queueSize, options.maxThreads, and completed task counts
- Sample process.resourceUsage() and capture heap snapshots
- Profile with node --inspect under load tests

Create a metrics endpoint in a new metrics.js file:
import client from 'prom-client';

export function createMetricsRegistry(piscina) {
  const register = new client.Registry();
  client.collectDefaultMetrics({ register });

  const queueGauge = new client.Gauge({
    name: 'worker_queue_size',
    help: 'Items waiting in queue'
  });
  register.registerMetric(queueGauge);

  // Sample the pool’s queue depth every five seconds; unref() keeps this
  // timer from holding the process open during shutdown.
  setInterval(() => {
    queueGauge.set(piscina.queueSize);
  }, 5000).unref();

  return register;
}
Expose /metrics in your server:
import { createMetricsRegistry } from './metrics.js';
import { piscina } from './pool.js';

const register = createMetricsRegistry(piscina);

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', register.contentType);
  res.end(await register.metrics());
});
When deploying multithreaded Node.js services, package your app with strict dependency locking and align your worker pool sizes with the actual vCPUs assigned to the container or server.
FROM node:20-bullseye-slim AS base
WORKDIR /app
ENV NODE_ENV=production
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
- docker build -t node-worker-app .
- docker run --cpus=2 --memory=512m -p 3000:3000 node-worker-app
- Set maxThreads per application instance to match the available vCPUs on your server. For example, on a 4 vCPU server or cloud droplet, configure a maximum of 4–6 worker threads. This provides strong throughput for CPU-bound tasks without starving Node’s main event loop or other system processes.
- Run the app under a process manager such as pm2 (with ecosystem configs and graceful reloads) or as a systemd service (Restart=on-failure, resource constraints, logging). This ensures automatic recovery from failures and efficient production restarts.
- If workers fail to start, inspect stderr output for stack traces or syntax errors, and double-check that the worker script path is accurate and correctly resolved using import.meta.url.
- Use AbortController to interrupt runaway tasks and keep the pool healthy.
- In Docker, use the --cpus and --cpu-shares flags to provide sufficient resources for your workload.

Tip: Enable verbose logging around worker startup, errors, and shutdowns to speed up diagnostics and reduce mean time to recovery.
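For instance, here is a minimal sketch of deriving the pool size from the container’s environment (WORKER_MAX_THREADS is a variable name this example invents, not a Node.js standard):

import os from 'node:os';
import Piscina from 'piscina';
import { fileURLToPath } from 'node:url';

// Prefer an explicit override; otherwise match the vCPUs this container can see.
const maxThreads = Number(process.env.WORKER_MAX_THREADS) || os.availableParallelism();

const pool = new Piscina({
  filename: fileURLToPath(new URL('./pool-worker.js', import.meta.url)),
  maxThreads
});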
Does Node.js support multithreading?
Yes. Node.js started as a single-threaded JavaScript runtime that relied on the libuv thread pool for I/O, but modern versions also provide the worker_threads module for running JavaScript in parallel threads inside the same process. You still have one main event loop that handles requests, while Worker Threads are used for CPU-heavy tasks that would otherwise block that loop.
What are Worker Threads in Node.js?
Worker Threads are additional JavaScript execution contexts that run in parallel with the main thread inside the same Node.js process. Each worker has its own event loop and memory, but can share data with the main thread through structured cloning, SharedArrayBuffer, and Atomics. This makes them ideal for CPU-bound operations such as hashing, compression, image processing, or WebAssembly workloads that must not block the main event loop.
How do I run tasks in parallel in Node.js?
To run tasks in parallel, you create Worker Threads and dispatch CPU-heavy jobs to them instead of executing those jobs directly on the main thread. For production, you typically use a worker pool (for example, Piscina or Poolifier) that maintains a fixed number of workers and a queue of jobs. The pool ensures that workers are reused across tasks, preventing startup overhead and helping you stay within CPU and memory limits on your Droplet or container.
When should I use Worker Threads vs the Cluster module?
Use the Cluster module (or multiple Node.js processes) when you want process-level isolation and are primarily scaling I/O-bound HTTP workloads behind a load balancer. Use Worker Threads when you need to keep a single process but offload CPU-heavy work, share memory efficiently, or integrate tightly with WebAssembly or native addons. In many production architectures, you combine both: several Node.js processes for horizontal scaling, and each process runs a small worker pool for CPU-bound jobs.
How do threads communicate in Node.js?
Threads communicate using message passing and, when needed, shared memory. The most common pattern is sending plain JavaScript objects via postMessage and listening for the message event in both the main thread and workers. For high-performance or low-level coordination, you can use SharedArrayBuffer together with Atomics to implement counters, progress flags, or wait/notify patterns without copying large buffers between threads.
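As a hedged, self-contained sketch of that shared-memory pattern (it uses an inline eval worker purely for brevity):

import { Worker } from 'node:worker_threads';

// A single shared 32-bit counter visible to both threads without copying.
const shared = new SharedArrayBuffer(4);
const counter = new Int32Array(shared);

// Inline worker source (eval mode runs as CommonJS, hence require).
const worker = new Worker(
  `const { workerData, parentPort } = require('node:worker_threads');
const view = new Int32Array(workerData);
Atomics.add(view, 0, 1); // race-free increment of the shared counter
parentPort.postMessage('done');`,
  { eval: true, workerData: shared }
);

worker.once('message', () => {
  console.log(Atomics.load(counter, 0)); // 1: the worker's write is visible here
});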
Is multithreading better than async I/O in Node.js?
Multithreading complements async I/O; it does not replace it. Async I/O, powered by libuv, is still the best choice for database queries, HTTP calls, and filesystem operations because it keeps the main event loop responsive without extra threads. Worker Threads are most valuable for CPU-bound work that cannot be made non-blocking with callbacks, promises, or async/await. For most web APIs, you will use async I/O for the request flow and Worker Threads only for specific CPU-heavy steps.
Can Node.js handle CPU-intensive tasks efficiently?
Yes, as long as you move CPU-intensive work off the main event loop and into Worker Threads. A well-sized worker pool can keep all CPU cores busy while the main thread remains responsive to new requests. To keep this efficient, you should configure resource limits, enforce timeouts, and monitor queue depth and CPU usage. For very large or sustained CPU workloads, you can also scale out horizontally across multiple Droplets or containers in addition to using threads.
How many worker threads should I start per CPU core?
A good starting point is one worker per logical CPU core, as reported by os.availableParallelism() or the equivalent helper from your worker pool library. From there, you adjust based on measurements: if queue latency remains high and CPU utilization is low, you can experiment with a slightly larger pool. If CPU is saturated and latency is still high, you should scale out with more instances rather than creating more threads than available cores.
Will Worker Threads speed up database or network-bound APIs?
No. Worker Threads are designed for CPU-bound workloads, not for accelerating I/O-bound operations. If your API is slow because of database queries or network calls, focus on indexing, caching, connection pooling, and query optimization. Offloading I/O-bound work to workers usually adds overhead without improving throughput or latency, and can make the system more complex than necessary.
Can I use Worker Threads in containers or serverless environments?
Yes, but you must respect the platform’s resource and lifecycle constraints. In containers, size your worker pool according to the vCPUs and memory assigned to the container, not the underlying host machine, and enforce timeouts so that stuck workers do not consume all resources. In serverless environments, be mindful of cold starts, execution time limits, and concurrency caps, because long-lived worker pools are better suited to services running on Droplets or Kubernetes rather than short-lived functions.
In this article, you built a Node.js application that demonstrates how CPU-bound work blocks the main thread, and how Worker Threads and worker pools keep the event loop responsive while utilizing all CPU cores efficiently.
You also learned about worker pools using Piscina and Poolifier, resource limits and security best practices, error handling and lifecycle management, monitoring with Prometheus, and Docker deployment strategies.
As a next step, see the Node.js worker_threads documentation to learn more about options. In addition, you can check out the piscina library, which allows you to create a worker pool for your CPU-intensive tasks. If you want to continue learning Node.js, see the tutorial series, How To Code in Node.js.