Multi-CPU Node JS: Cluster Module
Node JS is one of the most popular tools for building APIs and websites. It is single-threaded by design: your JavaScript code runs on a single thread (the event loop), no matter how many CPU cores the machine has.
This becomes a problem when thousands of users hit the API at once: responses slow down, requests get blocked behind other work, and a single core can reach 100% usage while the rest of the machine sits idle.
In this article, we are going to go through the Cluster module in Node JS, which allows us to create a Multi-CPU Node JS application. We will start by understanding how the Cluster module works, then dive into some code showing how it is used in Node JS, and finally look at a tool that lets you turn any Node JS app into a Multi-CPU app easily.
Prerequisites
In order to follow the tutorial, you need to have the following:
- Linux Machine (I am using Ubuntu 22.04)
- Latest Node JS on the Linux Machine (I am using v21.1.0)
What Is The Cluster Module?
The Cluster module in Node JS allows you to run multiple instances of the same application, typically one instance per CPU core available on the machine the app runs on.
We can do a quick demo by creating a new file called index.js and adding the following to it:
const cluster = require('node:cluster');

if (cluster.isPrimary) {
  console.log("Primary process");
  cluster.fork();
} else {
  console.log("Worker process");
}
First, we import the cluster module from the Node JS standard library. Then, we check whether the current process is the primary one. If it is, we print "Primary process" to the console and spawn a new worker process with cluster.fork(). If it is not the primary process, it is a worker process, so it prints "Worker process" to the console.
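If you run this file with node index.js, you should see something like the following, printed by the primary and the worker respectively:

Primary process
Worker process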
The way the Cluster module is used in Node JS is that we have one primary (or parent) process and as many worker processes as needed. The primary process decides which worker should handle each incoming request based on a scheduling algorithm; the default is the Round Robin approach (on every platform except Windows).
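If you ever need to control this behaviour yourself, the module exposes a cluster.schedulingPolicy setting (and an equivalent NODE_CLUSTER_SCHED_POLICY environment variable). As a rough sketch, not something you normally need to touch:

const cluster = require('node:cluster');

// SCHED_RR is the Round Robin policy (the default on most platforms);
// SCHED_NONE leaves the distribution of connections to the operating system.
// This must be set before the first worker is forked.
cluster.schedulingPolicy = cluster.SCHED_RR;

if (cluster.isPrimary) {
  cluster.fork();
}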
We can move on to a more complex example.
HTTP Server
In this section, we will create an HTTP server on multiple workers, using the example provided in the official Node JS Cluster module documentation.
Create a new file called http.js and add the following:
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').availableParallelism();
const process = require('node:process');

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
First, we check which process is executing the file. If it is the primary process, we create one worker with the cluster.fork() function for every CPU available on the system. The number of CPUs is retrieved using the node:os package, which is also part of the Node JS standard library.
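One note: os.availableParallelism() was only added in relatively recent Node releases. If you are on an older version, a common fallback (my own suggestion, not part of the official example) is os.cpus().length:

const os = require('node:os');

// Prefer availableParallelism() when it exists, otherwise
// fall back to counting the logical CPU cores.
const numCPUs = typeof os.availableParallelism === 'function'
  ? os.availableParallelism()
  : os.cpus().length;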
Next, if the current process is a worker process, we create an HTTP server that listens on port 8000 and simply returns a hello world response.
One thing to note is that we also print out the process ID (PID) in both the primary process and the worker processes.
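It is also worth pointing out that the exit handler in the example only logs that a worker died. In a real deployment you would usually respawn the worker so the pool stays at full size; a minimal sketch of that idea (not part of the official example) would be:

cluster.on('exit', (worker, code, signal) => {
  console.log(`worker ${worker.process.pid} died (${signal || code}), starting a new one`);
  // Fork a replacement so we keep one worker per CPU.
  cluster.fork();
});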
Run the code with the command node http.js in your terminal. Once it is running, you should see output similar to the following:
Primary 47258 is running
Worker 47263 started
Worker 47261 started
Worker 47268 started
Worker 47260 started
Your output will look like the one above, with two main differences:
- You will probably see different numbers than the ones shown. These are the PIDs.
- You might see fewer or more "Worker …. started" lines. This depends on how many CPUs your system has.
You can access the app by opening your browser and navigating to the following URL:
http://localhost:8000
You won’t know which worker process actually processed the request you made. To see that, we will make a small change in the code.
Change the response data from:
res.end('hello world\n');
To:
res.end(`The worker with process ID ${process.pid} processed the request`);
Next, reopen the app in your browser. Now, you should see the PID of the worker that processed your request. If you refresh (or keep refreshing), you might see a different PID.
To make this even better, open a new terminal and create a new file called test.js, then add the following to it:
(async () => {
  for (let i = 0; i < 10; i++) {
    const res = await fetch('http://localhost:8000');
    console.log(await res.text());
  }
})();
All this code does is call the app 10 times and print the response each time. If we run it with node test.js, we should see output similar to the one below:
$ node test.js
The worker with process ID 48576 processed the request
The worker with process ID 48578 processed the request
The worker with process ID 48576 processed the request
The worker with process ID 48578 processed the request
The worker with process ID 48576 processed the request
The worker with process ID 48578 processed the request
The worker with process ID 48576 processed the request
The worker with process ID 48578 processed the request
The worker with process ID 48576 processed the request
The worker with process ID 48578 processed the request
As you can see, different workers responded to the requests, so we can be sure that more than one process is running in the background.
Now to answer a question that you might have thought of…
How Are Multiple Processes Using The Same Port?
The way it works is that the primary process is the one actually listening on the port (8000 in this case); it accepts incoming connections and distributes them across the workers in a Round Robin fashion, as we mentioned previously.
I like to think of it as a distributed system, with the primary process acting as the load balancer that distributes requests across the different machines.
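If you want to watch this distribution from the primary process itself, the Cluster module also provides a message channel between the workers and the primary. The following is a small sketch of my own (not from the official docs example) where each worker reports to the primary whenever it handles a request:

const cluster = require('node:cluster');
const http = require('node:http');

if (cluster.isPrimary) {
  cluster.fork();
  cluster.fork();

  // The primary receives anything a worker sends via process.send().
  cluster.on('message', (worker, message) => {
    console.log(`worker ${worker.process.pid} handled a request`, message);
  });
} else {
  http.createServer((req, res) => {
    // Tell the primary that this worker just handled a request.
    process.send({ pid: process.pid });
    res.end('hello world\n');
  }).listen(8000);
}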
An Easier Method: PM2
Before we end this article, we will go through a simple tool that lets you convert any existing JavaScript app into a Multi-CPU application that uses the Cluster module.
The tool is called PM2, a process manager used to deploy and manage JavaScript applications in production; its documentation is worth checking out.
To start using PM2, install it globally by running:
npm install pm2 -g
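You can verify the installation by checking the installed version:
pm2 --version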
Next, we can create a new file called single.js, which runs a single HTTP server:
const http = require("node:http");

http.createServer((req, res) => {
  res.writeHead(200);
  res.end('Hello World');
}).listen(8000);

console.log("Accepting requests on port 8000");
Next, we need to create a configuration file for pm2 called ecosystem.config.js and add the following:
module.exports = {
  apps: [{
    script: "single.js",
    instances: "max",
    exec_mode: "cluster"
  }]
}
This is the configuration for PM2. All it does is run the single.js script we created in cluster mode, with the max number of instances it can create (which depends on how many CPUs you have).
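As a side note, PM2 can do the same thing without a configuration file by passing the -i flag directly on the command line:
pm2 start single.js -i max
We will stick with the configuration file approach here, since it is easier to keep in version control.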
We can start the PM2 process by running:
pm2 start ecosystem.config.js
After running the command, we will see a list of processes being created by PM2:
┌────┬───────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├────┼───────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ single │ default │ N/A │ cluster │ 49593 │ 0s │ 0 │ online │ 0% │ 56.8mb │ h....... │ disabled │
│ 1 │ single │ default │ N/A │ cluster │ 49594 │ 0s │ 0 │ online │ 0% │ 57.4mb │ h....... │ disabled │
│ 2 │ single │ default │ N/A │ cluster │ 49595 │ 0s │ 0 │ online │ 0% │ 56.9mb │ h....... │ disabled │
│ 3 │ single │ default │ N/A │ cluster │ 49596 │ 0s │ 0 │ online │ 0% │ 57.3mb │ h....... │ disabled │
└────┴───────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
PM2 created multiple instances of the same app we built in the single.js file. This is the simplest way to make a JS app Multi-CPU instantly, without any major changes to the app itself.
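A few PM2 commands come in handy once the processes are running:
- pm2 list shows all the processes PM2 is managing
- pm2 logs tails the logs of the running processes
- pm2 stop all and pm2 delete all stop the processes and remove them from PM2's list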
Conclusion
The Cluster module is a great way to use the full capabilities of your system: it spreads a Node JS app's work across all available CPU cores instead of leaving them idle.
PM2 is a great tool overall, not just for using the Cluster module easily, but also for managing and deploying applications in a production environment. It has been my favorite way to deploy apps and I have been using it for a while now.
I hope you learned something useful in the article and are able to speed up your Node JS apps with no major changes using PM2.
Thank you for reading and see you in the next one!